| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,707,675 | <p>How can I find the indefinite integral which is $$\int \frac{\ln(1-x)}{x}\text{d}x$$</p>
<p>I tried integration by parts, assigning $$\ln(1-x)\text{d}x = \text{d}v $$ and $$\frac{1}{x}=u,$$ but the only thing I got out of it is $$\int \frac{\ln(1-x)}{x^2}\text{d}x = foo $$ and that does not help me find $$\int \frac{\ln(1-x)}{x}\text{d}x$$</p>
| Robert Israel | 8,508 | <p>This is a "well-known" special function: $$\int \dfrac{\ln(1-x)}{x} \; dx = - \text{dilog}(1-x) $$
It is (provably) not an elementary function. In particular, there is no closed-form expression for it in terms of the functions familiar to the typical calculus student.</p>
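<p>A quick numerical sketch of this (using the series $\operatorname{Li}_2(a)=\sum_{k\ge1} a^k/k^2$ rather than any built-in dilogarithm, and the known closed form $\operatorname{Li}_2(1/2)=\pi^2/12-\ln^2 2/2$):</p>

```python
import math

# Numerically check, at a sample point, that the integral of
# ln(1-x)/x from 0 to a equals -Li_2(a), where Li_2 is the
# dilogarithm series sum_{k>=1} a^k / k^2.  (Illustrative sketch;
# the integrand has a removable singularity at x = 0 with limit -1.)

def integrand(x):
    if x == 0.0:
        return -1.0          # limit of ln(1-x)/x as x -> 0
    return math.log(1.0 - x) / x

def simpson(f, a, b, n=2000):  # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

a = 0.5
integral = simpson(integrand, 0.0, a)
dilog = sum(a**k / k**2 for k in range(1, 200))   # series for Li_2(1/2)

# Li_2(1/2) also has the closed form pi^2/12 - ln(2)^2/2
closed = math.pi**2 / 12 - math.log(2)**2 / 2
```

<p>The three numbers agree, consistent with $\int_0^a \frac{\ln(1-x)}{x}\,dx=-\operatorname{Li}_2(a)$.</p>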
|
97,131 | <p>I have the following problem:</p>
<p>I have a convex hull $\Omega$ defined by a set of n-dimensional hyperplanes $S = [(n_1,d_1), (n_2,d_2),...,(n_k,d_k)]$ such that a point $p \in \Omega$ if $n_i^T p \geq d_i \quad \forall (n_i,d_i) \in S $. Now I have a "joining" hyperplane $(n_{k+1},d_{k+1})$ and I want to know if this hyperplane "modifies the shape" of the convex hull and, in that case, which hyperplanes of $S \cup \{(n_{k+1},d_{k+1})\}$ are not necessary anymore because they become redundant.</p>
<p>Trivial example with one dimension:</p>
<p>My convex hull is described by the inequality $ 3 \leq x \leq 5$ so </p>
<p>$S = [(1,3),(-1,-5)]$</p>
<p>The joining hyperplane is the inequality $ x \geq 4$ so the resulting convex hull should be
$ 4 \leq x \leq 5$</p>
<p>$S = [(1,4),(-1,-5)]$</p>
<p>returning $(1,3)$.</p>
<p>Now I would like the same thing generalized to n dimensions. I can find algorithms for up to 3 dimensions, but they do not generalize.</p>
<p>Do you have any hints or pointers on how I can find a solution to this problem?</p>
<p>p.s. I apologize for the sloppy description, I am not a mathematician. Please feel free to ask for more details.</p>
<p>Kind regards.</p>
| Yoav Kallus | 20,186 | <p>There are two ways to represent a convex polytope: as the convex hull of its vertices, or as the intersection of the half-spaces whose boundaries contain its faces. If you store both of these representations, checking whether a new constraint is redundant is easy: if all current vertices satisfy it, then so do all points in the convex hull. The remaining problems are (a) how large is the vertex representation? -- in general it can be exponential in the number of constraints -- and (b) how to update the vertex representation when a new constraint is relevant. In your situation, where constraints are added one at a time, it may or may not be more efficient to maintain the vertex representation as well, depending on the details.</p>
<p>P.S. Also note that by duality, the problem you describe is equivalent to checking whether a new point lies in the convex hull of a set of old points. So as you search the literature, you might have more luck finding the dual version treated.</p>
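<p>A minimal sketch of the vertex test above in Python (the square and the two sample constraints are made-up examples, not from the question):</p>

```python
# Sketch of the redundancy test described above: a new half-space
# {p : n.p >= d} is redundant iff every current vertex already
# satisfies it (then so does every point of the hull).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_redundant(vertices, n, d):
    """True iff the constraint n.p >= d cuts off no vertex."""
    return all(dot(n, v) >= d for v in vertices)

# vertices of the unit square [0,1]^2 (toy example)
square = [(0, 0), (1, 0), (0, 1), (1, 1)]

r1 = is_redundant(square, (1, 1), -1)   # x + y >= -1 cuts nothing
r2 = is_redundant(square, (1, 0), 0.5)  # x >= 0.5 cuts off (0,0), (0,1)
```

<p>When the test fails (as for the second constraint), the constraint is relevant and the vertex representation has to be updated.</p>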
|
48,746 | <p>Let's assume that I have some particular signal on the finite time interval which is described by function <span class="math-container">$f(t)$</span>. It could be, for instance, a rectangular pulse with amplitude <span class="math-container">$a$</span> and period T; Gauss function with <span class="math-container">$\sigma$</span> and <span class="math-container">$a$</span> or something else. </p>
<p>Now I need to generate a signal that consists of randomly appearing functions <span class="math-container">$f(t)$</span> with random parameters (each parameter drawn from some specified range). If <span class="math-container">$f(t)$</span> is a rectangular function, the generated signal should consist of randomly generated rectangles appearing at random moments of time (rectangles should not overlap).
Can anyone suggest what is the best way to do it in Mathematica?</p>
<p>Example:<img src="https://i.stack.imgur.com/5Xs78.png" alt="rectangles"></p>
| Community | -1 | <p>Sounds like what you actually need after your edit is a way to smooth a list of data while keeping the endpoints fixed. Here's a dumb approach that will work with any "symmetrical" smoothing filter, including <code>GaussianFilter</code>, <code>MeanFilter</code>, even <code>MedianFilter</code>. It won't work with <code>ExponentialMovingAverage</code>, though, because that's not symmetrical, although it should if you average the results from <code>ExponentialMovingAverage</code> and <code>Reverse@ExponentialMovingAverage@Reverse</code>.</p>
<pre><code>smooth[list_, filter_] :=
Take[filter[Join[
(2 First@list - #) & /@ Reverse@Rest@list,
list,
(2 Last@list - #) & /@ Reverse@Most@list]],
{Length@list, 2 Length@list - 1}]
</code></pre>
<p>All it does is it extends the data in a "flipped" form about both endpoints -- for example, $[1,2,10]$ will become $[\color{grey}{-8,0},1,2,10,\color{grey}{18,19}]$ -- then smooths <em>that</em>, and drops the extra entries.</p>
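<p>The same reflect-smooth-crop idea, sketched in Python with a width-3 moving average standing in for the symmetric filter (illustrative only; <code>GaussianFilter</code> would give different interior values but the endpoints stay fixed the same way):</p>

```python
# Reflect the data (oddly) about both endpoints, smooth with a
# symmetric filter -- here a width-3 moving average -- then crop
# back to the original length.  Endpoints come out unchanged.

def smooth(data, window=3):
    first, last = data[0], data[-1]
    left = [2 * first - x for x in reversed(data[1:])]    # flip about first point
    right = [2 * last - x for x in reversed(data[:-1])]   # flip about last point
    extended = left + data + right                        # e.g. [-8,0,1,2,10,18,19]
    half = window // 2
    start = len(left)                                     # index of original first point
    out = []
    for i in range(start, start + len(data)):
        win = extended[i - half:i + half + 1]
        out.append(sum(win) / len(win))
    return out

result = smooth([1, 2, 10])
```

<p>For <code>[1, 2, 10]</code> the extended list is exactly the one described above, and the smoothed result keeps <code>1</code> and <code>10</code> fixed.</p>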
<p>For example:</p>
<pre><code>smooth[{a, b, c, d, e}, GaussianFilter[#, 1] &] // Simplify
</code></pre>
<blockquote>
<pre><code>{a,
(b BesselI[0, 1/4] + (a + c) BesselI[1, 1/4])/(BesselI[0, 1/4] + 2 BesselI[1, 1/4]),
(c BesselI[0, 1/4] + (b + d) BesselI[1, 1/4])/(BesselI[0, 1/4] + 2 BesselI[1, 1/4]),
(d BesselI[0, 1/4] + (c + e) BesselI[1, 1/4])/(BesselI[0, 1/4] + 2 BesselI[1, 1/4]),
e}
</code></pre>
</blockquote>
<p>:)</p>
<hr>
<p>Turns out <code>GaussianFilter</code> smooths across all dimensions of the array by default, so the $x$-coordinate gets averaged with the $y$-coordinate and vice versa. Oops.</p>
<pre><code>{xs, ys} = smooth[#, GaussianFilter[#, 5] &] & /@ Transpose[points];
{fx, fy} = Interpolation /@ {xs, ys};
RevolutionPlot3D[{fx[i], fy[i]}, {i, 1, Length[points]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/i596Y.png" alt="enter image description here"></p>
<p>You can set <code>smoothedPoints = Transpose@{xs, ys}</code> if you want to use the whole <code>parametrizeCurve</code> stuff instead.</p>
|
3,418,810 | <p>So my question is: what does it mean to be <span class="math-container">$0$</span> in <span class="math-container">$S^{-1} M$</span>, where <span class="math-container">$S$</span> is a multiplicatively closed subset of a ring <span class="math-container">$A$</span> and <span class="math-container">$M$</span> is, let's assume, a finitely generated <span class="math-container">$A$</span>-module?</p>
<p>I was reading Atiyah and Macdonald's book on commutative algebra. From what I gather, <span class="math-container">$S^{-1} M$</span> is the set of fractions of the form <span class="math-container">$\frac{m}{s}$</span>. So I was wondering what the <span class="math-container">$0$</span> fraction, <span class="math-container">$``\frac{0}{s}"$</span>, looks like. I tried going back to the definition of the construction, but can't really get a good idea.</p>
<p>Any help or insight is deeply appreciated.</p>
| egreg | 62,967 | <p>The idea is to make equivalence classes from pairs <span class="math-container">$(m,s)$</span> with <span class="math-container">$m\in M$</span> and <span class="math-container">$s\in S$</span> and denote the equivalence class of <span class="math-container">$(m,s)$</span> by <span class="math-container">$m/s$</span>. We need to ensure that multiplying numerator and denominator by the same element of <span class="math-container">$S$</span> doesn't change the equivalence class, so <span class="math-container">$(mt)/(st)$</span> should be the same as <span class="math-container">$m/s$</span>.</p>
<p>But when should <span class="math-container">$m/s=n/t$</span>? It should be so when <span class="math-container">$mt=ns$</span>, but it turns out that this is insufficient to ensure an equivalence relation: this is just a sufficient condition to put <span class="math-container">$(m,s)$</span> and <span class="math-container">$(n,t)$</span> in the same equivalence class. On the other hand, we should have
<span class="math-container">$$
\frac{m}{s}=\frac{mu}{su},\qquad \frac{n}{t}=\frac{nu}{tu}
$$</span>
for every <span class="math-container">$u\in S$</span>. It turns out that defining
<span class="math-container">$$
(m,s)\sim(n,t) \quad\text{if and only if}\quad mtu=nsu \text{ for some } u\in S
$$</span>
makes <span class="math-container">$\sim$</span> into an equivalence relation. Defining
<span class="math-container">$$
\frac{m}{s}+\frac{n}{t}=\frac{mt+ns}{st},\qquad \frac{m}{s}\frac{r}{t}=\frac{mr}{st}
$$</span>
does not depend on the representatives of the equivalence classes and makes <span class="math-container">$S^{-1}M$</span> (the quotient set) into a module over <span class="math-container">$S^{-1}R$</span> (with the similar definitions for the ring structure).</p>
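<p>To see, for instance, why the extra factor of <span class="math-container">$u$</span> makes <span class="math-container">$\sim$</span> transitive: if <span class="math-container">$mtu=nsu$</span> and <span class="math-container">$nqv=ptv$</span> for some <span class="math-container">$u,v\in S$</span>, then with <span class="math-container">$w=tuv\in S$</span>,
<span class="math-container">$$
mqw=(mtu)qv=(nsu)qv=(nqv)su=(ptv)su=psw,
$$</span>
so <span class="math-container">$(m,s)\sim(p,q)$</span>. Without the <span class="math-container">$u$</span>, this chain is exactly where the naive definition breaks down when <span class="math-container">$t$</span> is a zero-divisor.</p>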
<p>Clearly, for every <span class="math-container">$s\in S$</span>, we need <span class="math-container">$0/s$</span> to be the zero element in <span class="math-container">$S^{-1}M$</span>. By the very definition, then
<span class="math-container">$$
\frac{m}{s}=\frac{0}{t}
$$</span>
if and only if <span class="math-container">$mtu=0su=0$</span>, for some <span class="math-container">$u\in S$</span>. But then we see that it's equivalent to say that <span class="math-container">$mu=0$</span>, for some <span class="math-container">$u\in S$</span>. One direction has been shown, as <span class="math-container">$tu\in S$</span>; for the other direction
<span class="math-container">$$
\frac{m}{s}=\frac{mu}{su}=\frac{0}{su}
$$</span>
is the zero element.</p>
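<p>A concrete sanity check of this criterion (a toy case added for illustration: <span class="math-container">$A=\mathbb{Z}$</span>, <span class="math-container">$M=\mathbb{Z}/6\mathbb{Z}$</span>, <span class="math-container">$S=\{1,2,4,8,\dots\}$</span> the powers of <span class="math-container">$2$</span>):</p>

```python
# In S^{-1}M with M = Z/6Z and S = {powers of 2}, the class m/1 is
# zero iff m*u = 0 in Z/6Z for some u in S.  (Toy example, not from
# the book.)

def is_zero_in_localization(m, modulus=6, max_power=10):
    # m/1 = 0 in S^{-1}M  iff  m * 2^i == 0 (mod modulus) for some i
    return any((m * 2**i) % modulus == 0 for i in range(max_power))

z3 = is_zero_in_localization(3)   # 3*2 = 6 = 0 in Z/6Z, so 3/1 = 0
z1 = is_zero_in_localization(1)   # 1*2^i mod 6 cycles 1,2,4,2,4,... never 0
```

<p>So <span class="math-container">$3/1$</span> is killed by the localization even though <span class="math-container">$3\ne0$</span> in <span class="math-container">$M$</span>, while <span class="math-container">$1/1$</span> survives.</p>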
|
4,351,497 | <p>Northcott Multilinear Algebra poses a problem. Consider R-modules <span class="math-container">$M_1, \ldots, M_p$</span>, <span class="math-container">$M$</span> and <span class="math-container">$N$</span>. Consider multilinear mapping</p>
<p><span class="math-container">$$
\psi: M_1 \times \ldots \times M_p \rightarrow N
$$</span></p>
<p>Northcott calls the universal problem as the problem to find <span class="math-container">$M$</span> and multilinear mapping <span class="math-container">$\phi: M_1\times \ldots \times M_p \rightarrow M$</span> such that there is exactly one R-module homomorphism <span class="math-container">$h: M\rightarrow N$</span> such that <span class="math-container">$h \circ \phi = \psi$</span>.</p>
<p>Northcott claims that if <span class="math-container">$(M, \phi)$</span> and <span class="math-container">$(M', \phi')$</span> both solve the universal problem then</p>
<blockquote>
<p>In this situation there will exist unique R-homomorphisms <span class="math-container">$\lambda: M\rightarrow M'$</span> and <span class="math-container">$\lambda': M' \rightarrow M$</span> such that <span class="math-container">$\lambda \circ \phi = \phi'$</span> and <span class="math-container">$\lambda' \circ \phi' = \phi$</span>.</p>
</blockquote>
<p>If <span class="math-container">$\lambda$</span> and <span class="math-container">$\lambda'$</span> exist I understand why the equalities at the end of the sentence follow, based on the satisfaction of the universal problem. I can't see however why homomorphisms <span class="math-container">$\lambda$</span> and <span class="math-container">$\lambda'$</span> should exist.</p>
<p>I did more group theory many years ago and this is my first serious foray into "modules" so I wouldn't be surprised if there is something obvious I'm missing.</p>
<p>my thoughts:
Clearly <span class="math-container">$M$</span> and <span class="math-container">$M'$</span> are both homomorphic to <span class="math-container">$N$</span> through <span class="math-container">$h$</span> and <span class="math-container">$h'$</span>, I'm not sure if this says anything about a relationship between <span class="math-container">$M$</span> and <span class="math-container">$M'$</span> though.</p>
<p>If <span class="math-container">$h'$</span> were injective I could say something like <span class="math-container">$\lambda(m) = h'^{-1}(h(m))$</span> but I don't know if there is any guarantee that <span class="math-container">$h'$</span> is injective..</p>
<p>Likewise, if <span class="math-container">$\phi$</span> were injective I could define <span class="math-container">$\lambda(m) = \phi'(\phi^{-1}(m))$</span> but again I don't know why this would be the case...</p>
<p>I've tried replacing <span class="math-container">$M$</span> and <span class="math-container">$N$</span> with more familiar vector spaces and R-module homomorphisms by multilinear maps for better intuition but no luck.. I do know that if <span class="math-container">$M$</span> and <span class="math-container">$M'$</span> are vector spaces with the same dimension then there is an isomorphism between them. I guess more generally if <span class="math-container">$M$</span> and <span class="math-container">$M'$</span> have different dimensions (say <span class="math-container">$\text{dim}(M') > \text{dim}(M)$</span>) then there is a homomorphism from <span class="math-container">$M$</span> into a subspace of <span class="math-container">$M'$</span> and another homomorphism from <span class="math-container">$M'$</span> onto <span class="math-container">$M$</span>. Maybe this carries over to modules and is in the right direction for what I need...?</p>
| Jan Eerland | 226,665 | <p>Well, we are trying to find:</p>
<p><span class="math-container">$$\text{y}_\text{k}\left(\text{n}\space;x\right):=\mathscr{L}_\text{s}^{-1}\left[-\sqrt{\frac{\text{k}}{\text{s}}}\cdot\exp\left(-\text{n}\cdot\sqrt{\frac{\text{s}}{\text{k}}}\right)\right]_{\left(x\right)}\tag1$$</span></p>
<p>Using the linearity of the inverse Laplace transform and the convolution property:</p>
<p><span class="math-container">$$\text{y}_\text{k}\left(\text{n}\space;x\right)=\sqrt{\text{k}}\cdot\int_x^0\mathscr{L}_\text{s}^{-1}\left[\exp\left(-\text{n}\cdot\sqrt{\frac{\text{s}}{\text{k}}}\right)\right]_{\left(\sigma\right)}\cdot\mathscr{L}_\text{s}^{-1}\left[\frac{1}{\sqrt{\text{s}}}\right]_{\left(x-\sigma\right)}\space\text{d}\sigma\tag2$$</span></p>
<p>It is well known and not hard to prove that:</p>
<ul>
<li><span class="math-container">$$\mathscr{L}_\text{s}^{-1}\left[\frac{1}{\sqrt{\text{s}}}\right]_{\left(x-\sigma\right)}=\frac{1}{\sqrt{\pi}}\cdot\frac{1}{\sqrt{x-\sigma}}\tag3$$</span></li>
<li><span class="math-container">$$\mathscr{L}_\text{s}^{-1}\left[\exp\left(-\text{n}\cdot\sqrt{\frac{\text{s}}{\text{k}}}\right)\right]_{\left(\sigma\right)}=\frac{\text{n}\exp\left(-\frac{\text{n}^2}{4\text{k}\sigma}\right)}{2\sqrt{\text{k}\pi}\sigma^\frac{3}{2}}\tag4$$</span></li>
</ul>
<p>So:</p>
<p><span class="math-container">$$\text{y}_\text{k}\left(\text{n}\space;x\right)=\frac{\text{n}}{2\pi}\int_x^0\frac{\exp\left(-\frac{\text{n}^2}{4\text{k}\sigma}\right)}{\sigma^\frac{3}{2}}\cdot\frac{1}{\sqrt{x-\sigma}}\space\text{d}\sigma\tag5$$</span></p>
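<p>Identity <span class="math-container">$(4)$</span> can be spot-checked by computing the <em>forward</em> transform numerically (a sketch at <span class="math-container">$\text{n}=\text{k}=\text{s}=1$</span>, where the result should be <span class="math-container">$e^{-1}$</span>):</p>

```python
import math

# Check identity (4) at n = k = s = 1: the Laplace transform
#   integral_0^inf e^{-sigma} * exp(-1/(4 sigma)) / (2 sqrt(pi) sigma^{3/2}) d sigma
# should equal exp(-1).  The integrand vanishes rapidly at both ends,
# so truncating to [1e-6, 60] loses essentially nothing.

def integrand(sigma):
    return (math.exp(-sigma) * math.exp(-1.0 / (4.0 * sigma))
            / (2.0 * math.sqrt(math.pi) * sigma**1.5))

def simpson(f, a, b, n):  # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

value = simpson(integrand, 1e-6, 60.0, 200_000)
expected = math.exp(-1.0)     # exp(-n*sqrt(s/k)) at n = k = s = 1
```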
|
194,191 | <p>Test the convergence of $\int_{0}^{1}\frac{\sin(1/x)}{\sqrt{x}}dx$</p>
<p><strong>What I did</strong></p>
<ol>
<li>Expanded sin (1/x) as per Maclaurin Series</li>
<li>Divided by $\sqrt{x}$</li>
<li>Integrate</li>
<li>Putting the limits of 1 and h, where h tends to zero</li>
</ol>
<p>So after step 3, I get something like this:</p>
<p>$S= \frac{-2}{\sqrt{x}}+\frac{2}{5\cdot 3! x^{5/2}}- \frac{2}{9 \cdot 5!x^{9/2}}+\frac{2}{13\cdot 7!x^{13/2}}-...$
Putting Limits:
$I=S(1)-S(0)$
But I am stuck at calculating $S(0)$</p>
| DonAntonio | 31,254 | <p>$$y:=\frac{1}{x}\Longrightarrow dy=-\frac{dx}{x^2}\Longrightarrow \int_0^1\frac{\sin 1/x}{x}\,dx=\int_\infty^1\frac{\sin y}{1/y}\left(-\frac{dy}{y^2}\right)=$$</p>
<p>$$=\int_1^\infty\frac{\sin y}{y}\,dy$$</p>
<p>And since </p>
<p>$$\int_0^\infty\frac{\sin x}{x}\,dx=\frac{\pi}{2}$$</p>
<p>we're done</p>
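<p>For reassurance, the value $\int_1^\infty\frac{\sin y}{y}\,dy=\frac\pi2-\operatorname{Si}(1)$ can be checked numerically (a sketch comparing Simpson's rule on a long truncated interval against the Maclaurin series $\operatorname{Si}(1)=\sum_{k\ge0}\frac{(-1)^k}{(2k+1)(2k+1)!}$):</p>

```python
import math

# Compare a direct numerical value of integral_1^T sin(y)/y dy
# (large T) with pi/2 - Si(1), Si(1) taken from its Maclaurin series.
# The discarded tail beyond T is O(1/T), so only rough agreement
# is expected.

def simpson(f, a, b, n):  # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

si_1 = sum((-1)**k / ((2*k + 1) * math.factorial(2*k + 1)) for k in range(10))
expected = math.pi / 2 - si_1            # = integral_1^inf sin(y)/y dy

T = 2000 * math.pi + 1.0                 # truncation point for the tail
value = simpson(lambda y: math.sin(y) / y, 1.0, T, 600_000)
```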
|
2,600,679 | <p>Given two real sequences $a_1,a_2,...,a_n$ and $b_1,b_2,...,b_n$, define their means:
$$\bar a=\frac{1}{n}\sum_{i=1}^n a_i,\bar b=\frac{1}{n}\sum_{i=1}^n b_i$$
and define their variances and covariance respectively:
$$var(a)=\frac{1}{n}\sum_{i=1}^n (a_i-\bar a)^2,var(b)=\frac{1}{n}\sum_{i=1}^n (b_i-\bar b)^2,cov(a,b)=\frac{1}{n}\sum_{i=1}^n (a_i-\bar a)(b_i-\bar b)$$
This naturally leads to the definition of the normalized cross correlation:
$$NCC=\frac{cov(a,b)}{\sqrt{var(a)var(b)}}=\frac{\sum_{i=1}^n(a_i-\bar a)(b_i-\bar b)}{\sqrt{\sum_{i=1}^n (b_i-\bar b)^2 \sum_{i=1}^n (a_i-\bar a)^2}}$$
Now how to show that $NCC$ lies in $[-1,1]$?</p>
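<p>Before attempting a proof, here is a quick numerical sanity check I'm adding (corroboration only, not a proof; the bound ultimately follows from the Cauchy-Schwarz inequality applied to the centered sequences):</p>

```python
import random
import math

# Compute NCC for many random sequence pairs and confirm it never
# leaves [-1, 1]; also confirm NCC = 1 for exactly proportional
# sequences with a positive slope.

def ncc(a, b):
    n = len(a)
    am, bm = sum(a) / n, sum(b) / n
    cov = sum((x - am) * (y - bm) for x, y in zip(a, b))
    va = sum((x - am)**2 for x in a)
    vb = sum((y - bm)**2 for y in b)
    return cov / math.sqrt(va * vb)

random.seed(0)
values = []
for _ in range(1000):
    a = [random.uniform(-10, 10) for _ in range(20)]
    b = [random.uniform(-10, 10) for _ in range(20)]
    values.append(ncc(a, b))

ok = all(-1.0 <= v <= 1.0 for v in values)
perfect = ncc([1, 2, 3], [2, 4, 6])      # proportional sequences
```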
| Community | -1 | <p>Let $$y(x)=\sum\frac{x^j}{j!^2}.$$</p>
<p>We have
$$y'(x)=\sum\frac{x^{j-1}}{j!(j-1)},$$</p>
<p>$$xy'(x)=\sum\frac{x^j}{j!(j-1)!},$$
and
$$(xy')'(x)=\sum\frac{x^{j-1}}{(j-1)^2}=y(x).$$</p>
<p>This finally leads us to the differential equation</p>
<p>$$xy''+y'-y=0.$$</p>
<p>By a change of variable $t=2\sqrt x$, we can convert it to the modified Bessel type (of order $0$):</p>
<p>$$t^2y''+ty'-t^2y=0.$$</p>
<p><a href="https://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions:_I%CE%B1,_K%CE%B1" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions:_I%CE%B1,_K%CE%B1</a></p>
<p>Then from the initial condition $y(0)=1$, we can infer</p>
<p>$$y(x)=I_0(2\sqrt x).$$</p>
<p><a href="https://i.stack.imgur.com/LXCl1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LXCl1.png" alt="enter image description here"></a></p>
<p><a href="https://www.wolframalpha.com/input/?i=plot+I_0(2sqrt+x)+from+0+to+10" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=plot+I_0(2sqrt+x)+from+0+to+10</a></p>
|
987,054 | <p>Prove that the sequence
$$b_n=\left(1+\frac{1}{n}\right)^{n+1}$$
is decreasing.</p>
<p>I have calculated $b_n/b_{n-1}$ and obtained:
$$\left(1-\frac{1}{n^2}\right)^n \left(1+\frac{1}{n}\right)$$
But I can't go on.</p>
<p>Any suggestions please?</p>
| orangeskid | 168,051 | <p>My 2¢: consider the function, defined a priori for $x>0$,
$$f(x)=\log(1+x)\cdot (\frac{1}{x}+1)= \frac{\log(1+x)\cdot (1+x)}{x}$$</p>
<p>$f$ extends analytically to $(-1, \infty)$, and continuously to $[-1, \infty)$. We have $f(-1)=0$ and $f(0)=1$. </p>
<p>We calculate: $$f'(x)= \frac{x - \log(1+x)}{x^2}$$
so $f'(x)>0$ for $x> -1$, $x \ne 0$ and so $f$ strictly increasing on $(-1, \infty)$. Now consider the decreasing sequence of values $\frac{1}{n}$ for $x$.</p>
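<p>A quick numerical corroboration of the conclusion (not part of the proof):</p>

```python
# b_n = (1 + 1/n)^{n+1} should be strictly decreasing; check the
# first few hundred terms directly.

b = [(1 + 1/n) ** (n + 1) for n in range(1, 301)]
decreasing = all(x > y for x, y in zip(b, b[1:]))
```

<p>The sequence starts at $b_1=4$ and decreases toward $e$ from above.</p>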
|
2,618,675 | <p>I've seen the nice proof of this using spheres, but I'm looking for a way to prove it parametrically if possible. Using a cylinder $x^2+y^2=r^2$ and a plane $ax+by+cz+d=0$ I got:</p>
<p>$x=r\cos(\theta), y=r\sin(\theta), z=\dfrac{-ar\cos(\theta)+br\sin(\theta)+d}{c}$</p>
<p>But after this I'm stuck trying different projections and messing with ellipse definitions </p>
| Mathematical | 524,351 | <p>Without loss of generality let's assume the cylinder has radius $1$, so that it has equation $x^2 + y^2 = 1$. Assuming that the plane is not parallel to the cylinder, we can always rearrange the coordinate system so that the plane goes through the origin, or even better, make the plane go through the $x$-axis after a suitable rotation. Now the plane should have equation $z = y \tan\alpha$ (with $\alpha$ being the slope in the $yz$-plane).</p>
<p>Now your parametrization gives the curve
$$x = \cos\theta, \quad y = \sin\theta, \quad z=\sin\theta\tan\alpha.$$
We think of a rotation (of the whole space) around the $x$-axis, through an angle of $-\alpha$ in the $yz$-plane:
$$\begin{aligned}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
& \mapsto
\begin{bmatrix}
1 & 0 & 0 \\
0 & \cos(-\alpha) & -\sin(-\alpha) \\
0 & \sin(-\alpha) & \cos(-\alpha)
\end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix} \\
& =
\begin{bmatrix}
x \\
y\cos\alpha + z\sin\alpha \\
-y\sin\alpha + z\cos\alpha
\end{bmatrix}.
\end{aligned}$$
Under this rotation the curve becomes
$$\begin{aligned}
\begin{bmatrix} \cos\theta \\ \sin\theta \\ \sin\theta\tan\alpha \end{bmatrix}
& \mapsto
\begin{bmatrix}
\cos\theta \\
\sin\theta\cos\alpha + \sin\theta\tan\alpha\sin\alpha \\
-\sin\theta\sin\alpha + \sin\theta\tan\alpha\cos\alpha
\end{bmatrix} \\
& =
\begin{bmatrix}
\cos\theta \\
\sin\theta\cos\alpha + \sin\theta\sin^2\alpha/\cos\alpha \\
-\sin\theta\sin\alpha + \sin\theta\sin\alpha
\end{bmatrix} \\
& =
\begin{bmatrix}
\cos\theta \\
\sin\theta/\cos\alpha \\
0
\end{bmatrix}.
\end{aligned}$$
This is
$$x = \cos\theta, \quad y = \frac{1}{\cos\alpha}\sin\theta, \quad z\equiv0,$$
which is an ellipse.</p>
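<p>The computation above is easy to verify numerically (a sketch with <span class="math-container">$\alpha=0.7$</span> chosen arbitrarily):</p>

```python
import math

# Take points on the intersection curve, rotate by -alpha about the
# x-axis, and confirm the image satisfies z = 0 and
# x^2 + (y cos(alpha))^2 = 1, i.e. an ellipse with semi-axes
# 1 and sec(alpha).

alpha = 0.7
for i in range(100):
    t = 2 * math.pi * i / 100
    x, y, z = math.cos(t), math.sin(t), math.sin(t) * math.tan(alpha)
    # rotation through -alpha about the x-axis
    y2 = y * math.cos(alpha) + z * math.sin(alpha)
    z2 = -y * math.sin(alpha) + z * math.cos(alpha)
    assert abs(z2) < 1e-12
    assert abs(x**2 + (y2 * math.cos(alpha))**2 - 1) < 1e-12
ok = True
```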
|
2,618,675 | <p>I've seen the nice proof of this using spheres, but I'm looking for a way to prove it parametrically if possible. Using a cylinder $x^2+y^2=r^2$ and a plane $ax+by+cz+d=0$ I got:</p>
<p>$x=r\cos(\theta), y=r\sin(\theta), z=\dfrac{-ar\cos(\theta)+br\sin(\theta)+d}{c}$</p>
<p>But after this I'm stuck trying different projections and messing with ellipse definitions </p>
| amd | 265,466 | <p>W.l.o.g. we can take the cylinder to have unit radius and the cutting plane to be the $x$-$y$ plane rotated through an angle of $\theta$ about the $x$-axis. This rotation is represented by the homogeneous transformation matrix $$R = \begin{bmatrix} 1&0&0&0 \\ 0& \cos\theta & -\sin\theta &0 \\ 0& \sin\theta & \cos\theta &0 \\ 0&0&0&1 \end{bmatrix}.$$ The cutting plane is represented by the homogeneous vector $\mathbf\pi = [0:-\sin\theta:\cos\theta:0]$ (these are just the coefficients of a point-normal equation of the plane). The curve of intersection can be generated by projecting the unit circle in the $x$-$y$ plane parallel to the cylinder’s axis—the $z$-axis—onto the cutting plane. If we let $\mathbf p = [\cos t:\sin t:0:1]$ be a point on the circle and $\mathbf q=[0:0:1:0]$ be the point at infinity on the $z$-axis, the projection of $\mathbf p$ onto $\pi$ is the intersection of the line $\overline{\mathbf p\mathbf q}$ with $\mathbf\pi$: $$\mathbf p' = (\mathbf\pi^T \mathbf q)\mathbf p-(\mathbf\pi^T \mathbf p)\mathbf q = [\cos\theta \cos t : \cos\theta \sin t : \sin\theta \sin t : \cos\theta].$$ Now rotate this by $R^{-1}=R^T$ to bring it back onto the $x$-$y$ plane. This produces $[\cos\theta \cos t : \sin t : 0 : \cos\theta]$, which is the point with inhomogeneous coordinates $\left(\cos t,\sec\theta \sin t,0\right)$. If you don’t recognize this as the parametrization of an ellipse with semimajor axis length $\sec\theta$ and semiminor axis length $1$, you can eliminate $t$: $$x^2 + y^2\cos^2\theta = \cos^2t+\sin^2t = 1.$$</p>
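<p>A numerical spot-check of this projection formula (sketch with an arbitrary tilt $\theta=0.5$): for each point on the circle, the projected point should lie on both the cutting plane and the cylinder.</p>

```python
import math

# Verify p' = (pi.q) p - (pi.p) q numerically: after dehomogenizing,
# the projected point satisfies the plane equation and x^2 + y^2 = 1.

def dot4(u, v):
    return sum(a * b for a, b in zip(u, v))

theta = 0.5
pi4 = [0.0, -math.sin(theta), math.cos(theta), 0.0]   # cutting plane
q = [0.0, 0.0, 1.0, 0.0]                              # point at infinity on z-axis

max_err = 0.0
for i in range(60):
    t = 2 * math.pi * i / 60
    p = [math.cos(t), math.sin(t), 0.0, 1.0]          # point on the unit circle
    s1, s2 = dot4(pi4, q), dot4(pi4, p)
    pp = [s1 * a - s2 * b for a, b in zip(p, q)]      # projected point
    x, y, z = (c / pp[3] for c in pp[:3])             # dehomogenize
    max_err = max(max_err,
                  abs(-math.sin(theta) * y + math.cos(theta) * z),  # on plane
                  abs(x**2 + y**2 - 1))                             # on cylinder
```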
|
70,582 | <p>For which n can $a^{2}+(a+n)^{2}=c^{2}$ be solved, where $a,b,c,n$ are positive integers?
I have found solutions for $n=1,7,17,23,31,41,47,79,89$ and for multiples of $7,17,23$...
Are there infinitely many prime $n$ for which it is solvable? </p>
| poetasis | 546,655 | <p>There are infinitely many Pythagorean triples in which the difference between the two legs is either <span class="math-container">$1$</span> or one of the infinitely many primes <span class="math-container">$P\equiv\pm 1 \pmod 8$</span> taken to any non-zero power.</p>
<p>Under <span class="math-container">$100$</span>, <span class="math-container">$P\in \{1,7, 17, 23, 31, 41, 47, 49, 71, 73, 79, 89, 97\}$</span>.
To find these, we begin with Euclid's formula
<span class="math-container">$ \quad A=m^2-k^2,\quad B=2mk,\quad C=m^2+k^2,\quad$</span> solve the <span class="math-container">$(B-A)$</span> difference equation for <span class="math-container">$m$</span>, and generate the <span class="math-container">$(m,k)$</span> values needed to feed Euclid's formula. Each iteration uses a seed <span class="math-container">$k$</span> which is either given or is the <span class="math-container">$m$</span>-value of the previous iteration. Here is the formula for <span class="math-container">$P=1$</span> with seed <span class="math-container">$k=1$</span>. Note that <span class="math-container">$(m,k)$</span> are Pell numbers.</p>
<p><span class="math-container">\begin{equation}
\quad m=k+\sqrt{2k^2+(-1)^k}
\end{equation}</span>
<span class="math-container">\begin{align*}
k=1 &\implies m=1+\sqrt{2(1)^2+(-1)^1}=2\quad & F(2,1)=(3,4,5)\\
k=2 &\implies m=2+\sqrt{2(2)^2+(-1)^2}=5\quad & F(5,2)=(21,20,29)\\
k=5 &\implies m=5+\sqrt{2(5)^2+(-1)^5}=12\quad & F(12,5)=(119,120,169)\\
k=12 &\implies m=12+\sqrt{2(12)^2+(-1)^{12}}=29\quad & F(29,12)=(697,696,985)
\end{align*}</span></p>
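<p>A quick sketch of this <span class="math-container">$P=1$</span> iteration in code (illustrative only):</p>

```python
import math

# Iterate m = k + sqrt(2 k^2 + (-1)^k) starting from k = 1, feeding
# each (m, k) pair to Euclid's formula; every resulting triple has
# legs differing by exactly 1.

def euclid(m, k):
    return (m*m - k*k, 2*m*k, m*m + k*k)

k = 1
triples = []
for _ in range(6):
    m = k + math.isqrt(2*k*k + (-1)**k)   # radicand is a perfect square here
    triples.append(euclid(m, k))
    k = m                                  # previous m becomes the next seed

diffs = [abs(a - b) for a, b, c in triples]
```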
<p>For <span class="math-container">$P>1$</span>, the formula is<br />
<span class="math-container">\begin{equation}
m=k+\sqrt{2k^2\pm P}
\end{equation}</span>
and there is more than one seed <span class="math-container">$s$</span> in a set <span class="math-container">$k_1$</span>, each of which generates a subset of the entire set of triples for that difference. If <span class="math-container">$P=p^n$</span>, there are <span class="math-container">$n+1$</span> seeds; for example, <span class="math-container">$P=1$</span> is a zeroth power, so there is <span class="math-container">$1$</span> seed.
<span class="math-container">$$P=1\implies k_1=\{s_1\}=\{1\}\quad
P=7^1\implies k_1=\{s_1,s_2\}=\{2,1\}\\
P=343=7^3\implies k_1=\{s_1,s_2,s_3,s_4\}=\{14,16,3,7\}$$</span>
<p>The elements of <span class="math-container">$k_1$</span> are generated as follows.</p>
<p><span class="math-container">\begin{align*}
s_i&=\sqrt{\frac{j_i^2 + P}{2}}
&&\text{for}\quad 1 \le i \le \bigg\lceil\frac{n+1}{2}\bigg\rceil
\ \text{ and }\ 1 \le j_i \le \lfloor\sqrt{P}\rfloor,\\
s_i&=\sqrt{\frac{j_i^2-P}{2}}
&&\text{for}\quad \bigg\lceil\frac{n+1}{2}\bigg\rceil + 1 \le i \le n+1
\ \text{ and }\ \lfloor\sqrt{P}\rfloor+1 \le j_i \le \lfloor\sqrt{2P}\rfloor
\end{align*}</span></p>
<p>For example, with <span class="math-container">$n=3,\ P=7^3=343$</span>:
<span class="math-container">\begin{align*}
1 \le i \le 2,&\quad 1 \le j \le \lfloor\sqrt{343}\rfloor=18: &
s_1=\sqrt{\frac{7^2+343}{2}}=14,\quad & s_2=\sqrt{\frac{13^2+343}{2}}=16 \\
3 \le i \le 4,&\quad 19 \le j \le \lfloor\sqrt{2(343)}\rfloor=26: &
s_3= \sqrt{\frac{19^2-343}{2}}=3,\quad & s_4=\sqrt{\frac{21^2-343}{2}}=7
\end{align*}</span></p>
<p><span class="math-container">\begin{align*}
s_1= 14\quad
&14+\sqrt{2(14)^2-343}=21\quad &F(21,14)=(245,588,637)\\
&21+\sqrt{2(21)^2+343}=56\quad &F(56,21)=(2695,2352,3577)\\
s_2=16 \quad
&16 +\sqrt{2(16)^2-343}=29\quad &F(29,16)=(585,928,1097)\\
&29 +\sqrt{2(29)^2-343}=74\quad &F(74,29)=(4635,4292,6317)\\
s_3=3 \quad
&3+\sqrt{2(3)^2+343}=22\quad &F(22,3)=(475,132,493)\\
&22+\sqrt{2(22)^2+343}=47\quad &F(47,22)=(1725,2068,2693)\\
s_4=7 \quad
&7 +\sqrt{2(7)^2+343}=28\quad &F(28,7)=(735,392,833)\\
&28+\sqrt{2(28)^2+343}=63\quad &F(63,28)=(3185,3528,4753)\\
\end{align*}</span></p>
<p>For <span class="math-container">$P=1$</span>, these <span class="math-container">$(m,k)$</span>-values
may be generated directly using a Pell-related equation as follows.
<span class="math-container">\begin{equation}
m_n= \frac{(1 + \sqrt{2})^{n+1} - (1 - \sqrt{2})^{n+1}}{2\sqrt{2}}\qquad \qquad\qquad
k_n= \frac{(1 + \sqrt{2})^n - (1 - \sqrt{2})^n}{2\sqrt{2}}
\end{equation}</span>
For example</p>
<p>
<span class="math-container">\begin{align*}
\frac{(1 + \sqrt{2})^{2} - (1 - \sqrt{2})^{2}}{2\sqrt{2}}=2 \quad
\frac{(1 + \sqrt{2})^1 - (1 - \sqrt{2})^1}{2\sqrt{2}}=1 \quad& F(2,1)=(3,4,5)\\
\frac{(1 + \sqrt{2})^{3} - (1 - \sqrt{2})^{3}}{2\sqrt{2}}=5 \quad
\frac{(1 + \sqrt{2})^2 - (1 - \sqrt{2})^2}{2\sqrt{2}}=2 \quad& F(5,2)=(21,20,29)\\
\frac{(1 + \sqrt{2})^{4} - (1 - \sqrt{2})^{4}}{2\sqrt{2}}=12 \quad
\frac{(1 + \sqrt{2})^3 - (1 - \sqrt{2})^3}{2\sqrt{2}}=5 \quad& F(12,5)=(119,120,169)\\
\frac{(1 + \sqrt{2})^{5} - (1 - \sqrt{2})^{5}}{2\sqrt{2}}=29 \quad
\frac{(1 + \sqrt{2})^4 - (1 - \sqrt{2})^4}{2\sqrt{2}}=12 \quad& F(29,12)=(697,696,985)
\end{align*}</span><br />
</p>
|
3,807,708 | <p>I was asked to prove the following identity (starting from the left-hand side):
<span class="math-container">$$(a+b)^3(a^5+b^5)+5ab(a+b)^2(a^4+b^4)+15a^2b^2(a+b)(a^3+b^3)+35a^3b^3(a^2+b^2)+70a^4b^4=(a+b)^8.$$</span>
I'm trying to solve it by a sort of "inspection", but I haven't managed it yet. Of course I could expand the left-hand polynomial until it reaches the recognizable form <span class="math-container">$(a+b)^8$</span>, but that would be the hard way (assuming that there is an easy one).</p>
<p>As an example of why I am talking of "inspection" I can state a similar problem:</p>
<p>Show that <span class="math-container">$$(x+\frac{5}{2}a)^4-10a(x+\frac{5}{2}a)^3+35a^2(x+\frac{5}{2}a)^2-50a^3(x+\frac{5}{2}a)+24a^4=(x^2-\frac{1}{4}a^2)(x^2-\frac{9}{4}a^2).$$</span>
Here by "inspection" we can deduce that the left-hand side of the identity is equivalent to <span class="math-container">$$[(x+\frac{5}{2}a)-a][(x+\frac{5}{2}a)-2a][(x+\frac{5}{2}a)-3a][(x+\frac{5}{2}a)-4a]$$</span> and then after a few steps arrive at the desired result.</p>
<p>I would appreciate any help you could give me.</p>
| Fawkes4494d3 | 260,674 | <p><span class="math-container">$$(a+b)^3(a^5+b^5)+5ab(a+b)^2(a^4+b^4)+15a^2b^2(a+b)(a^3+b^3)+35a^3b^3(a^2+b^2)+70a^4b^4=(a+b)^8$$</span></p>
<p>Note that <span class="math-container">$a+b|(a+b)^3$</span>, so apart from the last two terms of the LHS, every term is divisible by <span class="math-container">$(a+b)^2$</span>; in fact you can take <span class="math-container">$35a^3b^3$</span> common from the last two terms to have <span class="math-container">$$35a^3b^3(a^2+b^2)+70a^4b^4=35a^3b^3(a+b)^2$$</span> so that the LHS becomes</p>
<p><span class="math-container">$$(a+b)^2\times((a+b)(a^5+b^5)+5ab(a^4+b^4)+15a^2b^2(a^2-ab+b^2)+35a^3b^3)$$</span></p>
<p>and if you now set aside the factor <span class="math-container">$(a+b)^2$</span>, the remaining multiplication and expansion is easier; you can verify that the second factor is indeed <span class="math-container">$(a+b)^6$</span></p>
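<p>As a final sanity check (numerical corroboration only, not a substitute for the algebra above), the identity can be tested at many integer points:</p>

```python
import random

# A polynomial identity in two variables holds everywhere iff it holds
# at enough points; here we just spot-check it at random integers,
# where the comparison is exact.

def lhs(a, b):
    return ((a + b)**3 * (a**5 + b**5)
            + 5*a*b * (a + b)**2 * (a**4 + b**4)
            + 15*a**2*b**2 * (a + b) * (a**3 + b**3)
            + 35*a**3*b**3 * (a**2 + b**2)
            + 70*a**4*b**4)

random.seed(1)
ok = all(lhs(a, b) == (a + b)**8
         for a, b in ((random.randint(-50, 50), random.randint(-50, 50))
                      for _ in range(200)))
```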
|
3,850,320 | <p>If a graph is Eulerian (i.e. has an Eulerian tour), then do we immediately assume for it to be connected?</p>
<p>The reason I ask is because I came across this question:</p>
<p><a href="https://math.stackexchange.com/questions/1689726/graph-and-its-line-graph-that-both-contain-eulerian-circuits">Graph and its line Graph that both contain Eulerian circuits</a></p>
<p>And the solution seems to assume that the graph is connected, before using the result that a connected graph is Eulerian if and only if every vertex has even degree.</p>
| Brian M. Scott | 12,042 | <p>A graph <span class="math-container">$G$</span> with an Euler circuit need not be connected, but the subgraph induced by the vertices that are on the Euler circuit must be a connected component of <span class="math-container">$G$</span>, and any other components must be isolated vertices. In the question to which you linked it doesn’t actually matter whether <span class="math-container">$G$</span> is connected: even if it has some isolated vertices, its line graph will be derived completely from the component with the Euler circuit and will therefore be connected.</p>
|
148,160 | <p>While writing a response to a <a href="https://mathematica.stackexchange.com/q/147679/34008">certain MSE question</a> I made a function that tabulates code and comments. (See the definition below.) </p>
<p>Here is an example:</p>
<pre><code>code = "
FoldList[(* reduction function *)
Plus,(* function to apply repeatedly *)
0,(* initial value *)
{1,2,3,3,100}(* arguments in repeated computations *)]";
GridOfCodeAndComments[
code,
"GridFunction" -> (Panel@Grid[#, Alignment -> Left] &)]
</code></pre>
<p><a href="https://i.stack.imgur.com/1TLaR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1TLaR.png" alt="enter image description here"></a></p>
<p>I have several problems with the implementation of <code>GridOfCodeAndComments</code>, the main one being that I have to give a string to the function instead of (commented) code. </p>
<p>For example, I would like to be able to pass the code to be tabulated directly to <code>GridOfCodeAndComments</code>:</p>
<pre><code>GridOfCodeAndComments[
FoldList[(* reduction function *)
Plus,(* function to apply repeatedly *)
0,(* initial value *)
{1, 2, 3, 3, 100}(* arguments in repeated computations *)],
"GridFunction" -> (Panel@Grid[#, Alignment -> Left] &)]
</code></pre>
<p>How can this be done?
Any suggestions would be appreciated.</p>
<p>Another, minor problem in <code>GridOfCodeAndComments</code> is that the pattern for matching comments, <code>comPat</code>, is somewhat weak. How can it be improved?</p>
<h2>Definition</h2>
<pre><code>ClearAll[GridOfCodeAndComments]
Options[GridOfCodeAndComments] = {"GridFunction" -> (Grid[#, Alignment -> Left] &)};
GridOfCodeAndComments[code_String, opts : OptionsPattern[]] :=
Block[{grData, codeLines, commentLines, comPat, gridFunc},
gridFunc = OptionValue["GridFunction"];
If[TrueQ[gridFunc === Automatic], gridFunc = (Grid[#, Alignment -> Left] &)];
(* Split the code into lines *)
codeLines = StringSplit[code, "\n"];
(* Split each line into a {code, comment} pair *)
comPat = ("(*" ~~ (Except["*"] ..) ~~ "*)");
grData =
Map[
If[StringFreeQ[#, "(*"], {#, ""},
StringCases[#, (x__ ~~ y : (comPat) ~~ z___) :> {x <> z, y}][[1]]
] &, codeLines];
(* Style the code and comments *)
grData[[All, 1]] = Map[Style[#, "Input"] &, grData[[All, 1]]];
grData[[All, 2]] =
Map[Style[#, "CommentStyle" /. Options[$FrontEnd, AutoStyleOptions][[1, 2]]] &, grData[[All, 2]]];
(* Show result *)
gridFunc[grData]
];
</code></pre>
| CElliott | 40,812 | <p>This is a problem in design, and the chief difficulty in design is understanding the problem. Suppose you were charged with automating an algorithm in some complex subject area, say finite automata, so your organization could have fairly low-level workers give it a set of inputs and return a nicely formatted correct answer. So, your first action should be to write a set of requirements, preferably with a small set of input data, and a picture or diagram of what the output should look like. Define success.</p>
<p>Next, suppose you are given the finite automata algorithm to use or you find one in a text. You should also find a worked example, a problem at the end of the chapter, or make up a simple set of input data. The algorithm will be a few lines of English text, a few lines of (Boolean) algebra, a few lines of text, etc., etc., .... </p>
<p>For your problem, you should devise a written algorithm or recipe in a few lines of English text to go from simple input data to the desired output.</p>
<p>Now, on one or more sheets of paper, go through the algorithm one line or sentence at a time to find an answer to your test problem. Written language is VERY imprecise, and it may take hours or days before you really understand the algorithm author's (or your own) intent. Do this for several test problems until you really, really understand what the author means or what your problems really are. This can take a long time.</p>
<p>Next, translate your understanding of the algorithm to computer code. In Mathematica, you probably want to use the Module construct (Module[{constants, variables}, instructions, answer]), but you can also put all the initialization steps, constants, variables, instructions, answer in a single cell and just evaluate and re-evaluate that cell.</p>
<p>In these situations, I almost always use the Catch/Throw construct to incrementally arrive at the correct answer:</p>
<pre><code>Catch[Module[{constants, variables},
one or more lines of code
print intermediate result
Throw[intermediate result]
] (* End Module *)
</code></pre>
<p>Compare the intermediate result with your worked example.</p>
<pre><code>Repeat
Write a few lines of code
Compare result with example
until result is answer to problem
</code></pre>
<p>There is nothing worse than computer code that yields the wrong answer. The literature is full of horror stories of people who solved complex problems with Excel, only to find that when the algorithm was used in production, it cost the company thousands, and occasionally millions, of dollars to make it right with the customer. Hence, in Computer Science, Directive 0 is, "If it is not tested, it does not work."</p>
<p>Derive one or more new sets of data to test the solution to the problem. What happens when the inputs become really big, really small, and some big and some small?</p>
|
148,160 | <p>While writing a response to a <a href="https://mathematica.stackexchange.com/q/147679/34008">certain MSE question</a> I made a function that tabulates code and comments. (See the definition below.) </p>
<p>Here is an example:</p>
<pre><code>code = "
FoldList[(* reduction function *)
Plus,(* function to apply repeatedly *)
0,(* initial value *)
{1,2,3,3,100}(* arguments in repeated computations *)]";
GridOfCodeAndComments[
code,
"GridFunction" -> (Panel@Grid[#, Alignment -> Left] &)]
</code></pre>
<p><a href="https://i.stack.imgur.com/1TLaR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1TLaR.png" alt="enter image description here"></a></p>
<p>I have several problems with the implementation of <code>GridOfCodeAndComments</code>, the main one being that I have to give a string to the function instead of (commented) code. </p>
<p>For example, I would like to be able to write the code to be tabulated directly in the call to <code>GridOfCodeAndComments</code>:</p>
<pre><code>GridOfCodeAndComments[
FoldList[(* reduction function *)
Plus,(* function to apply repeatedly *)
0,(* initial value *)
{1, 2, 3, 3, 100}(* arguments in repeated computations *)],
"GridFunction" -> (Panel@Grid[#, Alignment -> Left] &)]
</code></pre>
<p>How can this be done?
Any suggestions would be appreciated.</p>
<p>Another, minor problem in <code>GridOfCodeAndComments</code> is that the pattern for matching comments, <code>comPat</code>, is somewhat weak. How can it be improved?</p>
<h2>Definition</h2>
<pre><code>ClearAll[GridOfCodeAndComments]
Options[GridOfCodeAndComments] = {"GridFunction" -> (Grid[#, Alignment -> Left] &)};
GridOfCodeAndComments[code_String, opts : OptionsPattern[]] :=
Block[{grData, codeLines, commentLines, comPat, gridFunc},
gridFunc = OptionValue["GridFunction"];
If[TrueQ[gridFunc === Automatic], gridFunc = (Grid[#, Alignment -> Left] &)];
(* Split the code into lines *)
codeLines = StringSplit[code, "\n"];
(* Split each line into a {code, comment} pair *)
comPat = ("(*" ~~ (Except["*"] ..) ~~ "*)");
grData =
Map[
If[StringFreeQ[#, "(*"], {#, ""},
StringCases[#, (x__ ~~ y : (comPat) ~~ z___) :> {x <> z, y}][[1]]
] &, codeLines];
(* Style the code and comments *)
grData[[All, 1]] = Map[Style[#, "Input"] &, grData[[All, 1]]];
grData[[All, 2]] =
Map[Style[#, "CommentStyle" /. Options[$FrontEnd, AutoStyleOptions][[1, 2]]] &, grData[[All, 2]]];
(* Show result *)
gridFunc[grData]
];
</code></pre>
| Anton Antonov | 34,008 | <p>This answer is for a less general question:</p>
<ul>
<li><strong><em>How to improve the creation of tables of code and comments for monadic pipelines?</em></strong></li>
</ul>
<p>As I mentioned in the formulation of the original question post, I am interested in making tables of code and comments in order <a href="https://github.com/antononcube/MathematicaForPrediction/blob/master/Documentation/Simple-monadic-programming.pdf" rel="nofollow noreferrer">to explain monadic programming</a>. So, it occurred to me at some point that a special monad can be used to make those tables for monadic pipelines.</p>
<p>(To be clear, the problem gets simplified if we want to build code-comment grids for monad pipelines only.)</p>
<p>The resulting <a href="https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m" rel="nofollow noreferrer"><code>TraceMonad</code> code</a>
is fairly simple, and demonstrates well the "programming semicolon" view of the binding operator in monadic programming.</p>
<p>I would say in this case the advice "eat your own dog food" is very useful -- it brings a nice solution (although a specialized one.)</p>
<p>In the example below note that :</p>
<ol>
<li><p>the tracing is initiated by just using <code>TraceMonadUnit</code>;</p></li>
<li><p>pipeline functions (actual code) and comments are interleaved;</p></li>
<li><p>putting a comment string after a pipeline function is optional.</p></li>
</ol>
<h3>Example</h3>
<p>The example below has sparse explanations, but the <a href="https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m" rel="nofollow noreferrer"><code>TraceMonad</code> file</a> has fairly detailed ones. </p>
<p>Load packages:</p>
<pre><code>Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MaybeMonadCodeGenerator.m"]
</code></pre>
<p>Generate Maybe monad code for "Maybe":</p>
<pre><code>GenerateMaybeMonadCode["Maybe"]
GenerateMaybeMonadSpecialCode["Maybe"]
</code></pre>
<p>Make up data:</p>
<pre><code>data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};
</code></pre>
<p>Execute a monadic pipeline and generate a table of code and comments:</p>
<pre><code>TraceMonadUnit[MaybeUnit[data]]⟹"(* lift data into the monad *)"⟹
MaybeFilter[# > 0.3 &] ⟹"(* filter current value *)"⟹
MaybeEcho ⟹"(* display current value *)"⟹
MaybeOption[(Maybe@Map[If[# < 0.4, None, #] &, #] &)]⟹"(* map values that are too small to None *)"⟹
MaybeEcho ⟹
TraceMonadEchoGrid[];
</code></pre>
<p><a href="https://i.stack.imgur.com/AZKB6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AZKB6.png" alt="enter image description here"></a></p>
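<p>For readers unfamiliar with Mathematica, the same idea can be mimicked in a few lines of Python (my own sketch, not the package's API): a value is threaded through a pipeline while (function, comment) pairs are recorded, so a plain-text code/comment table can be printed afterwards.</p>

```python
# A minimal Python analogue of the TraceMonad idea: bind() applies a
# pipeline function and records its name together with an optional comment.
class TraceMonad:
    def __init__(self, value):
        self.value = value
        self.trace = []          # list of (code description, comment) rows

    def bind(self, func, comment=""):
        self.value = func(self.value)
        self.trace.append((func.__name__, comment))
        return self              # returning self makes chaining work

    def echo_grid(self):
        # Print the recorded pipeline as a two-column code/comment table.
        for code, comment in self.trace:
            print(f"{code:<20} (* {comment} *)")
        return self

data = [0.61, 0.48, 0.92, 0.90, 0.32, 0.11]

def keep_large(xs):
    return [x for x in xs if x > 0.3]

def drop_small(xs):
    return [x for x in xs if x >= 0.4]

m = (TraceMonad(data)
     .bind(keep_large, "filter current value")
     .bind(drop_small, "drop values that are too small")
     .echo_grid())
print(m.value)  # [0.61, 0.48, 0.92, 0.9]
```

The printed grid is only a plain-text stand-in for the formatted output of <code>TraceMonadEchoGrid</code>.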
|
148,160 | <p>While writing a response to a <a href="https://mathematica.stackexchange.com/q/147679/34008">certain MSE question</a> I made a function that tabulates code and comments. (See the definition below.) </p>
<p>Here is an example:</p>
<pre><code>code = "
FoldList[(* reduction function *)
Plus,(* function to apply repeatedly *)
0,(* initial value *)
{1,2,3,3,100}(* arguments in repeated computations *)]";
GridOfCodeAndComments[
code,
"GridFunction" -> (Panel@Grid[#, Alignment -> Left] &)]
</code></pre>
<p><a href="https://i.stack.imgur.com/1TLaR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1TLaR.png" alt="enter image description here"></a></p>
<p>I have several problems with the implementation of <code>GridOfCodeAndComments</code>, the main one being that I have to give a string to the function instead of (commented) code. </p>
<p>For example, I would like to be able to write the code to be tabulated directly in the call to <code>GridOfCodeAndComments</code>:</p>
<pre><code>GridOfCodeAndComments[
FoldList[(* reduction function *)
Plus,(* function to apply repeatedly *)
0,(* initial value *)
{1, 2, 3, 3, 100}(* arguments in repeated computations *)],
"GridFunction" -> (Panel@Grid[#, Alignment -> Left] &)]
</code></pre>
<p>How can this be done?
Any suggestions would be appreciated.</p>
<p>Another, minor problem in <code>GridOfCodeAndComments</code> is that the pattern for matching comments, <code>comPat</code>, is somewhat weak. How can it be improved?</p>
<h2>Definition</h2>
<pre><code>ClearAll[GridOfCodeAndComments]
Options[GridOfCodeAndComments] = {"GridFunction" -> (Grid[#, Alignment -> Left] &)};
GridOfCodeAndComments[code_String, opts : OptionsPattern[]] :=
Block[{grData, codeLines, commentLines, comPat, gridFunc},
gridFunc = OptionValue["GridFunction"];
If[TrueQ[gridFunc === Automatic], gridFunc = (Grid[#, Alignment -> Left] &)];
(* Split the code into lines *)
codeLines = StringSplit[code, "\n"];
(* Split each line into a {code, comment} pair *)
comPat = ("(*" ~~ (Except["*"] ..) ~~ "*)");
grData =
Map[
If[StringFreeQ[#, "(*"], {#, ""},
StringCases[#, (x__ ~~ y : (comPat) ~~ z___) :> {x <> z, y}][[1]]
] &, codeLines];
(* Style the code and comments *)
grData[[All, 1]] = Map[Style[#, "Input"] &, grData[[All, 1]]];
grData[[All, 2]] =
Map[Style[#, "CommentStyle" /. Options[$FrontEnd, AutoStyleOptions][[1, 2]]] &, grData[[All, 2]]];
(* Show result *)
gridFunc[grData]
];
</code></pre>
| b3m2a1 | 38,205 | <p>Here's another possibility. Since your problem is fundamentally a problem of the comments being stripped, we can define an invisible wrapper <code>Commented</code> that evaluates away to nothing when operated on, and formats like a comment.</p>
<p>Here's a possible implementation.</p>
<p>First make the formatting right:</p>
<pre><code>Format[Commented[e_, c_]] :=
RawBoxes@
TemplateBox[
{
ToBoxes[Unevaluated@e],
ToBoxes[c]
},
"CommentedCode",
DisplayFunction ->
Function[
RowBox[{#, " ",
TemplateBox[{#2}, "Comment",
DisplayFunction ->
Function[
StyleBox[RowBox[{"(*", #, "*)"}],
ShowStringCharacters -> False]
]
]
}]],
InterpretationFunction ->
Function[RowBox[{"Commented", "[", #, ",", #2, "]"}]]
]
In[15]:= Commented[a, "test symbol"]
Out[15]= Commented[a, "test symbol"]
</code></pre>
<p>But if we look at its format form (i.e. copy it with Shift-Control-C):</p>
<pre><code>a (*test symbol*)
</code></pre>
<p>Then make it invisible to evaluation:</p>
<pre><code>Commented /: (h : Except[Hold | HoldForm])[a___, Commented[expr_, _],
b___] := h[a, expr, b];
1 + Commented[a, "test symbol"]
1 + a
</code></pre>
<p>Then you can define a function that will find all <code>Commented</code> annotations, like so:</p>
<pre><code>extractComments[e_] :=
Cases[HoldComplete[e],
Verbatim[Commented][a_,
b_] :> (HoldComplete[a] -> b), \[Infinity]];
extractComments~SetAttributes~HoldAllComplete
</code></pre>
<p>And here's a chunk of nice formatted code to work from:</p>
<pre><code>chunk = HoldForm@
Commented[
Table[
a~Commented~"Return the int",
{a, 1, 10}
],
"Create a list of ints"
]
Table[a (*Return the int*),{a,1,10}] (*Create a list of ints*)
</code></pre>
<p>Then:</p>
<pre><code>With[{c = chunk}, extractComments[c]]
{HoldComplete[a] -> "Return the int",
HoldComplete[Table[Commented[a, "Return the int"], {a, 1, 10}]] ->
"Create a list of ints"}
</code></pre>
<p>You can start to work with (an adapted form of) this data structure now, potentially more easily than before</p>
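<p>As a rough cross-language analogue (my own sketch, using hypothetical names), the same idea, an annotation wrapper that is invisible to computation but recoverable afterwards, looks like this in Python:</p>

```python
# A wrapper that carries a comment alongside a value.  Computation goes
# through unwrap(), so annotations never change a result; the comments
# can still be harvested afterwards with extract_comments().
class Commented:
    def __init__(self, value, comment):
        self.value = value
        self.comment = comment

    def __repr__(self):
        return f"{self.value!r}  # {self.comment}"

def unwrap(x):
    return x.value if isinstance(x, Commented) else x

def extract_comments(expr):
    """Walk nested lists/tuples and collect (value, comment) pairs."""
    found = []
    if isinstance(expr, Commented):
        found.append((expr.value, expr.comment))
        found.extend(extract_comments(expr.value))
    elif isinstance(expr, (list, tuple)):
        for item in expr:
            found.extend(extract_comments(item))
    return found

chunk = [Commented(a, "return the int") for a in range(1, 4)]
print(extract_comments(chunk))
print(sum(unwrap(x) for x in chunk))  # 6
```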
|
89,845 | <p>First, I think we can avoid set theory to build first-order logic, by operating on finite strings. But I have the following questions:</p>
<p>How does "meta-logic" work. I don't really know this stuff yet, but from what I can see right now, meta-logic proves things about formal languages and logics in general. But does it use some logic to do so? Like if I want to prove that two formal languages are equivalent in some respect, aren't I presupposing a "background" formal language? And won't my choice of a "background" (meta) language affect what I can and can't demonstrate? For example, what logic was Godel using when he proved his famous theorems? Was it a bivalent one? A three valued logic? etc</p>
<p>In short,I'm still not sure how reasoning about all possible formal languages work. For example, suppose I say something of the form "for all formal theories, F, if F has property X, then F must have property Y". If I wanted to prove something like that, how does such very general reasoning work? What I mean is that in such a proof, what kind of logic would be employed (for example, would it be a two valued logic?), and does the choice of logic affect the outcome? Do logicians agree on some kind of meta-meta logic, which they use to reason about absolutely everything? Or do they just choose their favorite one?</p>
<p>if metalogic is just predicate logic,It seems circular to me! we build the theory of predicate logic by using predicate logic?For example, in proving some theorem in the object language we seem to assume that it is already correct (in the metalanguage). Or defining some connective in the object language, we use that connective in the metalanguage to do so. It's like they're saying "Alright guys! We are going to prove a bunch of stuff about logic! Oh, by the way, you have to take all this stuff we are about to prove for granted, but don't worry, that's just the "metalanguage"." Something about this seems wrong to me. Maybe I have misunderstood?</p>
| Mauro ALLEGRANZA | 42,676 | <p>The issue of language/metalanguage and logic/metalogic seems easy to grasp after a careful study of modern mathematical logic textbooks (like Shoenfield's), but it can have (for me) interesting "philosophical" aspects.</p>
<p>Please take a look at Russell & Whitehead's introduction to their monumental book Principia Mathematica (written 100 years ago): it is a masterpiece of modern mathematics ... but they have the same difficulty with object/meta-logic (see their "explanation" of Modus Ponens).</p>
<p>Logic (and mathematics?) are "language games": so we need language (as a tool) to speak about language (a human activity, i.e. an object of the world, to be studied), and we need logic (as a tool) to reason about logic (a mathematical object).</p>
<p>The issue (and I think it is a big one) comes from the "foundational" aspect that logic receives in the framework of scientific activities. If "foundational" means: start from scratch and build every layer on top of the preceding one ... I think that the last century's philosophical debates show us that it is NOT possible to start from scratch at all.</p>
|
89,845 | <p>First, I think we can avoid set theory to build first-order logic, by operating on finite strings. But I have the following questions:</p>
<p>How does "meta-logic" work. I don't really know this stuff yet, but from what I can see right now, meta-logic proves things about formal languages and logics in general. But does it use some logic to do so? Like if I want to prove that two formal languages are equivalent in some respect, aren't I presupposing a "background" formal language? And won't my choice of a "background" (meta) language affect what I can and can't demonstrate? For example, what logic was Godel using when he proved his famous theorems? Was it a bivalent one? A three valued logic? etc</p>
<p>In short,I'm still not sure how reasoning about all possible formal languages work. For example, suppose I say something of the form "for all formal theories, F, if F has property X, then F must have property Y". If I wanted to prove something like that, how does such very general reasoning work? What I mean is that in such a proof, what kind of logic would be employed (for example, would it be a two valued logic?), and does the choice of logic affect the outcome? Do logicians agree on some kind of meta-meta logic, which they use to reason about absolutely everything? Or do they just choose their favorite one?</p>
<p>if metalogic is just predicate logic,It seems circular to me! we build the theory of predicate logic by using predicate logic?For example, in proving some theorem in the object language we seem to assume that it is already correct (in the metalanguage). Or defining some connective in the object language, we use that connective in the metalanguage to do so. It's like they're saying "Alright guys! We are going to prove a bunch of stuff about logic! Oh, by the way, you have to take all this stuff we are about to prove for granted, but don't worry, that's just the "metalanguage"." Something about this seems wrong to me. Maybe I have misunderstood?</p>
| Noam Zeilberger | 1,015 | <blockquote>
<p>If I want to prove that two formal languages are equivalent in some respect, aren't I presupposing a "background" formal language? </p>
</blockquote>
<p>Yes -- but the distinction between object language and meta language can be studied carefully. This is an important part of proof theory, as well as (in a more modern context) of the theory of programming languages. Let me quote from Olivier Danvy's entry on "Self-interpreter" in the appendix to Jean-Yves Girard's <a href="http://iml.univ-mrs.fr/~girard/0.pdf" rel="noreferrer">Locus Solum</a>:</p>
<blockquote>
<p>Overall, a computer system is constructed inductively as a (finite) tower of interpreters, from the micro-code all the way up to the graphical user interface. Compilers and partial evaluators were invented to collapse interpretive levels because too many levels make a computer system impracticably slow. The concept of meta levels therefore is forced on computer scientists: I cannot make my program work, but maybe the bug is in the compiler? Or is it in the compiler that compiled the compiler? Maybe the misbehaviour is due to a system upgrade? Do we need to reboot? and so on. Most of the time, this kind of conceptual regression is daunting even though it is rooted in the history of the system at hand, and thus necessarily finite.</p>
</blockquote>
<p>In this view (in contrast to the views expressed in some of the other answers), the meta language does not have any special status: it is distinguished from the object language by its <em>role</em> rather than by its character. In particular, a meta language $L_1$, used to interpret some object language $L_2$, may itself be the object language of some interpretation in $L_0$.</p>
<p>On the other hand, often both mathematicians and computer scientists are interested in keeping the meta language as minimalistic as possible, <em>simpler</em> than the object language in some sense. A paradigmatic example is <a href="https://en.wikipedia.org/wiki/Gentzen%27s_consistency_proof" rel="noreferrer">Gentzen's cut-elimination argument</a>, which proves the consistency of first-order Peano arithmetic (PA). If the meta language for this proof is taken to be ZFC, then the result seems vacuous, since ZFC already includes PA (indeed is much more complicated than PA). However, in fact only a very small logical fragment of ZFC is needed to formalize the proof, namely PRA + $\epsilon_0$ (primitive recursive arithmetic plus transfinite induction up to $\epsilon_0$). Although PRA + $\epsilon_0$ is not included in PA (so that Gentzen's theorem does not contradict Gödel's), neither is PA included in PRA + $\epsilon_0$, since the latter only allows (transfinite) induction on <em>quantifier-free</em> statements. This is what prevents Gentzen's argument from being "circular", and the sense in which it reduces a statement about the object language to a "simpler" meta language.</p>
|
293,921 | <p>The problem I am working on is:</p>
<p>An ATM personal identification number (PIN) consists of four digits, each a 0, 1, 2, . . . 8, or 9, in succession.</p>
<p>a. How many different possible PINs are there if there are no restrictions on the choice of digits?</p>
<p>b. According to a representative at the author’s local branch
of Chase Bank, there are in fact restrictions on the choice
of digits. The following choices are prohibited: (i) all four
digits identical (ii) sequences of consecutive ascending or
descending digits, such as 6543 (iii) any sequence starting with 19 (birth years are too easy to guess). So if one of the PINs in (a) is randomly selected, what is the probability that it will be a legitimate PIN (that is, not be one of the prohibited sequences)?</p>
<p>c. Someone has stolen an ATM card and knows that the first
and last digits of the PIN are 8 and 1, respectively. He has
three tries before the card is retained by the ATM (but
does not realize that). So he randomly selects the $2^{nd}$ and $3^{rd}$
digits for the first try, then randomly selects a different pair of digits for the second try, and yet another randomly selected pair of digits for the third try (the
individual knows about the restrictions described in (b)
so selects only from the legitimate possibilities). What is
the probability that the individual gains access to the
account?</p>
<p>d. Recalculate the probability in (c) if the first and last digits are 1 and 1, respectively. </p>
<h2>---------------------------------------------</h2>
<p>For part a): The total number of pins without restrictions is $10,000$</p>
<p>For part b): The number of pins in either ascending or descending order is $10 \cdot 1 \cdot 1 \cdot 1$, because once the first digit is known, then the three other spots containing digits are already spoken for. The number of pins where each slot contains the same digit is $10 \cdot 1 \cdot 1 \cdot 1$, because once the first digit is known there is only one option left to the rest of the slots. The number of pins that have their first and second slot occupied by 1 and 9, respectively, is $1 \cdot 1 \cdot 10 \cdot 10$. So, if R is the set that contains these restricted pins, then $|R| = 130$; and if N is the set that contains the non-restricted ones, meaning R and N are complementary sets, then $|N| = 10,000 - 130$. <strong>Hence, the probability is then $P(N) = 9870/10000 = 0.9870.$ However, the answer is $0.9876$. What did I do wrong?</strong></p>
<p>For part c): The sample space, containing all of the outcomes of the experiment that will take place, is $|N|=9870$. When it says that the thief won't use the same pair of digits in each try, does that not allow him trying the pin 8 <strong>5 2</strong> 1 in one try and the pin 8 <strong>2 5</strong> 1 in another try?</p>
| Ben | 93,875 | <p>Just my two cents, </p>
<p>10^4 possibilities. </p>
<p>There are 14 ascending and descending groups of 4 (7 each way), plus 10 all-identical PINs and 100 PINs starting with 19.</p>
<p>Keyspace 10,000 - 124 = 9,876.</p>
<p>If the bad guy knows two of the four digits, he only has to guess among 10^2 possibilities.
(None of the restrictions meet up with the range 8xx1)</p>
<p>3 tries in 100, i.e. a probability of 3/100.</p>
<p>:)</p>
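<p>A brute-force check of these counts (my own addition, assuming the three restrictions exactly as stated in part (b)):</p>

```python
# Enumerate all 10,000 four-digit PINs and apply the three restrictions:
# (i) all digits identical, (ii) four consecutive ascending or descending
# digits, (iii) starting with "19".
def prohibited(pin):  # pin is a 4-character digit string
    d = [int(c) for c in pin]
    all_same = len(set(d)) == 1
    ascending = all(d[i + 1] - d[i] == 1 for i in range(3))
    descending = all(d[i] - d[i + 1] == 1 for i in range(3))
    starts_19 = pin.startswith("19")
    return all_same or ascending or descending or starts_19

pins = [f"{i:04d}" for i in range(10000)]
legit = [p for p in pins if not prohibited(p)]
print(len(legit))              # 9876
print(len(legit) / len(pins))  # 0.9876
```

The prohibited set has 10 + 7 + 7 + 100 = 124 PINs (no overlaps), which matches the keyspace of 9,876 above.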
|
293,921 | <p>The problem I am working on is:</p>
<p>An ATM personal identification number (PIN) consists of four digits, each a 0, 1, 2, . . . 8, or 9, in succession.</p>
<p>a. How many different possible PINs are there if there are no restrictions on the choice of digits?</p>
<p>b. According to a representative at the author’s local branch
of Chase Bank, there are in fact restrictions on the choice
of digits. The following choices are prohibited: (i) all four
digits identical (ii) sequences of consecutive ascending or
descending digits, such as 6543 (iii) any sequence starting with 19 (birth years are too easy to guess). So if one of the PINs in (a) is randomly selected, what is the probability that it will be a legitimate PIN (that is, not be one of the prohibited sequences)?</p>
<p>c. Someone has stolen an ATM card and knows that the first
and last digits of the PIN are 8 and 1, respectively. He has
three tries before the card is retained by the ATM (but
does not realize that). So he randomly selects the $2^{nd}$ and $3^{rd}$
digits for the first try, then randomly selects a different pair of digits for the second try, and yet another randomly selected pair of digits for the third try (the
individual knows about the restrictions described in (b)
so selects only from the legitimate possibilities). What is
the probability that the individual gains access to the
account?</p>
<p>d. Recalculate the probability in (c) if the first and last digits are 1 and 1, respectively. </p>
<h2>---------------------------------------------</h2>
<p>For part a): The total number of pins without restrictions is $10,000$</p>
<p>For part b): The number of pins in either ascending or descending order is $10 \cdot 1 \cdot 1 \cdot 1$, because once the first digit is known, then the three other spots containing digits are already spoken for. The number of pins where each slot contains the same digit is $10 \cdot 1 \cdot 1 \cdot 1$, because once the first digit is known there is only one option left to the rest of the slots. The number of pins that have their first and second slot occupied by 1 and 9, respectively, is $1 \cdot 1 \cdot 10 \cdot 10$. So, if R is the set that contains these restricted pins, then $|R| = 130$; and if N is the set that contains the non-restricted ones, meaning R and N are complementary sets, then $|N| = 10,000 - 130$. <strong>Hence, the probability is then $P(N) = 9870/10000 = 0.9870.$ However, the answer is $0.9876$. What did I do wrong?</strong></p>
<p>For part c): The sample space, containing all of the outcomes of the experiment that will take place, is $|N|=9870$. When it says that the thief won't use the same pair of digits in each try, does that not allow him trying the pin 8 <strong>5 2</strong> 1 in one try and the pin 8 <strong>2 5</strong> 1 in another try?</p>
| Mr.Young | 140,361 | <p>For (c) he has 3 tries, and there are a total of 100 choices. There are 10 choices for the 2nd number and 10 choices for the 3rd number. Since the PIN begins with an 8 and ends with a 1, none of the restrictions apply.</p>
<ul>
<li>For his first try, there is a <span class="math-container">$1/100 = 0.0100$</span> probability of guessing the correct PIN.</li>
<li>For his second try, he must fail the first try and then hit one of the <span class="math-container">$99$</span> remaining pairs: <span class="math-container">$(99/100)(1/99) = 1/100$</span>.</li>
<li>For his third try, likewise <span class="math-container">$(99/100)(98/99)(1/98) = 1/100$</span>.</li>
</ul>
<p>So the total probability is <span class="math-container">$3/100 = 0.0300$</span>. (Summing the conditional probabilities <span class="math-container">$1/100 + 1/99 + 1/98$</span> would overcount, since each later term must be weighted by the chance of reaching that try.)</p>
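<p>One way to cross-check part (c) with exact rational arithmetic (my own addition):</p>

```python
from fractions import Fraction

# Probability of hitting the right middle pair within three distinct
# guesses out of the 100 equally likely legitimate possibilities.
n = 100
p = (
    Fraction(1, n)                                           # hit on try 1
    + Fraction(n - 1, n) * Fraction(1, n - 1)                # miss, then hit
    + Fraction(n - 1, n) * Fraction(n - 2, n - 1) * Fraction(1, n - 2)
)
print(p)         # 3/100
print(float(p))  # 0.03
```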
|
2,519,623 | <p>How do I calculate the side B of the triangle if I know the following:</p>
<p>Side $A = 15~\mathrm{cm}$; $\beta = 12^{\circ}$; $\gamma = 90^{\circ}$; $\alpha = 78^{\circ}$.</p>
<p>Thank you.</p>
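<p>One way to see it (my own addition): since $\gamma = 90^{\circ}$, the triangle is right-angled, so side $B$ follows from either the tangent or the law of sines:</p>

```latex
B = A\tan\beta = 15\,\tan 12^{\circ} \approx 3.19\ \mathrm{cm},
\qquad
\frac{B}{\sin\beta} = \frac{A}{\sin\alpha}
\;\Longrightarrow\;
B = \frac{15\,\sin 12^{\circ}}{\sin 78^{\circ}} \approx 3.19\ \mathrm{cm}.
```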
| Kyky | 423,726 | <p>We need to find the probability of not getting a ball numbered $17$ or higher first. In the urn, there are $4$ balls that are equal or larger than $17$ ($17,18,19,20$). Since there are $20$ balls, there are $20-4=16$ balls below $17$ in the urn. That means there is a $\frac{16}{20}$ chance that the first ball is less than $17$, $\frac{16-1}{20-1}$ chance that the second ball is less than $17$ provided the first ball is below $17$, and $\frac{16-2}{20-2}$ chance that the third ball is below $17$ provided the first two are below $17$. This gives us:
$$\frac{16}{20}\cdot\frac{16-1}{20-1}\cdot\frac{16-2}{20-2}$$
$$=\frac{16}{20}\cdot\frac{15}{19}\cdot\frac{14}{18}$$
$$=\frac{3360}{6840}$$
$$=\frac{28}{57}$$
Now, $1-\frac{28}{57}=\frac{29}{57}$, which is roughly $0.50877$.</p>
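<p>A quick cross-check of this arithmetic with exact fractions (my own addition):</p>

```python
from fractions import Fraction

# Draw three balls without replacement from 20; 16 of them are below 17.
p_none_high = Fraction(16, 20) * Fraction(15, 19) * Fraction(14, 18)
p_at_least_one = 1 - p_none_high
print(p_none_high)            # 28/57
print(p_at_least_one)         # 29/57
print(float(p_at_least_one))  # ~0.50877
```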
|
1,437,979 | <p>The given equation is $\dfrac{d^2y}{dx^2}+y=f(x)$.
I know that the C.F. is $a\sin{x}+b\cos{x}$, but I am stuck on the P.I. For non-homogeneous equations, the theorems stating the methods to find P.I.s are not helpful for me in this case. The answer given is $y(x)=a\sin{x}+b\cos{x}+\int_{0}^{x}f(t)\sin{(x-t)}\,dt$. How can I do this?</p>
| mickep | 97,236 | <p>They have probably used the following general result (which I state without giving any conditions):</p>
<blockquote>
<p><strong>Theorem</strong> If $Y$ is a solution to the homogeneous differential equation
$$
y''(x)+y(x)=0
$$
with conditions $y(0)=0$ and $y'(0)=1$. Then the function
$$
u(x)=\int_0^x Y(x-t)f(t)\,dt
$$
is a solution to
$$
y''(x)+y(x)=f(x).
$$
Moreover, it satisfies $u(0)=0$ and $u'(0)=0$.</p>
</blockquote>
<p>This is typically shown by differentiating under the integral sign, and is a good exercise.</p>
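<p>(A sketch of that exercise, added for completeness: differentiating under the integral sign and using $Y(0)=0$, $Y'(0)=1$, and $Y''=-Y$,</p>

```latex
u'(x)  = Y(0)\,f(x) + \int_0^x Y'(x-t)\,f(t)\,dt
       = \int_0^x Y'(x-t)\,f(t)\,dt,
\\[4pt]
u''(x) = Y'(0)\,f(x) + \int_0^x Y''(x-t)\,f(t)\,dt
       = f(x) - \int_0^x Y(x-t)\,f(t)\,dt
       = f(x) - u(x),
```

<p>so $u''+u=f$; both integrals vanish at $x=0$, giving $u(0)=u'(0)=0$.)</p>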
<p>In your case $Y(x)=\sin x$ satisfies $Y''+Y=0$ and $Y(0)=0$ and $Y'(0)=1$.</p>
<p><strong>A final comment</strong>: The theorem above is more general. It holds for general linear differential equations, and not only those with $y''+y$ in the left-hand side.</p>
|
4,527,880 | <p>Suppose</p>
<p><span class="math-container">$$ R = \begin{bmatrix} A & B\\ C & D\end{bmatrix} $$</span></p>
<p>is a <span class="math-container">$2 \times 2$</span> block matrix of real numbers, where <span class="math-container">$A$</span> and <span class="math-container">$D$</span> are square diagonal matrices.</p>
<p>Is it possible that the following four conditions hold simultaneously?</p>
<ol>
<li><p><span class="math-container">$R$</span> is invertible</p>
</li>
<li><p><span class="math-container">$D$</span> is nonsingular</p>
</li>
<li><p>the Schur complement of <span class="math-container">$D$</span>, <span class="math-container">$A-BD^{-1}C$</span> is singular.</p>
</li>
<li><p><span class="math-container">$A$</span> is singular.</p>
</li>
</ol>
<p>If so, could you please provide a way to find the inverse <span class="math-container">$R$</span> in terms of the partitions of <span class="math-container">$R$</span>?</p>
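<p>For context (my own addition): when <span class="math-container">$D$</span> is nonsingular, the standard determinant identity <span class="math-container">$\det R = \det D \cdot \det(A - BD^{-1}C)$</span> ties conditions 1, 2 and 3 together. A quick numeric sanity check with <span class="math-container">$1 \times 1$</span> blocks:</p>

```python
# Scalar-block example of det(R) = det(D) * det(A - B D^{-1} C),
# valid whenever D is nonsingular.
a, b, c, d = 2.0, 3.0, 4.0, 5.0   # 1x1 blocks A, B, C, D

det_R = a * d - b * c             # determinant of the 2x2 matrix R
schur = a - b * (1.0 / d) * c     # Schur complement of D
print(abs(det_R - d * schur) < 1e-12)  # True
```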
| aitzkora | 860,148 | <p>The previous answer is of course correct and gives a nice solution to your question. However, there are some problems in the question. What is THE orthonormal basis of <span class="math-container">$\mathbb{R}^d$</span>? There are many orthonormal bases in <span class="math-container">$\mathbb{R}^d$</span>, but I could assume you mean THE canonical basis of <span class="math-container">$\mathbb{R}^d$</span>, which is also orthonormal for the classical dot product <span class="math-container">$x \cdot y = \sum_{i=1}^d x_i y_i $</span>? If that is the case, maybe your linear map L associates to <span class="math-container">$v^i$</span> the vector <span class="math-container">$e^i$</span> defined by <span class="math-container">$e^i_j = \delta_{ij}$</span>? Choosing for instance the Euclidean <span class="math-container">$\|\cdot\|_2$</span> norm for vectors, then <span class="math-container">$\|Lx\|^2_2 = \sum_{i=1}^d (Lx)_i^2$</span> but
<span class="math-container">$$
Lx = L \left( \sum_{k=1}^d x_k v^k \right) = \sum_{k=1}^d x_k (L v^k) = \sum_{k=1}^d x_k e^k
$$</span>
But <span class="math-container">$\{e^k\}_{k=1}^d$</span> forms an orthogonal basis of <span class="math-container">$\mathbb{R}^d$</span> (THE canonical basis), thus
<span class="math-container">$ (Lx)_i = Lx \cdot e^i = \sum_{k=1}^d x_k e^k \cdot e^i = x_i $</span> since <span class="math-container">$e^i \cdot e^k = \delta_{ij} $</span>
Therefore, <span class="math-container">$(Lx)^2 = \sum_{i=1}^d x_i^2$</span> and you can majorate trivially the operator norm by one.</p>
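<p>A small numerical companion to the computation above (NumPy assumed): build a random orthonormal basis <span class="math-container">$\{v^i\}$</span> as the columns of a matrix <span class="math-container">$Q$</span>, let <span class="math-container">$L$</span> send <span class="math-container">$v^i$</span> to <span class="math-container">$e^i$</span> (so the matrix of <span class="math-container">$L$</span> is <span class="math-container">$Q^T$</span>), and check that the norm is preserved.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
# Columns of Q form a random orthonormal basis {v^1, ..., v^d}.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
# L sends v^i to the canonical vector e^i, i.e. (Lx)_i = v^i . x,
# so the matrix of L in the canonical basis is Q^T (orthogonal).
L = Q.T

x = rng.normal(size=d)
norm_Lx = np.linalg.norm(L @ x)
norm_x = np.linalg.norm(x)
```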
|
67,124 | <p>Let $S$ be a "rich enough" theory such as Peano arithmetic or ZFC ; assume that we have
a complete formalization of the theory of $S$ so that we may talk about Godel numbers and
the length of a proof.</p>
<p>Godel's sentence is constructed so that it says "I am not provable from S". Now let $n$
be a fixed integer, and consider a sentence $\phi_n$ formed likewise that says "I am not provable in at most $n$ steps from S". Then $\phi_n$ is a true statement, and if $\phi_n$
has a proof from $S$ this proof has length at least $n$.
What is not clear is whether $\phi_n$ is provable from $S$. Does the answer to that question depend on the formalization we initially choose ?</p>
| Jakub Konieczny | 10,674 | <p>I suppose that $\phi_n$ is always provable, unless I am making some basic mistake. Given a string of length at most $n$, you can determine if it happens to be a proof for $\phi_n$. Now, just brute-search <em>all</em> possible strings of length at most $n$ to see that none of them is a proof of $\phi_n$ (none can be, since $\phi_n$ is true), and you have proved $\phi_n$.</p>
|
67,124 | <p>Let $S$ be a "rich enough" theory such as Peano arithmetic or ZFC ; assume that we have
a complete formalization of the theory of $S$ so that we may talk about Godel numbers and
the length of a proof.</p>
<p>Godel's sentence is constructed so that it says "I am not provable from S". Now let $n$
be a fixed integer, and consider a sentence $\phi_n$ formed likewise that says "I am not provable in at most $n$ steps from S". Then $\phi_n$ is a true statement, and if $\phi_n$
has a proof from $S$ this proof has length at least $n$.
What is not clear is whether $\phi_n$ is provable from $S$. Does the answer to that question depend on the formalization we initially choose ?</p>
| hmakholm left over Monica | 14,366 | <p>Every true statement of the form "Such-and-such $\phi$ is not provable by a proof that contains at most $n$ symbols" is in fact provable -- the proof can consist of simply listing all strings of $n$ symbols or less and noting that none of them is a valid proof of $\phi$.</p>
<p>I am fairly sure that the same holds for "at most $n$ steps" instead of "at most $n$ symbols", in any reasonable deductive system (even though the most straightforward way to represent a single "step" can often be arbitrarily long). But I can't think of a simple generic argument for this right away.</p>
|
2,353,193 | <p>I've recently been learning some homological algebra, mainly out of Northcott and some other sources, and I'm having trouble with the notion of projective dimension. In particular, I have a question (not from Northcott) that says</p>
<blockquote>
<p>Let $R = k[x,y]$ for a field $k$ and $M$ a finitely generated $R$-module. Then $M$ has projective dimension $2$ if and only if $\text{Hom}_R(k,M) \neq 0$, where we consider $k$ as an $R$-module with the ideal $\mathfrak m=(x,y)$ acting as $0$ on $k$ (i.e. $k = R/\mathfrak m$).</p>
</blockquote>
<p>I have attempted the problem but I don't see any way of linking the notion of projective dimension to the Hom-set. What I have so far:</p>
<p>We have a projective resolution $$0\rightarrow P_2\rightarrow P_1\rightarrow P_0\rightarrow M\rightarrow 0$$ of $M$ and so we have a long exact sequence $$0\rightarrow \text{Hom}(k,P_2)\rightarrow \text{Hom}(k,P_1)\rightarrow \text{Hom}(k,P_0)\rightarrow \text{Hom}(k,M)\rightarrow \text{Ext}^1(k,P_2)\rightarrow \dots$$</p>
<p>We also have the exact sequence $$0\rightarrow \mathfrak m\rightarrow R\rightarrow k\rightarrow 0$$ which gives rise to the long exact sequence $$0\rightarrow \text{Hom}(k,M)\rightarrow \text{Hom}(R,M)\rightarrow \text{Hom}(\mathfrak m, M)\rightarrow \text{Ext}^1(k,M)\rightarrow \text{Ext}^1(R,M)\dots$$</p>
<p>Of these two long exact sequences I think the second one is more useful because we don't know anything about the $P_i$'s from the first one. Also $\text{Ext}^1(R,M) = 0$ since $R$ is projective, so we have an exact sequence with just four nonzero terms if we ignore everything past that.</p>
<p>However I have no idea how to include the projective resolution of $M$ which I imagine is necessary since the projective dimension of $M$ is a hypothesis. Also not sure how to use the finitely generated assumption.</p>
<p>So, I'd like a hint or two to proving this particular claim, and also if possible some general tips on proving things about projective dimension and using long exact sequences in general.</p>
| MooS | 211,913 | <p>One direction is <strong>false</strong>. Let me investigate this problem:</p>
<p>Over $k[x,y]$, free and projective are the same for finitely generated modules by Quillen-Suslin.</p>
<p>We have an exact sequence $$0 \to C \to F \to M \to 0,$$ where $F$ is free and $C$ is free if and only if the projective dimension of $M$ is strictly smaller than two.</p>
<p>Note that $F$ has $\mathfrak m$-depth two because it is free, hence $\operatorname{Hom}(k,F) = \operatorname{Ext}^1(k,F)=0$. The long exact sequence yields $\operatorname{Hom}(k,M) \cong \operatorname{Ext}^1(k,C)$.</p>
<p>If the projective dimension of $M$ is $<2$, then $C$ is free and the RHS is zero. This direction works fine.</p>
<p>If the projective dimension of $M$ is $2$, then $C$ is not free (but it is torsion-free as a submodule of $F$). So we would have to show that this implies $\operatorname{Ext}^1(k,C) \neq 0$. This is due to Auslander-Buchsbaum in the local case but in our non-local case this is false.</p>
<p>Choose $C$, s.t. $C$ is free at $\mathfrak m$ but not free at another maximal ideal of $R$. For instance let $C$ be the maximal ideal $\mathfrak n=(x,y-1)$. Then $C_\mathfrak m = R_\mathfrak m$ is free , but $C$ is not free. We can compute</p>
<p>$$\operatorname{Ext}^1_R(R/\mathfrak m,C)_\mathfrak m = \operatorname{Ext}^1_{R_\mathfrak m}(R/\mathfrak m,C_\mathfrak m) = 0,$$
since $C_\mathfrak m$ is free.
And for all other maximal ideals we have $(R/\mathfrak m)_{\mathfrak q} = 0$, so clearly $\operatorname{Ext}^1_R(R/\mathfrak m,C)_\mathfrak q =0$. Thus $\operatorname{Ext}^1_R(R/\mathfrak m,C)$ is zero, because it is locally zero.</p>
<p>This gives us a counterexample for $M$, namely $M = R/(x,y-1)$. It has projective dimension two (It cannot have projective dimension one, because its localization at $\mathfrak n$ has projective dimension two), but $\operatorname{Hom}_R(R/\mathfrak m,M)=0$.</p>
|
150,472 | <p>Let $h\in C_0([a,b])$ be arbitrary, that is, $h$ is continuous and vanishes on the boundary.
I want to show that
$\int\limits_a^b h(x)\sin(nx)dx \rightarrow 0$.</p>
<p>If $h\in C^1$, integration by parts immediately yields the claim, since $h'$ is continuous and thence bounded on the compact interval, using also the zero boundary condition.</p>
<p>However, I believe the statement is also true for all $h\in C_0([a,b])$. My idea is to approximate $h$ by functions $h_m \in C_0^1([a,b])$. Then for all $m$,</p>
<p>$$\begin{equation*}
\lim_{n \to \infty} \int h_m(x) \sin(nx) dx = 0.
\end{equation*}$$</p>
<p>$$\begin{align*}
\Rightarrow ~~~ \lim_{n \to \infty} \int h(x)\sin(nx) dx &= \lim_{n \to \infty} \int \lim_{m \to \infty} h_m(x)\sin(nx) dx\\ &= \lim_{m \to \infty}(\lim_{n \to \infty} \int h_m(x)\sin(nx) dx)\\ &= \lim 0 = 0.
\end{align*}$$</p>
<p>This is fine iff the second equality is. In fact, this is two different steps, as three limiting processes are involved. Hence the questions:</p>
<p>First, can I make sure that I can interchange the $m$-limit with the integral sign? (Can I assume that $h_m$ converges uniformly? Or use some sort of Dominated Convergence Theorem?)</p>
<p>And second, may I swap the $n$-limit for the $m$-limit? (The $n$-limit is in fact $C/n \to 0$)</p>
<p>I hope it's not too messy. Many thanks for any kind of help!</p>
| Davide Giraudo | 9,849 | <p>We can apply Stone-Weierstrass: polynomial are dense in $C_0([a,b])$ endowed with the supremum norm. We can also choose such a sequence vanishing at the boundary. Indeed, if $\{P_n\}$ is a sequence of polynomial converging uniformly to $h$, then $Q_n(x)=P_n(x)-P_n(a)-\frac{x-a}{b-a}(P_n(b)-P_n(a))$, we have $Q_n(a)=0=Q_n(b)$ and
$$\sup_{a\leq x\leq b}|Q_n(x)-h(x)|\leq \sup_{a\leq x\leq b}|P_n(x)-h(x)|+2|P_n(a)|+|P_n(b)|.
$$
Now, fix $\{h_m\}$ a sequence of polynomials such that $\sup_{a\leq x\leq b}|h_m(x)-h(x)|\leq \frac 1m$ and each $h_m$ vanishes at the boundary. We have for a fixed $m$ that
\begin{align}\left|\int_a^bh(x)\sin(nx)dx\right|&\leq \int_a^b|h(x)-h_m(x)||\sin(nx)|dx+
\left|\int_a^bh_m(x)\sin(nx)dx\right|\\
&\leq \frac{b-a}m+\left|\int_a^bh_m(x)\sin(nx)dx\right|
\end{align}
hence by the $C^1$ case, for each $m$
$$\limsup_{n\to +\infty}\left|\int_a^bh
(x)\sin(nx)dx\right|\leq \frac{b-a}m$$
and we can conclude. </p>
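<p>The conclusion can also be sanity-checked numerically (a sketch in Python; the particular $h$ below, which is $C_0$ but not $C^1$ at the endpoints, is my choice of example):</p>

```python
import numpy as np

# h(x) = sqrt(x(1-x)) on [0, 1]: continuous, vanishes at the boundary,
# but its derivative blows up at the endpoints, so the C^1 integration-by-
# parts argument does not apply directly.
x = np.linspace(0.0, 1.0, 400_001)
h = np.sqrt(x * (1.0 - x))

def osc_integral(n):
    # plain trapezoidal rule for the integral of h(x) sin(nx) over [0, 1]
    y = h * np.sin(n * x)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

values = [abs(osc_integral(n)) for n in (2, 50, 400)]
```

<p>The computed values decay towards $0$ as $n$ grows, as the lemma predicts.</p>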
|
3,746,597 | <p>My assumption would be</p>
<p><span class="math-container">$$\int_{-a}^a x\ dx=0$$</span></p>
<p>Am I on the right track here? Also, for indefinite integrals</p>
<p><span class="math-container">$$\int (f)x\ dx$$</span></p>
<p>would this be correct as well?</p>
<p><strong>Background</strong></p>
<p>My professor raised this question in his lecture and I provided the following</p>
<p><span class="math-container">\begin{align}\int_{-a}^{a}\left(x^3\right)dx&= 0\end{align}</span></p>
<p>and</p>
<p><span class="math-container">\begin{align}\int_{-a}^{a}\left(x^7\right)dx&= 0\end{align}</span></p>
<p>to support that odd degrees will always equal to zero. The professor stated my evaluations were correct, however, I couldn't use the fact that it works for two positive odd exponents to deduce conclusively that the result will hold for all positive odd exponents. Thus, my assumption is that</p>
<p><span class="math-container">$$\int_{-a}^a x\ dx=0$$</span></p>
<p>covers all non-negative integers <span class="math-container">$n$</span> simultaneously. Any help in this would be appreciated!</p>
| QC_QAOA | 364,346 | <p>We have</p>
<p><span class="math-container">$$\int_{-a}^ax^{2n+1}dx=\frac{1}{2n+2}x^{2n+2}\bigg\vert_{-a}^a=\frac{1}{2n+2}\left(a^{2n+2}-(-a)^{2n+2}\right)=\frac{a^{2n+2}}{2n+2}\left(1-(-1)^{2n+2}\right)$$</span></p>
<p>But <span class="math-container">$2n+2$</span> is always even. This implies <span class="math-container">$(-1)^{2n+2}=1$</span> which gives us</p>
<p><span class="math-container">$$\frac{a^{2n+2}}{2n+2}\left(1-(-1)^{2n+2}\right)=\frac{a^{2n+2}}{2n+2}\left(1-1\right)=0$$</span></p>
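<p>A quick exact-arithmetic check of this computation (a sketch in Python, using <code>fractions</code> so that no rounding is involved):</p>

```python
from fractions import Fraction

def odd_power_integral(n, a):
    # Integral of x^(2n+1) over [-a, a], evaluated from the antiderivative
    # x^(2n+2) / (2n+2), exactly as in the computation above.
    antideriv = lambda t: Fraction(t) ** (2 * n + 2) / (2 * n + 2)
    return antideriv(a) - antideriv(-a)
```

<p>Every odd power integrates to exactly zero over a symmetric interval, since the even power $2n+2$ kills the sign of $-a$.</p>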
|
1,420,277 | <p>I have to solve this:</p>
<p>$$[(\nabla \times \nabla)\cdot \nabla](x^2 + y^2 + z^2)$$</p>
<p>But I am really drowning in the sand..</p>
<p>Can anybody help me please?</p>
| gt6989b | 16,192 | <p><strong>HINT</strong></p>
<p>Let $E$ denote the number of voters from Estrada and $A$ - from Arrayo. Then, the total is $A+E = 8600$. Translate the second sentence into an equation and solve them together.</p>
|
4,126,470 | <p><span class="math-container">$$\sum_{n=2}^{\infty} \frac{1}{n\left(\left(\ln\left(n\right)\right)^3+\ln\left(n\right)\right)}$$</span></p>
<p>I know that there are several methods of finding the convergence of a series. The ratio test, the comparison test, the limit comparison test. There is also this theorem: If a series <span class="math-container">$\sum_{n=1}^{\infty}a_n$</span> of real numbers converges then <span class="math-container">$\lim_{n \to \infty}a_n = 0$</span>.</p>
<p><strong>So can I everytime just apply this theorem instead of using all the tests?</strong> For example in here,</p>
<p><span class="math-container">$\lim _{n\to \infty }\left(\frac{1}{n\left(\left(\ln\left(n\right)\right)^3+\ln\left(n\right)\right)}\right) = 0$</span> So I can just conclude that <span class="math-container">$\sum_{n=2}^{\infty} \frac{1}{n\left(\left(\ln\left(n\right)\right)^3+\ln\left(n\right)\right)}$</span> converges?</p>
<p>It seems to me that most of the time I can just get away with using all that comparison by using this theorem or am I getting the wrong idea?</p>
| DonAntonio | 31,254 | <p>Use for example <a href="https://en.wikipedia.org/wiki/Cauchy_condensation_test" rel="nofollow noreferrer">Cauchy's Condensation Test</a> for <span class="math-container">$\;a_n=\frac1{n\log^2n}\;$</span> after a first comparison (why can you? Check carefully the conditions to apply this test!):</p>
<p><span class="math-container">$$2^na_{2^n}=\frac{2^n}{2^n\log^22^n}=\frac1{n^2\log^22}\le\frac1{n^2}$$</span></p>
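<p>A numerical look at the condensed series (Python sketch, using the comparison sequence <span class="math-container">$a_n=\frac1{n\log^2 n}$</span> from above):</p>

```python
import math

def a(n):
    # the comparison sequence a_n = 1 / (n * ln(n)^2)
    return 1.0 / (n * math.log(n) ** 2)

# Condensed terms 2^n * a_{2^n}; by the computation above each equals
# 1 / (n^2 * ln(2)^2), a constant multiple of the convergent 1/n^2 series.
condensed = [2 ** n * a(2 ** n) for n in range(1, 30)]
expected = [1.0 / (n ** 2 * math.log(2) ** 2) for n in range(1, 30)]
```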
|
3,325,250 | <p>Here is the proof that every Hilbert space is refexive:</p>
<p>Let <span class="math-container">$\varphi\in\mathcal{H^{**}}$</span> be arbitrary. By Riesz, there is a unique <span class="math-container">$f_\varphi\in\mathcal{H^*}$</span> with </p>
<p><span class="math-container">$\varphi(f)=\langle\,f,f_\varphi\rangle$</span> for all <span class="math-container">$f \in\mathcal{H^*} $</span>. </p>
<p>Using the same notation and theorem, we have</p>
<p><span class="math-container">$\hat{y}_{f_\varphi}(f)= f(y_{f_\varphi})=\langle\,y_{f_\varphi},y_f\rangle=\langle\,f,f_\varphi\rangle=\varphi(f)$</span></p>
<p>This implies <span class="math-container">$\hat{y}_{f_\varphi}=\varphi$</span>, thus <span class="math-container">$\mathcal{H}$</span> reflexive.</p>
<p>I understood all the steps except for the last implication. Basically, we just showed that <span class="math-container">$2$</span> functionals from bi-dual space <span class="math-container">$\mathcal{H^{**}}$</span> are the same, why would it imply that <span class="math-container">$\mathcal{H}$</span> is reflexive? Any explanation would be highly appreciated!</p>
| Ben Grossmann | 81,360 | <p>Let <span class="math-container">$\Phi:\mathcal H \to \mathcal H^{**}$</span> denote the canonical injection, AKA the evaluation map (in the notation of the proof, <span class="math-container">$\Phi(x) = \hat x$</span>). We want to prove that <span class="math-container">$\Phi$</span> is surjective. In other words: we want to prove that for any <span class="math-container">$\varphi \in \mathcal H^{**}$</span>, there exists a <span class="math-container">$y \in \mathcal H$</span> such that <span class="math-container">$\Phi(y) = \varphi$</span>.</p>
<p>So, begin with any <span class="math-container">$\varphi$</span>. By the RRT, there exists a unique <span class="math-container">$f_{\varphi}$</span> such that for all <span class="math-container">$f \in \mathcal H^*$</span>, <span class="math-container">$\varphi(f) = \langle f, f_{\varphi}\rangle$</span>.</p>
<p>Note that this requires an inner product on <span class="math-container">$\mathcal H^*$</span>. Recall how such an inner product is defined: RRT says that there exists a <span class="math-container">$y_f$</span> for every <span class="math-container">$f \in \mathcal H^*$</span> such that for <span class="math-container">$y \in \mathcal H$</span>, we have <span class="math-container">$f(y) = \langle y,y_f\rangle$</span>. With this established, we define
<span class="math-container">$$
\langle f,g \rangle := \langle y_f,y_g\rangle.
$$</span></p>
<p>We claim that <span class="math-container">$\Phi(y_{f_{\varphi}}) = \varphi$</span> (that is, <span class="math-container">$y_{f_{\varphi}}$</span> is "the <span class="math-container">$y$</span> that we're looking for"). Indeed, we note that for any <span class="math-container">$f \in \mathcal H^*$</span>, we have
<span class="math-container">$$
[\Phi(y_{f_{\varphi}})](f) = f(y_{f_{\varphi}}) = \langle y_{f_\varphi},y_f \rangle
= \langle f, f_{\varphi}\rangle = \varphi(f)
$$</span></p>
|
315,457 | <p>I am trying to evaluate $\cos(x)$ at the point $x=3$ with $7$ decimal places to be correct. There is no requirement to be the most efficient but only evaluate at this point.</p>
<p>Currently, I am thinking to first write $x=\pi+x'$ where $x'=-0.14159265358979312$, and then use the Taylor series $\cos(x)=\sum_{k=0}^{\infty}(-1)^k\frac{x^{2k}}{(2k)!}$, truncated after the $n$-th term, together with the error bound $\frac{1}{(n+1)!}$ for $\cos(x)$ when $x\in[-1,1]$, to decide the best $n$. Using Wolfram Alpha I got $n=11$. Thus I need to use the first $11$ terms of the Taylor series of $\cos(x)$. Does this seem a reasonable approach?</p>
<p>If I am using some programming languages which don't contain $\pi$ as a constant, should I just define $\pi$ first and use the above method? Is there any other approach to this?</p>
<p>If I want to evaluate $\sin(\cos(x))$ at the point $x=3$, should I use above method to evaluate $\cos(x)$ first and then $\sin(\cos(x))$? Is there any other approach to this?</p>
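<p>The plan above can be sketched in a few lines (Python assumed; <code>math.pi</code> and <code>math.cos</code> are used only to supply the constant and an independent check of the result):</p>

```python
import math

x = 3.0
xp = x - math.pi               # x = pi + x', with |x'| about 0.1416
# cos(pi + x') = -cos(x'), so sum the Maclaurin series of cos at x'.
term, total, k = 1.0, 0.0, 0
while abs(term) > 1e-10:       # terms this small cannot disturb 7 decimals
    total += term
    k += 1
    term = -term * xp * xp / ((2 * k - 1) * (2 * k))
approx = -total                # cos(3) = -cos(x')
```

<p>Because $|x'|\approx 0.14$ is small, only a handful of terms are needed, far fewer than when expanding around $0$.</p>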
| kingpin | 324,094 | <p>There is another simple criterion for the irreducibility of a matrix with nonnegative entries. Such an $n\times n$-matrix $A$ is irreducible if and only if all entries of
$$\sum\limits_{i=0}^{n}A^i$$
are greater than $0$.</p>
<p>Since I do not have a reference, I will briefly sketch a proof, using the definition that <em>$A$ is irreducible iff for all indices $i,j$ there is an exponent $e_{i,j}$, such that entry $[A^{e_{i,j}}]_{ij}$ is positive</em>
(where $[C]_{ij}$ denotes the entry at $i,j$ of a matrix $C$).</p>
<p>Let $B$ be the matrix obtained from $A$ by replacing all non zero entries by $1$.</p>
<ol>
<li><p>Show that $A$ is irreducible iff B is irreducible.</p></li>
<li><p>Show that $\sum\limits_{i=0}^{n}A^i$ has only positive entries iff this is true for $\sum\limits_{i=0}^{n}B^i$.</p></li>
<li><p>Let $G$ be the directed graph with vertices $\{1,2,\ldots,n\}$, where there is an edge from $i$ to $j$ iff $b_{ij}>0$.
Show, by induction on $m$, that the entry $[B^m]_{ij}$ equals the number of directed paths of length $m$ from $i$ to $j$.</p></li>
</ol>
<p>According to 3., for $m\in\mathbb{N}$ the number of directed paths from $i$ to $j$ of length at most $m$ is $\left[\sum\limits_{k=0}^mB^k\right]_{ij}$.
Now the claim follows form the following equivalences:
$$\begin{array}{rl}
&\text{$B$ is an irreducible matrix.}\\
\Leftrightarrow&\text{For all $i,j\in\{1,2,\ldots,n\}$, there is a directed path in $G$ from $i$ to $j$.}\\
\Leftrightarrow&\text{For all $i,j\in\{1,2,\ldots,n\}$, there is a directed path in $G$ from $i$ to $j$ of length at most $n$}\\
&\text{(note that this graph has exactly $n$ vertices).}\\
\Leftrightarrow&\text{For all $i,j\in\{1,2,\ldots,n\}$ holds $\left[\sum\limits_{k=0}^{n}B^k\right]_{ij}>0$.}
\end{array}$$</p>
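<p>This criterion is easy to put into code (a NumPy sketch; the two test patterns at the bottom are my own examples, not from the answer):</p>

```python
import numpy as np

def is_irreducible(A):
    # Positivity of I + B + B^2 + ... + B^n for the 0/1 pattern matrix B;
    # step 1 above says passing to B does not change irreducibility.
    n = A.shape[0]
    B = (np.asarray(A) > 0).astype(float)
    total = np.eye(n)
    power = np.eye(n)
    for _ in range(n):
        power = ((power @ B) > 0).astype(float)  # keep only the 0/1 pattern
        total += power
    return bool((total > 0).all())

cycle = np.array([[0.0, 1.0], [1.0, 0.0]])       # 1 -> 2 -> 1: irreducible
triangular = np.array([[1.0, 1.0], [0.0, 1.0]])  # no directed path 2 -> 1
```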
|
300,105 | <p>I want to find the proof of the spectrum of the hypercube</p>
| Chris Godsil | 16,143 | <p>There is a proof here: <a href="http://www.cs.yale.edu/homes/spielman/eigs/lect12.ps" rel="nofollow">http://www.cs.yale.edu/homes/spielman/eigs/lect12.ps</a></p>
<p>Or you can look up eigenvalues of Cartesian products and then follow Marion's hint.</p>
|
300,105 | <p>I want to find the proof of the spectrum of the hypercube</p>
| achille hui | 59,379 | <p>This is not an independent answer but filling in what Mariano didn't bother to prove.</p>
<p>Start with Mariano's hint:
$$A_{n}=\begin{pmatrix}A_{n-1}&I_{2^{n-1}}\\I_{2^{n-1}}&A_{n-1}\end{pmatrix}$$
Let $\chi_n(\lambda) = \det(\lambda I_{2^{n}} - A_{n})$ be the characteristic polynomial of the $n$-dim hypercube. </p>
<p>Notice for any $2m \times 2m$ matrix $X(A)$ of the form $\begin{pmatrix}A & I_m\\I_m & A\end{pmatrix}$. When $A$ is invertible, we have:</p>
<p>$$\begin{pmatrix}A & I_m\\I_m & A\end{pmatrix}\begin{pmatrix}I_m&-A^{-1}\\0&I_m\end{pmatrix} = \begin{pmatrix}A&0\\I_m&A-A^{-1}\end{pmatrix}$$
This implies
$$\det X(A) = \det(A)\det(A - A^{-1}) = \det(A^2 - I_{m}) = \det(A - I_{m})\det(A + I_{m})$$
Since both sides of this identity are polynomials in the entries of $A$, the identity is true even when $A$ is not invertible. </p>
<p>Applying this to $\lambda I_{2^n} - A_{n}$ (its off-diagonal blocks are $-I_{2^{n-1}}$, which leads to the same factorization), we immediately obtain:</p>
<p>$$\chi_n(\lambda) = \chi_{n-1}(\lambda+1)\chi_{n-1}(\lambda-1)$$
So if the roots for the $(n-1)$-dim hypercube are $n-1,\ n-3,\ \ldots,\ -(n-1)$
with multiplicities $$\binom{n-1}{0}, \binom{n-1}{1}, \binom{n-1}{2}, \ldots$$
then the roots for the $n$-dim hypercube are $n = (n-1)+1,\ n-2 = (n-3)+1 = (n-1)-1,\ \ldots,\ -n$ with multiplicities
$$1 = \binom{n}{0}, \binom{n}{1} = \binom{n-1}{0} + \binom{n-1}{1},
\binom{n}{2} = \binom{n-1}{1} + \binom{n-1}{2}, \ldots$$</p>
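<p>The resulting spectrum is easy to check numerically (a NumPy sketch; $n=4$ is an arbitrary small case):</p>

```python
import numpy as np
from math import comb

def hypercube_adjacency(n):
    # Build A_n recursively from the block form used above.
    A = np.zeros((1, 1))
    for _ in range(n):
        I = np.eye(A.shape[0])
        A = np.block([[A, I], [I, A]])
    return A

n = 4
eigenvalues = np.linalg.eigvalsh(hypercube_adjacency(n))
# Predicted spectrum: eigenvalue n - 2k with multiplicity C(n, k).
predicted = sorted(n - 2 * k for k in range(n + 1) for _ in range(comb(n, k)))
```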
|
291,684 | <p>Linear ODE systems $x'=Ax$ are well understood. Suppose I have a quadratic ODE system where each component satisfies $x_i'=x^T A_i x$ for given matrix $A_i$. What resources, textbooks or papers, are there that study these systems thoroughly? My guess is that they aren't completely understood, but it would be good to know more about what has been done.</p>
| MzF | 389,000 | <p>For quadratic systems there is the famous paper by Larry Markus
"Quadratic Differential equations and non-associative algebras" in Contributions to the Theory of Nonlinear Oscillations Vol V (1960) pages 185 - 213.</p>
<p>The PhD thesis: "Quadratic differential Equations a Study in Nonlinear Systems Theory" by M. Frayman Univ. of Maryland, 1974.</p>
<p>Also, "Extensions of Linear Quadratic Control, Optimization and Matrix Theory" by David H. Jacobson 1977.</p>
<p>Also, "Bilinear Control Systems" by David Elliot, Springer, 2009.</p>
<p>Quadratic systems have very interesting properties and include some chaotic systems; Lorenz's butterfly effect is a quadratic system. Also, these systems can be "super unstable" or explosive in the sense that a solution can go to infinity in finite time; whereas in a linear system a solution that goes to infinity takes infinite time to get there. </p>
<p>All sorts of other good stuff. </p>
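<p>The "explosive" behaviour is easiest to see in the simplest quadratic system $x'=x^2$, $x(0)=1$, whose exact solution $x(t)=1/(1-t)$ escapes to infinity at the finite time $t=1$. A crude forward-Euler sketch (Python; the step size and the cap are arbitrary choices):</p>

```python
def blow_up_time(dt=1e-4, cap=1e8):
    # Integrate x' = x^2 from x(0) = 1 until x exceeds the cap; the time at
    # which that happens approximates the exact blow-up time t = 1.
    t, x = 0.0, 1.0
    while x < cap:
        x += dt * x * x
        t += dt
    return t
```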
|
1,453,010 | <p>A certain biased coin is flipped until it shows heads for the first time. If the probability of getting heads on a given flip is $5/11$ and $X$ is a random variable corresponding to the number of flips it will take to get heads for the first time, the expected value of $X$ is:
$$E[X] = \sum_{x=1}^\infty x\,\frac{5}{11}\left(\frac{6}{11}\right)^{x-1}$$
I'm not sure how to find an exact value for $E[X]$. I tried thinking about it in terms of a summation of an infinite geometric series but I don't see how that formula can be applied. </p>
| Carlos H. Mendoza-Cardenas | 274,058 | <p>$X$ is a geometric random variable with parameter $p$. A way to compute its expected value is through the total expectation theorem:</p>
<p>\begin{align}
E[X] &= E[X\mid X=1]P(X=1) + E[X\mid X>1]P(X>1)\\
\end{align}</p>
<p>When you already know that $X=1$, its expected value is 1, therefore $E[X \mid X=1] = 1$.</p>
<p>When you know that $X>1$, i.e., $X = 2, 3, \ldots$, then you can imagine that you have a "new" random variable, $X-1$, with values $X-1 = 1,2,\dots$, and that is also geometric! This remarkable property of the geometric distribution is formally called the memorylessness property, and implies that $E[X-1 \mid X>1] = E[X]$. Finally,</p>
<p>\begin{align}
E[X] &= E[X\mid X=1]P(X=1) + E[X\mid X>1]P(X>1)\\
&= P(X=1) + E[(X-1)+1 \mid X>1]P(X>1)\\
&= p + (E[X-1 \mid X>1]+E[1 \mid X>1])(1-p)\\
&= p + (E[X]+1)(1-p)\\
\end{align}</p>
<p>This shows that $E[X] = \frac{1}{p}$. In your case $p=\frac{5}{11}$.</p>
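<p>A simulation cross-check of $E[X]=\frac1p=\frac{11}{5}=2.2$ (Python sketch; the seed and the trial count are arbitrary):</p>

```python
import random

random.seed(1)
p = 5 / 11

def flips_until_first_head():
    # Flip the biased coin until it shows heads; return the flip count.
    n = 1
    while random.random() >= p:
        n += 1
    return n

trials = 200_000
sample_mean = sum(flips_until_first_head() for _ in range(trials)) / trials
```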
<p>Here a great explanation of this approach (minute 24:54) from the MIT's professor John Tsitsiklis: <a href="https://youtu.be/-qCEoqpwjf4?list=PLUl4u3cNGP60A3XMwZ5sep719_nh95qOe" rel="nofollow">https://youtu.be/-qCEoqpwjf4?list=PLUl4u3cNGP60A3XMwZ5sep719_nh95qOe</a></p>
<p>Best.</p>
|
184,601 | <p>A user on the chat asked how could he make something that would cap when it gets a specific value like 20. Then the behavior would be as follows:</p>
<p>$f(...)=...$</p>
<p>$f(18)=18$</p>
<p>$f(19)=19$</p>
<p>$f(20)=20$</p>
<p>$f(21)=20$</p>
<p>$f(22)=20$</p>
<p>$f(...)=20$</p>
<p>He said he would like to perform it with a regular calculator. Is it possible to do this?</p>
| Community | -1 | <p>We can also get a bit (unnecessarily) fancier:
$$ f(x) = x + (20 - x) \int\limits_{-\infty}^{x-20} \delta(t)\ dt $$
where
$$ \int\limits_{-\infty}^{x-20} \delta(t)\ dt = \begin{cases} 0 & x < 20 \\ 1 & x \ge 20 \end{cases} $$
(See <a href="http://en.wikipedia.org/wiki/Unit_step_function" rel="nofollow">Heaviside step function</a>.)</p>
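<p>On a plain calculator the same cap can be written with the absolute-value key instead of the step function, via the identity $\min(x,20)=\frac{x+20-|x-20|}{2}$. A sketch:</p>

```python
def cap(x, m=20):
    # min(x, m) using only +, -, |.| and division by 2:
    # (x + m - |x - m|) / 2
    return (x + m - abs(x - m)) / 2
```

<p>For $x\le 20$ the absolute value cancels the $20$, returning $x$; for $x\ge 20$ it cancels the $x$, returning $20$.</p>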
|
4,572,517 | <p>Given two 3x3 matrix:</p>
<p><span class="math-container">$$
V=
\begin{bmatrix}
1 & 0 & 9 \cr
6 & 4 & -18 \cr
-3 & 0 & 13 \cr
\end{bmatrix}\quad
W=
\begin{bmatrix}
13 & 9 & 3 \cr
-14 & -8 & 2 \cr
5 & 3 & -1 \cr
\end{bmatrix}
$$</span></p>
<p>Is there any way to predict that <span class="math-container">$ V * W = W * V $</span>
without actually calculating both multiplications</p>
| fusheng | 880,505 | <p>I think <span class="math-container">$\dim V<\infty$</span> guarantees the existence of the polynomial <span class="math-container">$p$</span>.</p>
<p>Consider <span class="math-container">$V=\mathbb{R}^{N}$</span>. For all <span class="math-container">$x=(x_n)\in \mathbb{R}^N$</span>, we define <span class="math-container">$A(x)=(k_nx_n)$</span>, with the <span class="math-container">$k_n$</span> pairwise distinct.</p>
<p>Assume <span class="math-container">$0\ne f$</span> is a polynomial with <span class="math-container">$f(A)=0$</span>; then <span class="math-container">$f(A)(x)=(f(k_n)x_n)$</span>.</p>
<p>Let <span class="math-container">$v=(1,1,\cdots,1,\cdots)$</span>; then <span class="math-container">$f(A)v=(f(k_1),f(k_2),\cdots,f(k_n),\cdots)=0$</span>.</p>
<p>So <span class="math-container">$f$</span> has infinitely many roots.</p>
<p>A contradiction.</p>
|
2,893,568 | <p>I need some help finding the standard deviation using Chebyshev's theorem. Here's the problem:</p>
<blockquote>
<p>You have concluded that at least $77.66\%$ of the $3,075$ runners took between $60.5$ and $87.5$ minutes to complete the $10$ km race. What was the standard deviation of these $3,075$ runners?</p>
</blockquote>
<p>I set up the formula as follows:</p>
<p>$$.7766 = 1 - \frac{1}{k^2}$$</p>
<p>I got $k = 2.115721092$, which makes some sense because I know that a standard deviation of $2$ yields $75\%$, so I expected a slightly higher percentage $(77.66)$ to yield a slightly higher standard deviation.</p>
<p>Thanks for any hints.</p>
| Henry | 6,460 | <p>There is no upper or lower bound on the standard deviation (apart from $0$) given that information. Perhaps</p>
<ul>
<li>everybody finished in exactly $61$ minutes to give a standard deviation of $0$</li>
<li>$2389$ people finished in $61$ minutes and $686$ finished in $100000$ minutes (almost $10$ weeks) to give a standard deviation of over $41600$</li>
</ul>
<p>But if you knew that <em>no more than</em> $77.66\%$ finished the race in that time and you knew that the average time was $74$ minutes (halfway between $60.5$ and $87.5$ then you could state a lower bound for the standard deviation of about $\frac{87.5-74}{2.115721092} \approx 6.38$. To give an example close to this lower bound, consider</p>
<ul>
<li>$2387$ people finished in $74$ minutes, $344$ finished in $60.4$ minutes and $344$ finished in $87.6$ minutes to give a standard deviation of just over $6.43$ </li>
</ul>
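<p>For the computation the exercise apparently intends (keeping in mind the caveat above that Chebyshev's inequality only bounds the standard deviation rather than determining it), a sketch:</p>

```python
import math

p = 0.7766
k = 1 / math.sqrt(1 - p)        # from 1 - 1/k^2 = p; about 2.1157
half_width = (87.5 - 60.5) / 2  # 13.5 minutes around the midpoint 74
sigma = half_width / k          # the lower bound of about 6.38 quoted above
```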
|
2,090,790 | <p>We are given $g(x)=\frac{x \sin x}{x+1}$, and as I said we need to show it has no maxima in $(0,\infty)$.</p>
<p><strong>My attempt</strong>: assume there is some $x_0>0$ that yields a maxima. then for all $x$</p>
<p>$$-1+\frac{1}{x+1}\leq \frac{x \sin x}{x+1}\leq \frac{x_0 \sin x_0}{x_0+1}\leq 1-\frac{1}{x_0+1}$$
and we can find some $x$ for which this isn't satisfied (like $\frac{1}{2-\frac{1}{x_0+1}}$?). This feels very unnecessary (plus I assume things about $x_0$), but I don't know what good way there is...</p>
<p>Note that I saw the thread here showing $\sup_{x>0} g(x)=1$ but one solution is not on my level, and the other one doesn't actually show its the $\sup$, rather than some sort of "partial limit". Even so, I still don't know how to show that there is no $x$ such that $g(x)=1$, and I don't think it's needed here.</p>
<p>Any help is appreciated in advance!</p>
| A. Salguero-Alarcón | 405,514 | <p>Notice that, for $x_n=\frac{(4n+1)\pi}{2}$, $n\in\mathbb N$, $\sin(x_n)=1$, so $f(x_n)=\frac{x_n}{x_n+1}$.</p>
<p>We also have $f(x)<1$ for all $x\in(0,+\infty)$.
The sequence $f(x_n)$ is strictly increasing and gets closer and closer to $1$, so that means $f$ cannot have a maximum.</p>
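<p>A numerical illustration of the argument (Python sketch):</p>

```python
import math

def f(x):
    return x * math.sin(x) / (x + 1)

# Along x_n = (4n+1)*pi/2 we have sin(x_n) = 1, so f(x_n) = x_n/(x_n+1),
# which increases strictly towards 1 while f never attains the value 1.
values = [f((4 * n + 1) * math.pi / 2) for n in range(1, 8)]
```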
|
34,724 | <h3>Overview</h3>
<p>For integers n ≥ 1, let T(n) = {0,1,...,n}<sup>n</sup> and B(n)= {0,1}<sup>n</sup>. Note that |T(n)|=(n+1)<sup>n</sup> and |B(n)| = 2<sup>n</sup>.
A certain set S(n) ⊂ T(n), defined below, contains B(n). The question is about the growth rate of |S(n)|. Does it grow exponentially, like |B(n)|, so that |S(n)| ~ c<sup>n</sup> for some c, or does it grow superexponentially, so that c<sup>n</sup>/|S(n)| approaches 0 for all c> 0?</p>
<h3>Definition</h3>
<p>The set S(n) is defined as follows: an n-tuple t = (t<sub>1</sub>,t<sub>2</sub>,...,t<sub>n</sub>) ∈ T(n) is in S(n) if and only if t<sub>i+j</sub> ≤ j whenever 1 ≤ j< t<sub> i</sub>. For example, if t ∈ T(10) with t<sub> 4</sub>=5, t<sub> 5</sub> can be at most 1, t<sub> 6</sub> can be at most 2, , t<sub> 7</sub> can be at most 3, and t<sub> 8</sub> at most 4, but there is no restriction (at least not due to the value of t<sub> 4</sub>) on t<sub> 9</sub> or t<sub> 10</sub>; t<sub> 9</sub> and t<sub> 10</sub> can have any values in {1,...,10}.</p>
<h3>Alternate formulation (counting triangles)</h3>
<p>The elements of S(n) can be put into one-to-one correspondence with certain configurations of n right isosceles triangles, so that |S(n)| counts the number of such configurations. </p>
<p>For integers k>0 (size) and v≥0 (vertical position), let Δ <sub>k,v</sub> be the triangle with vertices (0,v), (k,k+v), and (k,v). (Δ<sub>0,v</sub> is the degenerate triangle with all three vertices at (0,v).)</p>
<p>Now associate with an n-tuple t = (t<sub>1</sub>,t<sub>2</sub>,...,t<sub>n</sub>) ∈ T(n) the set D<sub>t</sub> = $\lbrace\Delta_{t_k,k}:1\le k \le n\rbrace$. (That's "\lbrace\Delta_{t_k,k}:1\le k \le n\rbrace," if you can't read it.) The set D<sub> t</sub> contains n isosceles right triangles that extend to the right of the y-axis, one triangle at each of the points (0,k) for 1 ≤ k ≤ n.</p>
<p>The tuple t is in S(n) if and only if the triangles in D<sub> t</sub> have disjoint interiors. (This isn't hard to show, and if it is, I've probably made a mistake in my definitions, so let me know.) Thus |S(n)| counts the number of ways one can arrange n isosceles right triangles of various sizes (between size zero and size n) at n consecutive integer points on the y-axis so the triangle can extend to the right and up without overlapping. Triangles of the same size are indistiguishable for the purpose of counting the number of arrangements. (It may help to think of right isosceles pennants attached at an acute-angle corner to a flagpole in a stiff wind.)</p>
<h3>Question</h3>
<p>Does |S(n)| grow exponentially with n, or faster?</p>
<h3>Calculations</h3>
<p>If I’ve counted correctly, the first few terms of the sequence {|S(n)|} beginning with n=1 are 2, 8, 38, 184, 904, and 4384. This sequence (and some sequences resulting from minor variations of the problem) fails to match anything in the Online Encyclopedia of Integer Sequence.</p>
<p>Links to similar counting problems mentioned or solved in the literature would help. </p>
<p>Thanks!</p>
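<p>For small n the counts quoted above can be reproduced by brute force straight from the definition (a Python sketch; it is exponential in n, so only useful for tiny cases):</p>

```python
from itertools import product

def count_S(n):
    # t in {0,...,n}^n lies in S(n) iff t_{i+j} <= j whenever 1 <= j < t_i
    # (1-based indices in the definition; 0-based below).
    total = 0
    for t in product(range(n + 1), repeat=n):
        if all(t[i + j] <= j
               for i in range(n)
               for j in range(1, t[i])
               if i + j < n):
            total += 1
    return total
```

<p>This reproduces the first few terms 2, 8, 38, 184 of the sequence.</p>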
| Roland Bacher | 4,556 | <p>This is no answer but a description of an efficient way for computing the cardinality
of $S(n)$. I have to post it as an answer since it is too long for a comment.</p>
<p>Consider the subsets $U_a(n),C_a(n),L_a(n)$ of $S(n)$ defined as follows:
all coefficients of $U_a(n),C_a(n),L_a(n)$ are $\leq a$ and satisfy
the additional inequalities $t_i\leq i$ for $U,C$, respectively
$t_{n-i}\leq i+1$ for $C,L$.</p>
<p>The set $S(n)$ is then in bijection with the union of the trivial element $(0,0,\dots,0)$
with $\cup_{\mu=1}^n\cup_{k=1}^n L_\mu(k-1)\times U_{\mu-1}(n-k)$. Indeed, a non-trivial element $(t_1,\dots,t_n)\in S(n)$
with last index $k$ on which its coefficient takes the maximal
value $\mu=\max_i t_i$ gives rise to an element $(t_1,\dots,t_{k-1})$ of $L_\mu(k-1)$ and to an element $(t_{k+1},t_{k+2},\dots,t_n)$ of $U_{\mu-1}(n-k)$. These elements, together with the omitted $k-$th coordinate $t_k=\mu$, determine the initial vector $(t_1,\dots,t_n)$
uniquely.</p>
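<p>The counting identity behind this decomposition can be checked by brute force for small $n$ (a sketch, not from the original answer; it encodes the disjoint-interior condition of the question as the assumed reduction $\min(t_i,t_j)\le j-i$ for $i&lt;j$):</p>

```python
from itertools import product

def S(n):
    # Tuples t in {0,...,n}^n whose triangles have pairwise disjoint
    # interiors, via the (assumed) reduction min(t_i, t_j) <= j - i.
    return [t for t in product(range(n + 1), repeat=n)
            if all(min(t[i], t[j]) <= j - i
                   for i in range(n) for j in range(i + 1, n))]

def U(a, m):
    # coefficients <= a and t_i <= i (1-indexed)
    return [t for t in S(m) if all(t[i] <= min(a, i + 1) for i in range(m))]

def L(a, m):
    # coefficients <= a and t_{m-i} <= i+1, i.e. t_i <= m - i + 1 (1-indexed)
    return [t for t in S(m) if all(t[i] <= min(a, m - i) for i in range(m))]

for n in range(1, 6):
    lhs = len(S(n))
    rhs = 1 + sum(len(L(mu, k - 1)) * len(U(mu - 1, n - k))
                  for mu in range(1, n + 1) for k in range(1, n + 1))
    print(n, lhs, rhs)  # the two counts agree for each n
```

<p>For $n=1,\dots,5$ both sides give $2, 8, 38, 184, 904$, consistent with the bijection.</p>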
<p>One gets similar recursive decompositions of $U,C,L$ (standing for upper, central, lower)
giving rise to recurrence relations among the cardinalities of
$U,C,L$ which allow one to compute their cardinalities in quadratic time.</p>
<p>By the way, here is an idea which can perhaps be exploited:
Elements of $S(n)$ with no coordinate equal to $0$ are in bijection with
a subset (which can be explicitly described)
of $\cup_{k=\lfloor n/2\rfloor}^{n-1}S(k)$ as follows:
replace every coefficient $t_i$ of such an element $(t_1,\dots,t_n)$
by $t_i-1$ except if $t_i=1$ and $t_{i-1}>1$ in which case you remove it.</p>
|
3,277,555 | <p>For a math class I was given the assignment to make a game of chance. For my game, the person must roll 4 dice and get a 6, a 5, and a 4 in a row in 3 rolls or less to qualify. The remaining dice must be over 3 for you to win. My question, though, is how can I find out the probability of rolling the 6, 5, and 4 in a single roll? </p>
<p>My thought was <span class="math-container">$\frac{4}{24} + \frac{3}{15} + \frac{2}{8} = 0.61$</span></p>
<p>Please tell me if this is correct or if I need to do it in another method. </p>
<p>Thank you!</p>
| N. F. Taussig | 173,070 | <p>Let's say the dice are blue, green, red, and yellow. Then an outcome may be specified by <span class="math-container">$(b, g, r, y)$</span>. There are six possible outcomes for each die, so our sample space has <span class="math-container">$6^4$</span> possible outcomes.</p>
<p>For the favorable cases, there are two possibilities:</p>
<ol>
<li>Four distinct numbers are obtained, including 4, 5, and 6.</li>
<li>Only 4, 5, and 6 are obtained, with one of those numbers appearing twice.</li>
</ol>
<p><strong>Case 1:</strong> Four distinct numbers are obtained, including 4, 5, and 6.</p>
<p>There are four dice on which a number other than 4, 5, or 6 could appear and three possible outcomes that die could show. There are <span class="math-container">$3!$</span> ways for the remaining three dice to display a 4, 5, or 6. Hence, there are <span class="math-container">$$\binom{4}{1}\binom{3}{1}3!$$</span> outcomes in this case.</p>
<p><strong>Case 2:</strong> Only 4, 5, and 6 are obtained, with one of those numbers appearing twice.</p>
<p>There are three numbers that could be the one to appear twice and <span class="math-container">$\binom{4}{2}$</span> ways for two of the four dice to display that number. There are <span class="math-container">$2!$</span> ways for the remaining two dice to display the two numbers which each appear once. Hence, there are <span class="math-container">$$\binom{3}{1}\binom{4}{2}2!$$</span> outcomes in this case.</p>
<p><strong>Total:</strong> Since the two cases above are exhaustive and mutually exclusive, the number of favorable outcomes is
<span class="math-container">$$\binom{4}{1}\binom{3}{1}3! + \binom{3}{1}\binom{4}{2}2!$$</span></p>
<p><strong>Probability:</strong> The probability that the numbers 4, 5, and 6 will appear if four fair dice are rolled is
<span class="math-container">$$\frac{\dbinom{4}{1}\dbinom{3}{1}3! + \dbinom{3}{1}\dbinom{4}{2}2!}{6^4} = \frac{1}{12}$$</span>
as drhab found using the Inclusion-Exclusion Principle.</p>
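<p>A quick exhaustive check (a sketch, not part of the original answer) confirms the count of <span class="math-container">$108$</span> favorable outcomes out of <span class="math-container">$6^4 = 1296$</span>:</p>

```python
from itertools import product
from fractions import Fraction

# Count the 4-dice outcomes that contain at least one each of 4, 5, and 6.
favorable = sum(1 for roll in product(range(1, 7), repeat=4)
                if {4, 5, 6} <= set(roll))
print(favorable, Fraction(favorable, 6**4))  # 108 1/12
```
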
|
4,005,522 | <p>Let <span class="math-container">$U$</span> be a connected open set of <span class="math-container">$\mathbb{R}^{n}$</span>. Consider <span class="math-container">$C^{\infty}(U)$</span> with the sup norm; we say <span class="math-container">$A\subset C^{\infty}(U)$</span> is a minimal dense subalgebra of <span class="math-container">$C^{\infty}(U)$</span> if and only if any proper subalgebra contained in <span class="math-container">$A$</span> is not dense in <span class="math-container">$C^{\infty}(U)$</span>. I want to ask: are the polynomial algebras of <span class="math-container">$U$</span>, with coefficients in either <span class="math-container">$\mathbb{Q}$</span> or <span class="math-container">$\mathbb{R}\setminus\mathbb{Q}$</span>, the only two minimal dense subalgebras of <span class="math-container">$C^{\infty}(U)$</span>?</p>
| Henno Brandsma | 4,280 | <p>Let <span class="math-container">$Y=\beta \Bbb N \setminus \{p\}$</span> for <span class="math-container">$p \in \Bbb N^\ast$</span>.
Then theorem 6.4/6.7 from Gillman and Jerison tells us that <span class="math-container">$\beta Y=\beta \Bbb N$</span> and another standard theorem tells us that the one-point compactification of <span class="math-container">$Y$</span> also equals <span class="math-container">$\beta \Bbb N$</span>. So <span class="math-container">$Y$</span> is almost compact.</p>
<p>8L in that book also shows that <span class="math-container">$\Omega$</span> (the square of <span class="math-container">$\omega_1 +1$</span> with <span class="math-container">$(\omega_1, \omega_1)$</span> removed), is another example of an almost compact space, as is the Tychonoff plank, which is closely related. They are introduced in exercise 10R as spaces with a unique uniformity and the non-compact ones are characterised as those Tychonoff <span class="math-container">$X$</span> that have <span class="math-container">$|\beta X\setminus X| =1$</span>.</p>
|
979,144 | <p>I am searching for a formula of sum of binomial coefficients $^{n}C_{k}$ where $k$ is fixed but $n$ varies in a given range? Does any such formula exist?</p>
| Hypergeometricx | 168,053 | <p>Is this what you're looking for?
$$\sum_{n=k}^m {n\choose k}={m+1\choose {k+1}}$$</p>
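<p>This is the hockey-stick identity; a quick numerical spot check (a sketch, with arbitrarily chosen values of $k$ and $m$):</p>

```python
from math import comb

k, m = 3, 10
lhs = sum(comb(n, k) for n in range(k, m + 1))  # sum of C(n, k) for n = k..m
print(lhs, comb(m + 1, k + 1))  # 330 330
```
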
|
2,771,059 | <p>This question is similar to a question I posted earlier.<br/>
<span class="math-container">$$z=\cos\frac{\pi}{3}+j\sin\frac{\pi}{3}$$</span>
<br/>
This time I have to do the sum <span class="math-container">$z^4+z$</span><br/>
<br/>
I have used the approach I was shown in my previous question. Here is what I've done:
<span class="math-container">$$\left(\cos\frac{\pi}{3}+j\sin\frac{\pi}{3}\right)^4+\left(\cos\frac{\pi}{3}+j\sin\frac{\pi}{3}\right)$$</span>
<span class="math-container">$$\cos\frac{4 \pi}{3}+j\sin\frac{4 \pi}{3}+\cos\frac{\pi}{3}+j\sin\frac{\pi}{3}$$</span>
collecting like terms...
<span class="math-container">$$\cos\frac{5\pi}{3}+2j\sin\frac{5\pi}{3}$$</span>
I verified this with wolframalpha but the answer it gave was zero. Is this approach I'm using appropriate for this problem?</p>
| José Carlos Santos | 446,262 | <p>The answer is $0$ because$$\cos\left(\frac{4\pi}3\right)=\cos\left(\pi+\frac\pi3\right)=-\cos\left(\frac\pi3\right)\text{ and }\sin\left(\frac{4\pi}3\right)=\sin\left(\pi+\frac\pi3\right)=-\sin\left(\frac\pi3\right).$$</p>
|
1,524,615 | <p>I'm trying to solve some task and I'm stuck. I suppose that I will be able to solve my problem if I find an elementary way to calculate $\lim_{x \to \infty}\sqrt[x-1]{\frac{x^x}{x!}}$ for $x \in \mathbb{N}_+$.<br>
My effort: I have proved that $x! \geq (\frac{x+1}{e})^x$, so (because $x^x>x!$):</p>
<p>$$
\left(\frac{x^x}{x!}\right)^{\frac 1 x} \leq \left(\frac{x^x}{(x+1)^x}\right)^{\frac 1 x} \cdot e \xrightarrow{x \to \infty} e
$$</p>
<p>But how can I end that proof?<br>
I will be grateful for all the advice.</p>
| mahbubweb | 289,201 | <p>Say $y=\sqrt[x-1]{\frac{x^x}{x!}}$ </p>
<p>Then, $\ln y=\frac{\ln(\frac{x^x}{x!})}{x-1}=-\frac{x}{x-1} \cdot \frac{\ln(\frac{x!}{x^x})}{x} =-\frac{x}{x-1} \cdot \sum \limits_{i=1}^{x} \ln{\frac{i}{x}}\cdot\frac{1}{x} \xrightarrow{x \to \infty} (-1) \cdot \int \limits_{0}^{1}\ln x ~dx=(-1) \cdot(-1)=1$</p>
<p>But, $\lim \limits_{x \to \infty} \ln y = \ln \left(\lim \limits_{x \to \infty}y\right)$.</p>
<p>So, $ \lim \limits_{x \to \infty}\sqrt[x-1]{\frac{x^x}{x!}} = e$</p>
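<p>A numerical check of this limit (a sketch, not part of the answer; `lgamma(x+1)` is used for $\ln x!$ to avoid huge integers):</p>

```python
from math import lgamma, log, exp, e

def f(x):
    # (x^x / x!)^(1/(x-1)) computed through logarithms
    return exp((x * log(x) - lgamma(x + 1)) / (x - 1))

for x in (10, 100, 10_000, 1_000_000):
    print(x, f(x))  # values approach e = 2.718281828...
```
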
|
1,987,480 | <p>This question has been bugging me for a while now and I want to know where I'm going wrong. </p>
<blockquote>
<p>There are $20$ tickets in a raffle with one prize. What should each ticket cost if the prize is \$80 and the expected gain to the organizer is \$30?</p>
</blockquote>
<p>Now I can get the right answer by adding \$80 and \$30 then dividing by the 20 tickets to get \$5.50 per ticket, but when I use the expected value equation such as $\frac{1}{20}(p-80) + \frac{19}{20}p = 30$ to find the price of a ticket I get a much larger value which is indeed incorrect. What am I doing wrong in my equation?</p>
| Graham Kemp | 135,106 | <p>If $p$ is the price per ticket, then $\frac 1{20} (p−\$80)+\frac{19}{20} p$ is the expected return for selling <em>one</em> ticket.</p>
<p>You want the expected return for selling <em>twenty</em> tickets to equal $\$30$. Fortunately the Linearity of Expectation means this is:</p>
<p>$$20\times(\frac 1{20} (p−\$80)+\frac{19}{20} p)=\$30 $$</p>
<p>This yields $p=\$5.50$</p>
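<p>A one-line check of the arithmetic (a sketch, not part of the answer), done with exact fractions:</p>

```python
from fractions import Fraction

p = Fraction(30 + 80, 20)  # solve 20*p - 80 = 30  ->  p = 11/2 = 5.50
expected_total = 20 * (Fraction(1, 20) * (p - 80) + Fraction(19, 20) * p)
print(p, expected_total)   # 11/2 30
```
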
|
16,584 | <p>In the definition of a vertex algebra, we call the vertex operator the state-field correspondence; does that mean that it is an injective map?
Are there physical interpretations of the state-field correspondence? Or, from a physical viewpoint, why do we need a state-field correspondence?
Does it have some relation to highest weight representations?</p>
| David Ben-Zvi | 582 | <p>I want to elaborate a little on Pavel's excellent answer.</p>
<p>We can think (very schematically) of local operators in an n-dimensional
field theory the following way. We have an n-1 manifold M with some additional
structures (topological, conformal, metric etc), to which our field theory assigns a vector space Z(M) of states. Given x in M and a time t in the interval,
we can ask for local operators on Z(M) at the point x and time t. This
can be visualized (following field theory axiomatics) as follows: we cross
M with an interval, and cut out a tiny ball around the point (x,t) in
this cylinder. We obtain a cobordism (with additional structure) between
M times $S^{n-1}$ and M. We can then use the field theory axioms to turn states $Z(S^{n-1})$ into operators on $Z(M)$. Physically we think of inserting measurements on fields
on spacetime M times interval that only ask about the value of fields in a small (punctured) neighborhood of (x,t).</p>
<p>In general this is a very complicated structure. But if we're in a topological field theory, then this picture is independent of lots of things - such as most importantly the size and shape of the ball we removed (as well as its position). In a 2d CFT at least we know this structure is independent of size and shape of the disc we've cut out, and depends holomorphically on z=(x,t) a point in a Riemann surface. </p>
<p>For simplicity though let's stick to TFT, since this picture works equally well in any dimension. If we apply this idea to the case $M=S^{n-1}$ itself, we find that $Z(S^{n-1})$ has an algebra structure --- in fact an algebra structure parametrized by cutting a ball out of a cylinder (if you look at this carefully you find the topologist's notion of $E_n$ algebra -- for $n=1$ it's simply associative, for $n=2$ it's "braided" (commutative in a coarse sense) and it gets more and more commutative as n increases).
Moreover Z(M) for ANY M is now a module over this algebra.</p>
<p>This is how I think of state-field correspondence: states on the n-1 sphere are equivalent to local operators in the field theory acting on any space (these are the fields). In chiral CFT we find the notion of vertex algebra immediately from this -- it's a conformal refinement of the abstract notion of $E_2$ (or "braided") algebra we derived above from TFT, where now things depend holomorphically rather than locally constantly on parameters..</p>
<p>EDIT: One more piece of data here is the unit - there's a canonical state on the (n-1) sphere, given by considering it as the boundary of the ball we cut out (ie doing the path integral on the ball with boundary conditions on the sphere..) This is the vacuum state. It's easy to see it corresponds to the identity operator on $Z(M)$ for any $M$, and is the unit for the algebra structure on $Z(S^{n-1})$. We now recover the injectivity of the state-field correspondence: we consider the pair of pants (punctured cylinder) picture above for $M=S^{n-1}$ itself, and apply a given operator $v\in Z(S^{n-1})$ to the vacuum incoming state, obtaining an outgoing state which is again v. Saying this more carefully in the 2d CFT case recovers the vacuum axiom of a vertex algebra, which Pavel explains gives injectivity of the state-field correspondence. </p>
|
1,697,206 | <p>In the figure, $BG=10$, $AG=13$, $DC=12$, and $m\angle DBC=39^\circ$.</p>
<p>Given that $AB=BC$, find $AD$ and $m\angle ABC$.</p>
<p>Here is the figure:</p>
<p><a href="https://i.stack.imgur.com/u05wa.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u05wa.jpg" alt="enter image description here"></a></p>
<p>I am inclined to say that since $\overline{AB}\simeq \overline{BC}$, both triangles share side $\overline{BD}$, and they also have a $90^\circ$ angle in common, then $AD=DC$ and $m\angle ABD=m\angle DBC=39^\circ$. However, I am not making the connection of exactly why my conclusion is true. </p>
<p>How can I show this is true without using trigonometry?</p>
<p>Thank you!</p>
| Justin Benfield | 297,916 | <p>You can use the law of sines with the given angle and length of $DC$ to find the length $BD$, then since you're given $BG$, you can find $GD$, and via pythagorean theorem, find $AD$. (Is it exactly 12 as you predicted?)</p>
|
152,336 | <p>Let $G$ be an algebraic group. Choose a Borel subgroup $B$ and
a maximal torus $T \subset B$. Let $\Lambda$ be the set of weights wrt $T$ and let $\mathfrak{g}$ be the Lie algebra of $G$.
Now, consider the following two sets,</p>
<p>1) $\Lambda^+$, the set of dominant weights wrt $B$,</p>
<p>2) The set $N_{o,r}$ of pairs $(e,r)$ (identified upto $\mathfrak{g}$ conjugacy), where e is a nilpotent element in $\mathfrak{g}$ and $r$ is an irreducible representation of the centralizer (in $\mathfrak{g}$) $Z_e$ of a nilpotent element e.</p>
<p>There is a bijective map between the two sets that plays an important
role in the representation theory of $G$ and this is often called the
Lusztig-Vogan bijection,</p>
<p>$\rho_{LV} : \Lambda^+ \rightarrow N_{o,r}$.</p>
<p>In recent works, this bijection has been studied by Ostrik,
Bezrukavnikov, Chmutova-Ostrik, Achar, Achar-Sommers (Edit: See links to refs below) using various
different tools. My question however pertains to the motivations that
point to the existence of such a bijection in the first place. As I
understand, the component group $A(O)$ where $O$ is the nilpotent orbit associated to $e$ (under the adjoint action) and a quotient of the component
group $\overline{A(O)}$ play important roles in the
algorithmic description of this bijection (say for example in determining
the map for certain $h \in \Lambda^+$, where $h$ is the Dynkin element
of a nilpotent orbit in the dual lie algebra). One of the original motivations
for the existence of such a bijection seems to have emerged from the
study of primitive ideals in the universal enveloping algebra of g. </p>
<p>My questions are the following :</p>
<ul>
<li><p>How does $\overline{A(O)}$ enter the story from the point of view of the
study of primitive ideals ?</p></li>
<li><p>Are there <em>other</em> representation theoretic motivations that point to the existence of such a bijection ? Here, I am (somewhat vaguely) counting a motivation to be 'different' if its relation to the theory of primitive ideals is nontrivial. </p></li>
</ul>
<p>[Added in Edit] Refs for some recent works on the bijection (in anti-chronological order) : </p>
<ul>
<li><p><em>Local systems on nilpotent orbits and weighted Dynkin diagrams (<a href="http://arxiv.org/abs/math/0201248">link</a>)</em> - P Achar and E Sommers</p></li>
<li><p><em>Calculating canonical distinguished involutions in the affine Weyl groups <a href="http://arxiv.org/abs/math/0106011">(link)</a></em> - T Chmutova and V Ostrik </p></li>
<li><p><em>Quasi-exceptional sets and equivariant coherent sheaves on the nilpotent cone <a href="http://arxiv.org/abs/math/0102039">(link)</a></em> - R Bezrukavnikov</p></li>
</ul>
| Jay Taylor | 22,846 | <p>This isn't even vaguely an answer to your question but is more of a clarifying remark concerning the canonical quotient. Throughout I will write [Lus84] for Lusztig's orange book "Characters of reductive groups over a finite field", Princeton University Press, 1984.</p>
<p>In what follows I will assume that $\mathbf{G}$ is a connected reductive algebraic group over an algebraic closure $\overline{\mathbb{F}_p}$ where $p$ is a good prime for $\mathbf{G}$. Furthermore, I will denote by $F : \mathbf{G} \to \mathbf{G}$ a Frobenius endomorphism and $G = \mathbf{G}^F$ the corresponding finite reductive group.</p>
<p>Let us denote by $\mathcal{E}(G,1)$ Lusztig's set of unipotent characters. These are defined to be all irreducible characters occurring in a Deligne–Lusztig virtual character $R_{\mathbf{T}}^{\mathbf{G}}(1)$ where $\mathbf{T}$ is an $F$-stable maximal torus of $\mathbf{G}$ and 1 denotes the trivial character. In [Lus84] Lusztig has defined a partitioning of $\mathcal{E}(G,1)$ into what he calls families. These are naturally in bijection with the 2-sided cells of the corresponding Weyl group of $\mathbf{G}$.</p>
<p>Now, to each family $\mathcal{F} \subseteq \mathcal{E}(G,1)$ Lusztig has defined on a case by case basis (see Chapter 4 of [Lus84]) a small finite group $\mathcal{G_F}$. This group, and its irreducible characters, plays an important role in the representation theory of $G$. In particular, this is used to not only give a labelling to the irreducible characters in $\mathcal{F}$ but also to determine the multiplicity of $\chi \in \mathcal{F}$ in the $R_{\mathbf{T}}^{\mathbf{G}}(1)$'s. However one would like a more natural interpretation for this finite group.</p>
<p>Using Lusztig–Macdonald–Spaltenstein induction and the Springer correspondence Lusztig has associated to every family $\mathcal{F} \subseteq \mathcal{E}(G,1)$ an $F$-stable unipotent class $\mathcal{O}_{\mathcal{F}}$ of $\mathbf{G}$. This turns out to be the unipotent support of the characters in $\mathcal{F}$ (see Lusztig, "A unipotent support for irreducible representations", Adv. Math., 1992). What Lusztig saw (see Chapter 14 of [Lus84]) was that the small finite group $\mathcal{G_F}$ is not exactly the component group $A(\mathcal{O}_{\mathcal{F}})$ but it is a quotient of this group, namely Lusztig's canonical quotient group $\overline{A}(\mathcal{O}_{\mathcal{F}})$.</p>
<p>This is quite vague but I hope it gives a bit more of an idea for the origins of Lusztig's canonical quotient.</p>
|
747,789 | <p>I've been reading some basic classical algebraic geometry, and some authors choose to define the more general algebraic sets as the locus of points in affine/projective space satisfying a finite collection of polynomials $f_1, \dots, f_m$ in $n$ variables without any more restrictions. Then they define an algebraic variety as an algebraic set where $(f_1, \dots, f_m)$ is a prime ideal in $k[x_1, \dots, x_n]$. </p>
<p>My question has two parts: </p>
<ol>
<li><p>I'm guessing the distinction is like any other area of math where you try to break things up into the "irreducible" case and deduce the general case from patching those together. How does that happen with varieties and algebraic sets? Is it correct to conclude that every algebraic set is somehow built from algebraic varieties since the ideal $(f_1, \dots, f_m)$ is contained in some prime (maximal) ideal? </p></li>
<li><p>How can one tell whether or not an algebraic set is a variety intuitively? I know formally you'd have to prove $(f_1, \dots, f_m)$ is prime (or perhaps there are some useful theorems out there?), but many times in texts the author simply states something is a variety without any justification. Is there a way to sort of "eye-ball" varieties in the sense that there are tell-tale signs of algebraic sets which are not varieties? </p></li>
</ol>
<p>Perhaps this is all a moot discussion since modern algebraic geometry is done with schemes and this is perhaps a petty discussion in light of that, but nonetheless, I'd like to understand the foundations before pursuing that.</p>
<p>Thanks. </p>
| Jared | 65,034 | <p>It is true that every algebraic set is a finite union of algebraic varieties (irreducible algebraic sets), and this union is unique up to reordering. These irreducible pieces of an algebraic set are called the irreducible components. This all follows from the fact that a polynomial ring over a field is Noetherian, so that an algebraic set with the Zariski topology is a Noetherian topological space.</p>
<p>As an example, I always think of the algebraic set defined by the ideal $(xz,yz),$ which is not prime. The real picture of this algebraic set is a line through a plane, and these two objects are exactly the irreducible components of the algebraic set. Here is the picture:</p>
<p>$\hspace{2.2in}$<img src="https://i.stack.imgur.com/PSjkH.jpg" alt="enter image description here"></p>
<p>The components are defined by the prime ideals $(z)$ and $(x,y)$ which are the two minimal prime ideals containing $(xz,yz)$. This may be the eyeball test you desire, as most people would look at this set and say it is made of two parts. In general, the irreducible components of an algebraic set defined by an ideal $I$ correspond exactly to the minimal prime ideals containing $I$.</p>
<p>Concerning your second question, it is not easy in general to determine when an ideal is prime. I asked a question <a href="https://math.stackexchange.com/questions/732816/techniques-for-showing-an-ideal-in-kx-1-ldots-x-n-is-prime">here</a> seeking different techniques to detect when ideals are prime. It is often easier to see that an ideal is not prime, as in the example I've given.</p>
|
3,112,682 | <p>I was looking at</p>
<blockquote>
<p><em>Izzo, Alexander J.</em>, <a href="http://dx.doi.org/10.2307/2159282" rel="nofollow noreferrer"><strong>A functional analysis proof of the existence of Haar measure on locally compact Abelian groups</strong></a>, Proc. Am. Math. Soc. 115, No. 2, 581-583 (1992). <a href="https://zbmath.org/?q=an:0777.28006" rel="nofollow noreferrer">ZBL0777.28006</a>.</p>
</blockquote>
<p>which proves existence of the Haar-measure for locally compact abelian groups using the Markov-Kakutani theorem. </p>
<p>What I find strange is that the Haar measure is constructed as an element of the dual of <span class="math-container">$C_c(X)$</span>. But for noncompact <span class="math-container">$X$</span> (such as <span class="math-container">$X$</span> being the real numbers <span class="math-container">$\Bbb R$</span>) this must be an unbounded functional (as the Lebesgue-measure on <span class="math-container">$\Bbb R$</span> is not finite). It seems like the author has no problem with this, and (without mentioning it further) goes on to define a weak-* topology for this case and even uses Banach-Alaoglu.</p>
<p>I have not seen this being done this way before, am I misunderstanding something or can one define a weak-* topology on the algebraic dual of a TVS without any problems?</p>
| Aniket Sharma | 639,028 | <p><img src="https://latex.codecogs.com/gif.latex?x%5E%7B2%7D-2xcos%5CTheta&space;+1&space;=&space;0&space;%5CRightarrow&space;cos%5E%7B2%7D%5CTheta" title="x^{2}-2xcos\Theta +1 = 0 \Rightarrow cos^{2}\Theta" /> - 1 has to be positive for solving the quadratic equation with real roots</p>
<p>this gives <img src="https://latex.codecogs.com/gif.latex?%5CTheta" title="\Theta" /> = zero</p>
<p>Hence x = 1</p>
|
3,112,682 | <p>I was looking at</p>
<blockquote>
<p><em>Izzo, Alexander J.</em>, <a href="http://dx.doi.org/10.2307/2159282" rel="nofollow noreferrer"><strong>A functional analysis proof of the existence of Haar measure on locally compact Abelian groups</strong></a>, Proc. Am. Math. Soc. 115, No. 2, 581-583 (1992). <a href="https://zbmath.org/?q=an:0777.28006" rel="nofollow noreferrer">ZBL0777.28006</a>.</p>
</blockquote>
<p>which proves existence of the Haar-measure for locally compact abelian groups using the Markov-Kakutani theorem. </p>
<p>What I find strange is that the Haar measure is constructed as an element of the dual of <span class="math-container">$C_c(X)$</span>. But for noncompact <span class="math-container">$X$</span> (such as <span class="math-container">$X$</span> being the real numbers <span class="math-container">$\Bbb R$</span>) this must be an unbounded functional (as the Lebesgue-measure on <span class="math-container">$\Bbb R$</span> is not finite). It seems like the author has no problem with this, and (without mentioning it further) goes on to define a weak-* topology for this case and even uses Banach-Alaoglu.</p>
<p>I have not seen this being done this way before, am I misunderstanding something or can one define a weak-* topology on the algebraic dual of a TVS without any problems?</p>
| Swapnil Rustagi | 182,381 | <p>Simply consider it a quadratic equation in <span class="math-container">$x$</span>.</p>
<p><span class="math-container">$x^2 + 1 = 2x\cos\theta $</span></p>
<p><span class="math-container">$x^2 - 2x\cos\theta + 1 = 0 $</span></p>
<p>By the quadratic formula, </p>
<p><span class="math-container">$$x = \frac{2\cos\theta \pm \sqrt{4\cos^2\theta - 4} }{2} = \cos\theta \pm \sqrt{\cos^2\theta - 1} = \cos\theta \pm i\sin\theta = e^{\pm i\theta }$$</span></p>
<p><span class="math-container">$$ \frac{1}{x} = e^{\mp i\theta }$$</span></p>
<p><span class="math-container">$$ x^n = e^{\pm in\theta } = \cos(n\theta) \pm i\sin(n\theta)$$</span>
<span class="math-container">$$ \frac{1}{x^n} = e^{\mp in\theta } = \cos(n\theta) \mp i\sin(n\theta)$$</span></p>
<p><span class="math-container">$$ x^n +\frac{1}{x^n} = 2 \cos(n\theta) $$</span></p>
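<p>A numerical sanity check of the identity (a sketch, with an arbitrary <span class="math-container">$\theta$</span> and <span class="math-container">$n$</span>):</p>

```python
import cmath
import math

theta, n = 0.7, 5
x = cmath.exp(1j * theta)  # one root of x^2 - 2x cos(theta) + 1 = 0
print(abs(x**2 - 2 * x * math.cos(theta) + 1))        # ~0: x solves the quadratic
print(abs(x**n + x**(-n) - 2 * math.cos(n * theta)))  # ~0: the derived identity
```
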
|
1,811,612 | <p>We have $5$ normal dice. What is the chance to get five $6$'s if you can roll the dice that do not show a 6 one more time (if you do get a die with a $6$, you can leave it and roll the others one more time. Example: first roll $6$ $5$ $1$ $2$ $3$, we will roll $4$ dice and hope for four $6$s or if we get $6$ $6$ $2$ $3$ $3$ we will roll three dice one more time). I tried to calculate if you get $1$, $2$, $3$, $4$ dice with $6$ but I don't know how to "sum" the cases.</p>
| Felicity | 332,295 | <p>In a very long but straightforward way, we can calculate the probability by breaking it down into scenarios by how many $6$'s appear on the first roll and calculate this probability.</p>
<p><strong>Case 1:</strong> Five $6$'s appear on the first roll: Event $A$</p>
<p>$$P(A)=\left(\dfrac{1}{6}\right)^5$$</p>
<p><strong>Case 2:</strong> Four $6$'s appear on the first roll and we roll a $6$ for the remaining one die: Event $B$</p>
<p>$$P(B)=\binom{5}{1}\left(\dfrac{1}{6}\right)^4\left(\dfrac{5}{6}\right)\cdot\left(\dfrac{1}{6}\right)$$</p>
<p><strong>Case 3:</strong> Three $6$'s appear on the first roll and we roll two $6$'s for the remaining two dice: Event $C$</p>
<p>$$P(C)=\binom{5}{2}\left(\dfrac{1}{6}\right)^3\left(\dfrac{5}{6}\right)^2\cdot\left(\dfrac{1}{6}\right)^2$$</p>
<p>and we have the other three cases (two sixes on the first roll and three sixes on the second, a six on the first roll and four sixes on the second, and none on the first and five on the second). But by this point we can see a pattern in the probabilities of the cases.</p>
<p>Indeed, the probability of the event $X$ which you are asking is the sum of the probabilities of these six cases.</p>
<p>In other words, $$P(X)=P(A)+P(B)+\cdot\cdot\cdot+P(F)=\sum_{n=0}^5 \binom{5}{n}\left(\dfrac{1}{6}\right)^5\left(\dfrac{5}{6}\right)^n=\dfrac{161051}{60466176}$$</p>
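<p>The sum can be evaluated exactly (a sketch, not part of the answer). Note it also equals $(11/36)^5$, since each individual die shows a six within its two allowed rolls with probability $1/6 + (5/6)(1/6) = 11/36$:</p>

```python
from fractions import Fraction
from math import comb

# n = number of non-sixes on the first roll, all of which must be re-rolled to sixes
p = sum(comb(5, n) * Fraction(1, 6)**5 * Fraction(5, 6)**n for n in range(6))
print(p, p == Fraction(11, 36)**5)  # 161051/60466176 True
```
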
|
221,729 | <p>So far, I have proved the following:</p>
<p>Suppose $X,Y$ are metric spaces and $E$ is dense in $X$ and $f:E\rightarrow Y$ is uniformly continuous. Then,</p>
<ol>
<li><p>$Y=\mathbb{R}^k \Rightarrow \exists$ a continuous extension.</p></li>
<li><p>$Y$ is compact $\Rightarrow \exists$ a continuous extension.</p></li>
<li><p>$Y$ is complete $\Rightarrow \exists$ a continuous extension. (AC$_\omega$)</p></li>
<li><p>$E$ is countable & $Y$ is complete $\Rightarrow \exists$ a continuous extension.</p></li>
</ol>
<p>Which of these are still true and which become false if $f$ is merely continuous, not uniformly continuous?</p>
| Austin Mohr | 11,245 | <p><a href="http://books.google.com/books/about/Concrete_mathematics.html?id=pntQAAAAMAAJ" rel="nofollow">Concrete Mathematics</a> is a good place to start with asymptotics and related ideas, particularly if you are in computer science (which your tags suggest).</p>
|
978,114 | <p>From $ax\geq 0$ for $a>0$, we have $x\geq 0$. So I suggest that if $Ax\geq 0$ for a positive definite matrix $A$, where $x$ is a column vector and $0$ is the column vector with all entries $0$, then $x\geq 0$, that is, every coordinate of $x$ is at least $0$.</p>
<p>However, I could not prove it...</p>
| Alfred Chern | 42,820 | <p>If $Ax\geq0$ is required to hold for every positive definite matrix $A$, then the conclusion is right: just take $A=I$.
If $Ax\geq0$ holds only for one fixed positive definite matrix $A$, then
$$A=\left(
\begin{array}{cc}
1 & 1 \\
1 & 4 \\
\end{array}
\right),\qquad
x=\left(
\begin{array}{c}
-1 \\
1 \\
\end{array}
\right)$$
is an example showing that the conclusion is not true: here $Ax=\left(\begin{array}{c} 0 \\ 3 \\ \end{array}\right)\geq 0$, yet $x$ has a negative coordinate.</p>
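<p>A quick check of this counterexample (a sketch, not in the original answer):</p>

```python
A = [[1, 1],
     [1, 4]]
x = [-1, 1]

# A is positive definite by Sylvester's criterion: its leading principal
# minors are 1 > 0 and det(A) = 1*4 - 1*1 = 3 > 0.
Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
print(Ax)  # [0, 3] -- Ax >= 0 componentwise, although x has a negative entry
```
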
|
2,721,992 | <p>I would like to see the fact that the components of a vector transform differently (contravariant transformation) than the unit basis vectors (covariant transformation) for the specific case of cartesian to polar coordinate transformation. </p>
<p>The polar unit vectors $\hat{r}$ and $\hat{\theta}$ can be expressed in terms of cartesian unit vectors, $\hat{x}$ and $\hat{y}$, as the following
\begin{equation}
\hat{r}= \text{cos}\phi \ \hat{x} + \text{sin}\phi \ \hat{y} \\
\hat{\theta}= -\text{sin}\phi \ \hat{x} + \text{cos}\phi \ \hat{y} \tag{1}
\end{equation} </p>
<p>Any vector, $\vec{V}$, can be expressed in the cartesian coordinate system as $\vec{V}=V_x \ \hat{x} + V_y \ \hat{y}$. The same vector can be expressed in polar coordinates as $\vec{V}=V_r \ \hat{r} + V_\theta \ \hat{\theta}$. We then have
\begin{equation}
V_x \ \hat{x} + V_y \ \hat{y}=V_r \ \hat{r} + V_\theta \ \hat{\theta}. \tag{2}
\end{equation}
I then project both sides of (2) once onto $\hat{r}$, and once onto $\hat{\theta}$. Using (1) and (2) we get
\begin{equation}
V_r= \text{cos}\phi \ V_x+\text{sin}\phi \ V_y \\
V_\theta= -\text{sin}\phi \ V_x+\text{cos}\phi \ V_y \tag{3}
\end{equation}</p>
<p>Comparing (1) and (3), both the unit vectors and the components of a vector are transforming with the same rule, which is a contradiction! What am I missing here?</p>
| Ash | 114,080 | <p>I think that your answer is unnecessarily complicated for this question. In matrix notation equation (1) in the question is</p>
<p><span class="math-container">\begin{equation}
\begin{pmatrix}
\hat{r} && \hat{\theta}
\end{pmatrix}
=\begin{pmatrix}
\hat{x} && \hat{y}
\end{pmatrix}
\begin{pmatrix}
cos\phi && -sin\phi \\
sin\phi && cos\phi
\end{pmatrix}
\triangleq
\begin{pmatrix}
\hat{x} && \hat{y}
\end{pmatrix}
M
\end{equation}</span></p>
<p>and equation (3) expresses the transformation rule for a row (covariant) vector</p>
<p><span class="math-container">\begin{equation}
\begin{pmatrix}
v_r && v_{\theta}
\end{pmatrix}=
\begin{pmatrix}
v_x && v_y
\end{pmatrix}
M
\end{equation}</span></p>
<p>so the transformation for a column (thus contravariant) vector is</p>
<p><span class="math-container">\begin{equation}
\begin{pmatrix}
v_r \\
v_{\theta}
\end{pmatrix}
=M^T
\begin{pmatrix}v_x\\
v_y\end{pmatrix}
\end{equation}</span></p>
<p>Which is indeed the inverse of <span class="math-container">$M$</span> as the latter is orthogonal.</p>
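<p>As a numerical sanity check of the last claim (a small Python sketch; the angle and component values below are arbitrary illustration choices), one can verify both that $M$ is orthogonal, so $M^T=M^{-1}$, and that the components transform with $M^T$ exactly as in equation (3):</p>

```python
import math

phi = 0.7                      # arbitrary angle, for illustration only
c, s = math.cos(phi), math.sin(phi)

# Change-of-basis matrix M from equation (1)
M  = [[c, -s],
      [s,  c]]
Mt = [[M[j][i] for j in range(2)] for i in range(2)]   # transpose

# Orthogonality: M^T M = I, hence M^T = M^{-1}
I = [[sum(Mt[i][k] * M[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
assert all(abs(I[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2))

# Components transform with M^T, matching equation (3)
vx, vy = 2.0, 3.0
vr, vtheta = c * vx + s * vy, -s * vx + c * vy
w = [sum(Mt[i][k] * [vx, vy][k] for k in range(2)) for i in range(2)]
assert abs(w[0] - vr) < 1e-12 and abs(w[1] - vtheta) < 1e-12
print("M^T M = I; components transform with M^T = M^{-1}")
```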
|
163,296 | <p>For positive real numbers $x_1,x_2,\ldots,x_n$ and any $1\leq r\leq n$ let $A_r$ and $G_r$ be , respectively, the arithmetic mean and geometric mean of $x_1,x_2,\ldots,x_r$.</p>
<p>Is it true that the arithmetic mean of $G_1,G_2,\ldots,G_n$ is never greater then the geometric mean of $A_1,A_2,\ldots,A_n$ ?</p>
<p>It is obvious for $n=2$, and i have a (rather cumbersome) proof for $n=3$.</p>
| Andrew | 11,265 | <p>It's a special case ($r=0$, $s=1$) of the mixed means inequality
$$
M_n^s[M^r[\bar a]]\le M_n^r[M^s[\bar a]], \quad r,s\in \mathbb R,\ r<s,
$$
where $M^s$ is the power mean with exponent $s$, see <a href="http://books.google.ru/books?id=ycExBYeCnu4C&printsec=frontcover&hl=ru#v=onepage&q&f=false">Survey on Classical Inequalities</a>, p. 32, theorem 2.</p>
|
1,444,820 | <p>I want to solve the following equation for $x$. Is that possible, and what would the solution look like?</p>
<p>$y = xp -qx^{2}$</p>
<p>Thanks for Help!</p>
| Jack D'Aurizio | 44,121 | <p>There is no solution. Polynomials are a dense subset of $L^2(0,1)$ or $C^0(0,1)$. The only working choice is $f(x)=\delta(x-1/2)$, but it is a distribution, not a polynomial.</p>
|
1,780,253 | <p>If I have two points $p_1, p_2$ uniformly randomly selected in the unit ball, how can I calculate the probability that one of them is closer to the center of the ball than the distance between the two points?</p>
<p>I know how to calculate the distribution of the distance between two random points in the ball, same for one point from the center, but I'm not sure how to use the two distributions to get what I'm after.</p>
| Amit Bikram | 509,096 | <p>For one of the points to be closer to the center than the other point, both points should lie outside the region of the sphere which subtends a solid angle of $2\pi(1-\cos a)$, where $a = 1$ radian ($180/\pi$ degrees).</p>
<p>The solid angle subtended at the center by the entire sphere is $4\pi$. Hence the required probability is $$1-\frac{2\cdot 2\pi(1-\cos a)}{4\pi} = \cos a \approx 0.5403.$$</p>
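<p>Any closed-form value can be compared against a Monte Carlo estimate. The sketch below (my addition; it reads the event as "at least one point is closer to the centre than the distance between the two points") estimates the probability by rejection sampling in the unit ball:</p>

```python
import math
import random

def random_ball_point(rng):
    # Rejection sampling: uniform point in the unit ball
    while True:
        p = [rng.uniform(-1, 1) for _ in range(3)]
        if sum(c * c for c in p) <= 1:
            return p

rng = random.Random(0)
trials = 50_000
origin = [0.0, 0.0, 0.0]
hits = 0
for _ in range(trials):
    p1, p2 = random_ball_point(rng), random_ball_point(rng)
    if min(math.dist(p1, origin), math.dist(p2, origin)) < math.dist(p1, p2):
        hits += 1
estimate = hits / trials
print(f"estimated probability: {estimate:.3f}")
```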
|
129,530 | <p>This book, which needs to be returned quite soon, has a problem I don't know where to start. How do I find a 4 parameter solution to the equation</p>
<p>$x^2+axy+by^2=u^2+auv+bv^2$</p>
<p>The title of the section this problem comes from is entitled (as this question is titled) "Numbers of the Form $x^2+axy+by^2$", yet it deals almost exclusively with numbers of the form $x^2+y^2$. It looks like almost an afterthought or a preview of what's to come where it gives the formula</p>
<p>$(m^2+amn+bn^2)(p^2+apq+bq^2)=r^2+ars+bs^2,r=mp-bnq,s=np+mq+anq$</p>
<p>Then 6 of the 7 problems use this form. The first few involve solving the form $z^k=x^2+axy+by^2$, which I quickly figured out are solved by letting $z=u^2+auv+bv^2$, then using the above formula to get higher powers. So for $z^2$ for example, I set $m=p=u$ and $n=q=v$ to get $x$ and $y$ in terms of $u$ and $v$. But for this problem, I'm drawing a blank.</p>
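<p>The composition formula quoted above can at least be checked numerically; a short sketch testing it on random integer inputs:</p>

```python
import random

def Q(a, b, x, y):
    # The quadratic form x^2 + a*x*y + b*y^2
    return x * x + a * x * y + b * y * y

rng = random.Random(1)
for _ in range(1000):
    a, b, m, n, p, q = (rng.randint(-10, 10) for _ in range(6))
    r = m * p - b * n * q
    s = n * p + m * q + a * n * q
    assert Q(a, b, m, n) * Q(a, b, p, q) == Q(a, b, r, s)
print("composition identity verified on 1000 random integer cases")
```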
| zyx | 14,120 | <p>Over a field the space of rational solutions is three dimensional. Integer solutions can be formed as multiples of rational solutions and maybe this multiplication factor is the fourth parameter but it is not clear what the problem is asking. The number of parameters can always be increased from a known parametrization by having some of the parameters be arbitrary functions of several new parameters but this is not natural.</p>
<p>The problem asks for arbitrary pairs of points on a conic of the form $Q(x,y)=R$ where $Q(x,y)=x^2 + axy + by^2$. It is not specified but probably intended that the form $Q$ is fixed and $R$ is variable.</p>
<p>For any 3 numbers $(u,v,t)$ with $u$ and $v$ not both zero, the line through $(u,v)$ of slope $t$ intersects the conic $Q(x,y)=R$ with $R=Q(u,v)$ at a second point whose coordinates $(x,y)$ are rational functions of $(u,v,t)$. This is a 3-dimensional family of rational solutions and this is the best one can do if parameters means dimensionality of the family. </p>
|
3,056,616 | <p><span class="math-container">$P(x) = 0$</span> is a polynomial equation having <strong>at least one</strong> integer root, where <span class="math-container">$P(x)$</span> is a polynomial of degree five and having integer coefficients. If <span class="math-container">$P(2) = 3$</span> and <span class="math-container">$P(10)= 11$</span>, then prove that the equation <span class="math-container">$P(x) = 0$</span> has <strong>exactly one</strong> integer root.</p>
<p>I tried by assuming a fifth degree polynomial but got stuck after that.</p>
<p>The question was asked by my friend.</p>
| Bill Dubuque | 242 | <p><a href="https://math.stackexchange.com/a/617426/242"><strong>Key Idea</strong> <span class="math-container">$\ $</span> (Kronecker)</a> <span class="math-container">$ $</span> How polynomials can factor is constrained by how their <em>values</em> factor, <span class="math-container">$ $</span> e.g. as below, in some cases if <span class="math-container">$\,P\,$</span> takes a prime value then it has at most one integer root.</p>
<p><strong>Hint</strong> <span class="math-container">$ $</span> If <span class="math-container">$\,P\,$</span> has more roots than <span class="math-container">$\,P(2)\,$</span> has prime factors then factoring <span class="math-container">$P$</span> & evaluating at <span class="math-container">$x\!=\!2$</span> <span class="math-container">$\,\Rightarrow\,P(1)\!=\!0\,$</span> or <span class="math-container">$P(3)\!=\!0.\,$</span> But <span class="math-container">$P(1)\!\neq\! 0\,$</span> else <span class="math-container">$\,10\!-\!1\mid P(10)\!-\!P(1) = 11.\,$</span> <span class="math-container">$P(3)\!\neq\! 0\,$</span> similarly.</p>
<p><strong>Theorem</strong> <span class="math-container">$ $</span> Suppose <span class="math-container">$P(x)$</span> is a polynomial with integer coefficients and <span class="math-container">$a$</span> is an integer with <span class="math-container">$\,P(a)\neq 0\,$</span> and there exists an integer <span class="math-container">$b$</span> such that neither of <span class="math-container">$\,b\!-\!a\pm 1$</span> divides <span class="math-container">$P(b).$</span></p>
<p><span class="math-container">$$\begin{align} {\rm Then}\ \ &P(a)\,\ \text{has $\,\ k\,\ $ prime factors (counting multiplicity)}\\
\Longrightarrow\ \ &P(x)\, \text{ has $\le\! k\,$ integer roots (counting multiplicty)}
\end{align}\qquad $$</span></p>
<p><strong>Proof</strong> <span class="math-container">$ $</span> If not then <span class="math-container">$P$</span> has at least <span class="math-container">$\,k+1\,$</span> roots <span class="math-container">$\,r_i\,$</span> so iterating the Factor Theorem yields
<span class="math-container">$$\,P(x) = (x-r_0)\cdots (x-r_k)\,q(x)\qquad$$</span></p>
<p>for a polynomial <span class="math-container">$\,q(x)\,$</span> with integer coefficients. Evaluating above at <span class="math-container">$\,x = a\,$</span> yields </p>
<p><span class="math-container">$$\,P(a) = (a-r_0)\cdots (a-r_k)\,q(a)\qquad$$</span></p>
<p>If all <span class="math-container">$\,a-r_i\neq \pm1\,$</span> then they all have a prime factor yielding at least <span class="math-container">$k+1$</span> prime factors on the RHS, contra LHS <span class="math-container">$\,P(a)\,$</span> has <span class="math-container">$\,k\,$</span> prime factors (prime factorizations are <em>unique</em>). So some <span class="math-container">$\,a-r_j = \pm1\,$</span> so <span class="math-container">$\,r_j = a\pm 1.\,$</span> Evaluating at <span class="math-container">$\, x = b\,$</span> yields <span class="math-container">$\,b-r_j = b-a\pm1\,$</span> divides <span class="math-container">$\, P(b),\,$</span> contra hypothesis. </p>
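<p>For a concrete illustration of the original problem, here is one (hypothetical) degree-five polynomial with integer coefficients satisfying $P(2)=3$ and $P(10)=11$, with integer root $-1$; a brute-force scan over a large window finds no other integer root, as the theorem guarantees:</p>

```python
def P(x):
    # Hypothetical example, built so that P(2) = 3, P(10) = 11, P(-1) = 0
    return (x + 1) * (1 + (x - 2) * (x - 10) * x ** 2)

assert P(2) == 3 and P(10) == 11 and P(-1) == 0
roots = [x for x in range(-10_000, 10_001) if P(x) == 0]
print("integer roots found in [-10000, 10000]:", roots)
```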
|
2,917,896 | <p>I think my proof is wrong but I don't know how to approach the statement differently. I hope you can help me identify where I'm mistaken/incomplete.</p>
<p>Proof:
$$\text{We need to prove: } \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] = [2, 6] $$</p>
<p>$$\text{Thus, } x \in \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] \iff x \in [2, 6]$$</p>
<p>$$\text{We first consider the converse of the biconditional.}$$</p>
<p>$$\text{and proceed by contrapositive.} $$
$$x \notin \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] \implies x \notin [2, 6]$$
$$\text{Given that when } n = 1, [3-\frac{1}{n}, 6]=[2,6] \text{ and } $$
$$ \forall z \in (\mathbb{N} - {1}) , [3-\frac{1}{z}, 6] < [2, 6] \text{ thus } \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] = [2,6]$$
$$\text{It follows that, } x \notin [2,6] \text{. Thus the converse is true.}$$</p>
<p>$$\text{Now, for left to right } (\implies) \text{ we proceed by direct proof. }$$</p>
<p>$$x \in \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] \implies x \in [2, 6]$$
$$\text{By the same logic as for the converse, we continue..}$$</p>
<p>$$\text{Given that, when } n = 1, [3-\frac{1}{n}, 6] = [2, 6], \text{ It follows that: } $$
$$x \in [2,6]$$</p>
<p>$$\therefore \bigcup_{n=1}^{\infty} A_{n} = [2, 6] \text{ } \blacksquare$$ </p>
<p>Thank you for your time.</p>
<hr>
<p><strong>Updated proof:</strong></p>
<p>Proof: </p>
<p>We assume $x \in \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6]$
$$A_{1} = [2, 6] > \bigcup_{i=2}^{\infty} [3 - \frac{1}{i}, 6] = [ \frac{5}{2}, 6] *$$
$$\therefore A_{1} = \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] = [2, 6] , \space x \in [2,6]$$</p>
<p>[ I placed a (*) to show where I'm uncertain.
My problem is in knowing how much I should explain to the reader. I have to establish somehow that $A_{1}$ is the biggest interval but I kind of leave open 'why' $\bigcup_{i=2}^{\infty} [3 - \frac{1}{i}, 6] = [ \frac{5}{2}, 6]$ is true. For example, I thought I had to show why $3 - \frac{1}{i} > 2$ for every $i \geq 2$. So I have a tendency to break everything down too much]</p>
<p>Now for the converse we proceed by contrapositive.</p>
<p>We assume $x \notin \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6]$
$$A_{1} = [2, 6] > \bigcup_{i=2}^{\infty} [3 - \frac{1}{i}, 6] = [ \frac{5}{2}, 6] *$$
$$\therefore A_{1} = \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] = [2, 6] , \space x \notin [2,6]$$</p>
<p>$$ \therefore \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] = [2, 6] \blacksquare$$</p>
<p><strong><em>Updated proof #2:</em></strong></p>
<p>Proof:</p>
<p>We assume, $x \in \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6]$.</p>
<p>Since $2 \leq 3 - \frac{1}{n} < 3$ for all $ n \geq 1$,
$ \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] \subseteq [2, 6], x \in [2, 6]$ </p>
<p>For the converse we assume $x \notin \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6]$. </p>
<p>Following the same reasoning as above,
Since $2 \leq 3 - \frac{1}{n} < 3$ for all $ n \geq 1$, $ \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] \subseteq [2, 6], x \notin [2, 6]$ </p>
<p>$\therefore \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] = [2, 6] \space \blacksquare$ </p>
| xbh | 514,490 | <p>Not to the question but to the updated proof: </p>
<p>You have not yet proved
$$
\bigcup_1^\infty \left[3 -\frac 1n, 6\right] = [2,6],
$$
so in your proof such equation is definitely not allowed to appear. Also I still do not clearly get your logic inferences in your proof, i.e. I do not see the reasoning part. Here I write a demonstration.</p>
<h3>Demo proof</h3>
<p>$\blacktriangleleft$ We show that
$$
x \in \bigcup_1^\infty \left[3 - \frac 1n, 6\right] \iff x \in [2,6].
$$</p>
<p>$\implies$ part:</p>
<p>Assume $x \in \bigcup_1^\infty [3-1/n, 6]$, then there exists an $m \in \mathbb N^*$ s.t. $x \in [3-1/m, 6]$ [This is the definition of union]. Since for all $n\in \Bbb N^*$, $[3 -1/n, 6] \subseteq [2,6]$, we get $x \in [2,6]$ as well. </p>
<p>$\impliedby$ part:</p>
<p>Proceed by contrapositive. Suppose $x \notin \bigcup_1^\infty [3-1/n, 6]$, then $x \notin [3-1/n, 6]$ for all $n \in \Bbb N^*$, then in particular $x \notin [3-1,6] = [2,6]$. </p>
<p>Combined the results we conclude that
$$
\bigcup_1^\infty \left[3 - \frac 1n, 6\right] = [2,6]. \blacktriangleright
$$</p>
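<p>As a finite sanity check (an illustration, not a proof): since the $n=1$ interval $[2,6]$ already contains every other $[3-1/n,6]$, membership in any truncation of the union agrees with membership in $[2,6]$:</p>

```python
def in_union(x, N=100):
    # x lies in the truncated union iff 3 - 1/n <= x <= 6 for some n = 1..N
    return any(3 - 1 / n <= x <= 6 for n in range(1, N + 1))

for x in [1.99, 2.0, 2.5, 3.0, 6.0, 6.01]:
    assert in_union(x) == (2 <= x <= 6)
print("membership agrees with [2, 6] on all sample points")
```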
|
2,773,515 | <p>Given $X_1 \sim \exp(\lambda_1)$ and $X_2 \sim \exp(\lambda_2)$, and that they are independent, how can I calculate the probability density function of $X_1+X_2$? </p>
<hr>
<p>I tried to define $Z=X_1+X_2$ and then: $f_Z(z)=\int_{-\infty}^\infty f_{Z,X_1}(z,x) \, dx = \int_0^\infty f_{Z,X_1}(z,x) \, dx$.<br>
And I don't know how to continue from this point.</p>
| TheSimpliFire | 471,884 | <p><strong>HINT:</strong></p>
<p>Assuming independence and, for simplicity, a common rate $\lambda_1=\lambda_2=\lambda$.</p>
<p>We are given that the P.D.F. of $X$ is $f_X(x)=\lambda e^{-\lambda x},\, x\ge0$ and the P.D.F. of $Y$ is $f_Y(y)=\lambda e^{-\lambda y},\, y\ge0$. </p>
<p>Then, writing $z=x+y$ and using convolution, $$\begin{align}f_{X+Y}(z)&=\int_{-\infty}^\infty f_X(z-y)f_Y(y)\,dy\\&=\int_0^{z}\lambda e^{-\lambda(z-y)}\lambda e^{-\lambda y}\,dy\end{align}$$ which should now be easy to integrate.</p>
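<p>Evaluating that integral gives $f_{X+Y}(z)=\lambda^2 z e^{-\lambda z}$, a Gamma$(2,\lambda)$ density; a quick numerical check of this claim (rate and evaluation point chosen arbitrarily for illustration):</p>

```python
import math

lam = 1.3            # assumed common rate, for illustration

def convolution(z, steps=20_000):
    # Midpoint Riemann sum of the integral from 0 to z of
    # lam*exp(-lam*(z-y)) * lam*exp(-lam*y) dy
    h = z / steps
    return h * sum(lam * math.exp(-lam * (z - y)) * lam * math.exp(-lam * y)
                   for y in (h * (i + 0.5) for i in range(steps)))

z = 2.0
exact = lam ** 2 * z * math.exp(-lam * z)   # Gamma(2, lam) density at z
assert abs(convolution(z) - exact) < 1e-6
print(f"numeric: {convolution(z):.6f}  closed form: {exact:.6f}")
```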
|
1,691,306 | <p>Find all pairs of values $a$ and $b$ that satisfy $(a+bi)^2 = 48 + 14i$</p>
<p>Here's what I have so far:</p>
<p>$$\begin{align}
z^2 &= 48 + 14i = 50 \operatorname{cis} 0.2837\\
z &= \sqrt{50} \operatorname{cis} 0.1419 = 7 + i \\
z &= \sqrt{50} \operatorname{cis} 3.2834 = -7 - i\\
a &= ± 7 \\
b &= ± 1
\end{align} $$</p>
<p>What are the other solutions, and how do I find them?</p>
| Jan Eerland | 226,665 | <p>$$\left(a+bi\right)^2=48+14i\Longleftrightarrow$$
$$\left(a+bi\right)\left(a+bi\right)=48+14i\Longleftrightarrow$$
$$a^2-b^2+2abi=48+14i$$</p>
<p>Now, see that:</p>
<ul>
<li>$$\Re\left(a^2-b^2+2abi\right)=48\Longleftrightarrow a^2-b^2=48$$</li>
<li>$$\Im\left(a^2-b^2+2abi\right)=14\Longleftrightarrow 2ab=14\Longleftrightarrow ab=7$$
<hr></li>
</ul>
<p>Now, solve this system for the real and complex solutions:</p>
<p>$$
\begin{cases}
a^2-b^2=48\\
ab=7
\end{cases}\Longleftrightarrow
\begin{cases}
a^2-b^2=48\\
b=\frac{7}{a}
\end{cases}\Longleftrightarrow
$$
$$
\begin{cases}
a^2-\left(\frac{7}{a}\right)^2=48\\
b=\frac{7}{a}
\end{cases}\Longleftrightarrow
\begin{cases}
a^2-\frac{49}{a^2}=48\\
b=\frac{7}{a}
\end{cases}
$$</p>
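<p>A quick check of the resulting pairs $(a,b)=(7,1)$ and $(-7,-1)$ with Python's built-in complex numbers (illustration only):</p>

```python
for a, b in [(7, 1), (-7, -1)]:
    z = complex(a, b)
    assert z * z == 48 + 14j          # (a + bi)^2 = 48 + 14i, exactly

# Mixed signs fail, consistent with ab = 7 > 0
assert complex(7, -1) * complex(7, -1) != 48 + 14j
print("verified: (7, 1) and (-7, -1)")
```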
|
518,140 | <p>What is the relation between the definition of homotopy of two functions</p>
<blockquote>
<p>"A homotopy between two continuous functions $f$ and $g$ from a topological space $X$ to a topological space $Y$ is defined to be a continuous function $H : X × [0,1] → Y$ from the product of the space $X$ with the unit interval $[0,1]$ to $Y$ such that, if $x \in X$ then $H(x,0) = f(x)$ and $H(x,1) = g(x)$".</p>
</blockquote>
<p>and the definition of the homotopy between two morphisms of chain complexes</p>
<blockquote>
<p>"Let $A$ be an additive category. The homotopy category $K(A)$ is based on the following definition: if we have complexes $A, B$ and maps $f, g$ from $A$ to $B$, a chain homotopy from $f$ to $g$ is a collection of maps $h^n \colon A^n \to B^{n - 1}$ (not a map of complexes) such that
$f^n - g^n = d_B^{n - 1} h^n + h^{n + 1} d_A^n$, or simply $f - g = d_B h + h d_A$." </p>
</blockquote>
<p>Please help me. Thank you!</p>
| Mikhail Katz | 72,694 | <p>The Cusanus (Nicolas of Cusa) (1401-1464) made some observations that proved to be instrumental in the development of the calculus. Historians of mathematics credit him with the "bridge of continuity", or the "principle of continuity", an instance of which is his view of a circle as an infinite-sided polygon. This idea was used effectively by Kepler in solving problems in astronomy (in particular the area law). The principle of continuity could have been the inspiration for Leibniz's law of continuity: "what succeeds in the finite, succeeds in the infinite, as well", which found a precise mathematical expression in the transfer principle of Abraham Robinson in the 1960s.</p>
|
105,614 | <p>I have the following problem: Let $A, B\subset R^3$, $A$ is homeomorphic to a ball, while $B$ is a standard Euclidean ball. Can it happen that the fundamental group of $A\setminus B$ is a perfect group? I am interested in answers for $A$ and $B$ both closed and open, so in fact this is 4 questions. </p>
<p>I am aware of disturbing examples like the infinite grope or the complement of the Alexander's horned sphere, but I still strongly believe that the answer should be no.</p>
| Roberto Frigerio | 6,206 | <p>Let me deal with the following very special case: the closure of $B$ is contained in the internal part of $A$, and $A$ is bounded. In this case, using that $\partial A$ is compact one can show that there exists a standard open ball $B'$ which has the same center as $B$ but a strictly larger radius, and is such that the closure of $B'$ is contained in the internal part of $A$. Then, one may compute the fundamental group of $A$ by applying Van Kampen Theorem to the open covering $B'$, $A\setminus B$. Since $A$ and $(A\setminus B)\cap B'=B'\setminus B$ are simply connected, we easily get that $A\setminus B$ is simply connected too.</p>
|
105,614 | <p>I have the following problem: Let $A, B\subset R^3$, $A$ is homeomorphic to a ball, while $B$ is a standard Euclidean ball. Can it happen that the fundamental group of $A\setminus B$ is a perfect group? I am interested in answers for $A$ and $B$ both closed and open, so in fact this is 4 questions. </p>
<p>I am aware of disturbing examples like the infinite grope or the complement of the Alexander's horned sphere, but I still strongly believe that the answer should be no.</p>
| Ian Agol | 1,345 | <p>Consider a smooth properly embedded surface $P\subset \mathbb{R}^3$. Then $\mathbb{R}^3= X\cup Y$, where $X\cap Y=P$ and $X, Y$ are properly embedded submanifolds with $\partial X=\partial Y=P$. By Mayer-Vietoris, we have an exact sequence $0=H_2(\mathbb{R}^3)\to H_1(P)\to H_1(X)\oplus H_1(Y)\to H_1(\mathbb{R}^3)=0$, so we see that $H_1(X)\oplus H_1(Y)\cong H_1(P) \neq 0$ unless $P$ is a union of smoothly properly embedded planes and spheres. Therefore at least one component of $\mathbb{R}^3\backslash P$ does not have perfect fundamental group, or else $P$ is a union of planes and spheres (since $P$ is smoothly properly embedded, there's a nice collar neighborhood, so $H_i(X)\cong H_i(int(X))$, and same for $Y$). In the case that $P$ is a union of properly embedded planes and 2-spheres, by Seifert-Van Kampen's theorem, each $\pi_1(X,x)$ and $\pi_1(Y,y)$, for $x\in X, y\in Y$ injects into $\pi_1(\mathbb{R}^3)$, so is trivial. </p>
<p>Let's apply this to your situation. I'll consider the case of $int(A)\backslash B$, since there's no issue of local connectivity for an open set. Then $P=\partial B \cap int(A)\subset int(A)\cong \mathbb{R}^3$ is a properly embedded smooth surface, so either there is a component of $int(A)\backslash B$ which does not have perfect fundamental group, or $P$ is a union of properly embedded planes in $int(A)$ (or a sphere), in which case each complementary region has trivial fundamental group. I think this answers at least one interpretation of your question. </p>
|
230,887 | <p>Let $(F^\bullet,d_F)$ and $(G^\bullet,d_G)$ be two complexes in an abelian category $\mathbf{A}$.</p>
<p>The complex cone $Cone(\varphi)^\bullet$ of a morphism of complexes $\varphi:F^\bullet \to G^\bullet$ is defined as</p>
<p>$$Cone(\varphi)^i=G^i\oplus F^{i+1},$$</p>
<p>and its differential is</p>
<p>$$d(g^i,f^{i+1})=(d_G(g^i)+\varphi(f^{i+1}),-d_F(f^{i+1})).$$</p>
<p>then there are natural maps $G^\bullet \to Cone(\varphi)^\bullet$ and $Cone(\varphi)^\bullet \to F[1]^\bullet$ that make</p>
<p>$$F\to G\to Cone(\varphi) \to F[1]$$</p>
<p>into a distinguished triangle inside the derived category $\mathbf{D}^b(\mathbf{A})$.</p>
<p>My question is: what is the reason behind the "twisting" of the first component of the differential with $\varphi(f^{i+1})$? Shouldn't one obtain an honest complex even without that? It must be required by some interesting property of the cone itself.</p>
| Piotr Achinger | 3,847 | <p>Short answer: Otherwise it wouldn't depend on $\phi$!</p>
<p>Longer answer: Think about it this way: write $F$ and $G$ vertically side by side (in the 0-th and 1st column, respectively), with horizontal maps $\phi$. Since $\phi$ commutes with $d$, you get a double complex, call it $C$. The projection to $F$ and the inclusion of $G$ give a short exact sequence of double complexes $$0
\to G[-1] \to C\to F[0]\to 0.$$ Here $G[-1]$ means $G$ considered as a double complex in the 1st column, and similarly for $F[0]$. Now we take the total complexes. Recall that for a double complex $(C^{p,q}, d^v, d^h)$, ${\rm Tot}(C^{\bullet, \bullet})$ is the complex $K^n = \oplus_{p+q=n} C^{p,q}$, $d=d^h + (-1)^p d^v$. The sign twist is to make the squares in $C^{p, q}$ anticommute, so that $d^2=0$ in the total complex. After this operation, we get a short exact sequence of complexes $$ 0\to G[-1] \to K\to F\to 0 $$ where $K={\rm Tot}(C)$, and now $G[-1]$ means $G$ shifted by 1 to the right. Now you can check that </p>
<ol>
<li>$K= {\rm Cone}(\phi)[-1]$, </li>
<li>the boundary maps $\delta:H^i(F)\to H^{i+1}(G[-1])=H^i(G)$ in the long cohomology exact sequence equal $H^i(\phi)$. </li>
</ol>
<p>This explains why we have an exact triangle as desired.</p>
<p>I apologize for any potential sign errors.</p>
|
3,896,562 | <p>Suppose <span class="math-container">$f:\mathbb{S}^n\rightarrow Y$</span> is a continuous map null homotopic to a constant map <span class="math-container">$c$</span>. In other words: <span class="math-container">$F: f\simeq c$</span> , where <span class="math-container">$c(x)=y$</span></p>
<p>Now, we may extend <span class="math-container">$f$</span> to a continuous map <span class="math-container">$g: D^{n+1}\rightarrow Y$</span> by defining</p>
<p><span class="math-container">$g(x)= y$</span> if <span class="math-container">$0\leq ||x||\leq \frac{1}{2}$</span> and <span class="math-container">$F(\frac{x}{||x||},2-2||x||)$</span> if <span class="math-container">$\frac{1}{2}\leq ||x||\leq 1$</span></p>
<p>Now, on <span class="math-container">$||x||=\frac{1}{2}$</span>, <span class="math-container">$F(\frac{x}{||x||},2-2||x||)=F(\frac{x}{||x||},1)=c=y$</span>.</p>
<p>Hence <span class="math-container">$F$</span> is continuous by the gluing lemma.</p>
<p>I was wondering as to what the intuition is for constructing such a function, what geometrical clues allow one to define such a function <span class="math-container">$F$</span></p>
| Especially Lime | 341,019 | <p>Suppose not. Then for every <span class="math-container">$\delta>0$</span> there is a point <span class="math-container">$x\in[a,b]$</span> where <span class="math-container">$g(x)-f(x)<\delta$</span>. In particular, there is a sequence <span class="math-container">$x_n\in[a,b]$</span> with <span class="math-container">$g(x_n)-f(x_n)<1/n$</span>.</p>
<p>The sequence <span class="math-container">$x_n$</span> doesn't necessarily have a limit, but by Bolzano-Weierstrass there is a convergent subsequence <span class="math-container">$x_{n_i}\to x\in[a,b]$</span>. Now <span class="math-container">$g(x_{n_i})-f(x_{n_i})\to 0$</span>, so <span class="math-container">$g(x)=f(x)$</span>.</p>
|
3,931,672 | <p>Is there any bounded continuous map $f:A\to\mathbb{R}$ ($A$ open) which cannot be extended to the whole of $\mathbb{R}$?</p>
<p>This is a question posed by myself.
My attempt: Let $A=(1,2)$; then we can extend it. If $A$ is finitely many intervals it can be extended. If $A$ is countably many intervals then it can also be extended.</p>
<p>But the last claim is based on the fact that $A$ is a countable union of disjoint open intervals.</p>
<p>Am I right or wrong?</p>
| Bio | 568,617 | <p><span class="math-container">$\sin(1/x)$</span> is continuous and bounded on <span class="math-container">$(0,1)$</span>, but if you extend it, what is the limit at <span class="math-container">$x =0$</span>?</p>
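<p>Concretely, $\sin(1/x)$ takes the value $0$ along $x_n=1/(2\pi n)$ and the value $1$ along $x_n = 1/(2\pi n + \pi/2)$, two sequences tending to $0$, so no continuous extension exists at $x=0$; a small numerical illustration:</p>

```python
import math

for n in range(1, 6):
    a = 1 / (2 * math.pi * n)                    # sin(1/a) = sin(2*pi*n) = 0
    b = 1 / (2 * math.pi * n + math.pi / 2)      # sin(1/b) = 1
    assert abs(math.sin(1 / a)) < 1e-9
    assert abs(math.sin(1 / b) - 1) < 1e-9
print("sin(1/x) accumulates at both 0 and 1 as x -> 0")
```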
|
1,563,205 | <p>I have 3 points: $A(0;0;0), B(0;0;1), C(2;2;1) $. They exist on the plane.
I assumed that scalar product of the normal vector and a line which exists on the same plane will be equal to 0. Scalar product equals to $x*2+y*2+z*1=0$ where $x,y,z$ are coordinates of the normal vector. Finally i can get $x,y,z$ using a selection method. For example $x,y,z$ can be $0,1,-1$. However i think it's a wrong assumption.</p>
| zickens | 258,840 | <p>As long as there are enough points to define a plane, there <strong>will</strong> exist a normal vector to the plane; as a matter of fact, a plane is defined by
$$
\vec{N}\cdot \vec{v} = 0
$$
where $\vec{v}$ is any vector that goes from one point of the plane to another. We call a vector $\vec{N}$ that satisfies the above a <em>normal vector</em>.</p>
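<p>For the three points in the question, one such normal vector is the cross product of two in-plane vectors; a minimal sketch:</p>

```python
def sub(p, q):
    return [p[i] - q[i] for i in range(3)]

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(u[i] * v[i] for i in range(3))

A, B, C = [0, 0, 0], [0, 0, 1], [2, 2, 1]
AB, AC = sub(B, A), sub(C, A)
N = cross(AB, AC)                       # -> [-2, 2, 0]
assert dot(N, AB) == 0 and dot(N, AC) == 0
print("normal vector:", N)
```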
|
2,463,565 | <p>I want to use the fact that for a $(n \times n)$ nilpotent matrix $A$, we have that $A^n=0$, but we haven't yet introduced the minimal polynomials -if we had, I know how to prove this.</p>
<p>The definition for a nilpotent matrix is that there exists some $k\in \mathbb{N}$ such that $A^k=0$.</p>
<p>Any ideas?</p>
| Joppy | 431,940 | <p>Let $T: V \to V$ be any linear transformation. Then the following facts are true:</p>
<ol>
<li>For all $k \in \mathbb{N}$, $\operatorname{ker}(T^k) \subseteq \operatorname{ker}(T^{k+1})$.</li>
<li>If $\operatorname{ker}(T^k) = \operatorname{ker}(T^{k+1})$, then $\operatorname{ker}(T^k) = \operatorname{ker}(T^{k+m})$ for all $m \in \mathbb{N}$.</li>
</ol>
<p>From this, you should be able to see that the nilpotency degree is at most $n$.</p>
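<p>Both facts can be seen concretely on the $3\times 3$ shift matrix (a small sketch; the kernels of its powers are spanned by the first few standard basis vectors):</p>

```python
n = 3
# Shift matrix: N e1 = 0, N e2 = e1, N e3 = e2
N = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply(A, v):
    return [sum(A[i][k] * v[k] for k in range(n)) for i in range(n)]

zero_matrix = [[0] * n for _ in range(n)]
N2, N3 = matmul(N, N), matmul(matmul(N, N), N)
assert N2 != zero_matrix and N3 == zero_matrix   # nilpotency degree exactly n

# The kernel chain grows strictly: e1 dies under N, e2 under N^2, e3 under N^3
e = [[1 if i == j else 0 for i in range(n)] for j in range(n)]
assert apply(N, e[0]) == [0, 0, 0]
assert apply(N, e[1]) != [0, 0, 0] and apply(N2, e[1]) == [0, 0, 0]
assert apply(N2, e[2]) != [0, 0, 0] and apply(N3, e[2]) == [0, 0, 0]
print("ker N grows strictly until step n, where N^n = 0")
```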
|
3,735,904 | <p><span class="math-container">$\mathbf{Question:}$</span> Prove that <span class="math-container">$(A\cap C)-B=(C-B)\cap A$</span></p>
<p><span class="math-container">$\mathbf{My\ attempt:}$</span></p>
<p>Looking at LHS, assuming <span class="math-container">$(A\cap C)-B \neq \emptyset$</span></p>
<p>Let <span class="math-container">$x\in (A\cap C)-B$</span></p>
<p>This implies <span class="math-container">$x\in A$</span> and <span class="math-container">$x\in C$</span> and <span class="math-container">$x\notin B$</span></p>
<p>Looking at RHS, assuming <span class="math-container">$(C-B)\cap A \neq \emptyset$</span>,</p>
<p>Let <span class="math-container">$y \in (C-B)\cap A$</span></p>
<p>This implies <span class="math-container">$y\in C$</span> and <span class="math-container">$y\notin B$</span> and <span class="math-container">$y\in A$</span></p>
<p>By comparing the LHS and RHS, we find that:
<span class="math-container">$$
x,y\in A
$$</span></p>
<p><span class="math-container">$$
x,y\in C
$$</span></p>
<p><span class="math-container">$$
x,y\notin B
$$</span></p>
<p>Thus LHS = RHS.</p>
<p>Is this correct?</p>
| Graham Kemp | 135,106 | <p>Aside from the typo, yes. In short.</p>
<p><span class="math-container">$$\begin{align}&(A\cap C)\smallsetminus B \\ =~&\{x:(x\in A\wedge x\in C)\wedge x\notin B\}\\=~&\{x:(x\in C\wedge x\notin B)\wedge x\in A\}\\=~&(C\smallsetminus B)\cap A\end{align}$$</span></p>
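<p>The identity is also easy to spot-check with Python sets (illustration only; the derivation above is the proof):</p>

```python
import random

rng = random.Random(0)
universe = range(20)
for _ in range(200):
    A = {x for x in universe if rng.random() < 0.5}
    B = {x for x in universe if rng.random() < 0.5}
    C = {x for x in universe if rng.random() < 0.5}
    assert (A & C) - B == (C - B) & A
print("identity holds on 200 random triples")
```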
|
4,551,674 | <p>The question is:</p>
<blockquote>
<p><span class="math-container">$f: A\to R$</span> is a continuous, real-valued function, where <span class="math-container">$A\subseteq\mathbb{R}^n$</span>.</p>
<p>If <span class="math-container">$f(x)\to\infty$</span> as <span class="math-container">$\|x\|\to\infty,$</span> show that <span class="math-container">$f$</span> attains a minimum.</p>
</blockquote>
<p>Where I’ve gotten so far is I’ve written down the definition of this limit, and that tells that for all <span class="math-container">$M > 0$</span>, there exists some <span class="math-container">$L > 0$</span> such that if <span class="math-container">$\|x\| > L$</span>, then <span class="math-container">$f(x) > M$</span>.</p>
<p>I can kind of see that this means that I need to take some <span class="math-container">$[-L, L]\subseteq A$</span> and use E.V.T. for compact sets here, and prove that <span class="math-container">$f(x)$</span> needs to be larger in <span class="math-container">$[-L, L]^c$</span>, but I’m not really sure how to actually do any of that. Any help would be greatly appreciated.</p>
| Drew Brady | 503,984 | <p>If <span class="math-container">$f$</span> is continuous on <span class="math-container">$\mathbb{R}^n$</span> then the claim is true.</p>
<p>It suffices to show that there exists a compact set <span class="math-container">$K$</span> such that
<span class="math-container">$$\inf_K f = \inf_{\mathbb{R}^n} f$$</span>.</p>
<p>Indeed, then one can apply the extreme value theorem on <span class="math-container">$K$</span>.</p>
<p>However, if there does not exist such <span class="math-container">$K$</span> for the above statement, it means that one can find a sequence <span class="math-container">$x_n$</span> with <span class="math-container">$\|x_n\|_ \to \infty$</span> but <span class="math-container">$f(x_n) \not \to \infty$</span>. (For instance consider the ball of radius <span class="math-container">$r_n \to \infty$</span>.)</p>
|
3,933,851 | <p>Suppose <span class="math-container">$N$</span> is called a magic number if it is a positive integer and when you stick <span class="math-container">$N$</span> on the end of any positive integer, the resulting integer is divisible by <span class="math-container">$N.$</span> How many magic numbers are there less than <span class="math-container">$2100?$</span></p>
| lonza leggiera | 632,373 | <p><strong>Hint:</strong></p>
<p>If <span class="math-container">$\ N\,\big|\,10^rM+N\ $</span> for <em>any</em> positive integer <span class="math-container">$\ M\ $</span>, where <span class="math-container">$\ 10^r\ $</span> is the smallest power of <span class="math-container">$\ 10\ $</span> exceeding <span class="math-container">$\ N\ $</span>, what does that tell you about <span class="math-container">$\ N\ $</span>?</p>
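<p>The hint pins down the criterion: $N$ divides $10^r M + N$ for every positive integer $M$ exactly when $N \mid 10^r$. A brute-force sketch of that criterion (my addition):</p>

```python
def is_magic(N):
    # N | 10^r M + N for all M  <=>  N | 10^r M for all M  <=>  N | 10^r,
    # where 10^r is the smallest power of 10 exceeding N (r = digit count of N)
    r = len(str(N))
    return 10 ** r % N == 0

magic = [N for N in range(1, 2100) if is_magic(N)]
print(len(magic), "magic numbers below 2100:", magic)
```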
|
4,069,120 | <p>I am confused with the definition of 'basis'. <br/>
A basis <span class="math-container">$\beta$</span> for a vector space <span class="math-container">$V$</span> is a linearly independent subset of <span class="math-container">$V$</span> that generates <span class="math-container">$V$</span>. And span(<span class="math-container">$\beta$</span>) is the set consisting of all linear combinations of the vectors in <span class="math-container">$\beta$</span>. <br/>So from my understanding, just because all vectors in <span class="math-container">$V$</span> can be generated by <span class="math-container">$\beta$</span> doesn't necessarily mean that V=span(<span class="math-container">$\beta$</span>), since there might exist <span class="math-container">$b\in span(\beta)$</span> s.t. <span class="math-container">$b\notin V$</span> <br/>But I have learned that if <span class="math-container">$W\leq V$</span> and <span class="math-container">$\beta$</span> is a basis for both <span class="math-container">$V, W$</span>, then <span class="math-container">$V=W$</span> since <span class="math-container">$W=V=span(\beta)$</span>. (<span class="math-container">$V, W$</span> are finite dimensional vector space)<br/> This seems to imply that a vector space is equal to the span of its basis, which contradicts my understanding of its definition. <br/> I'd like to know which part of my understanding above is flawed.</p>
| esoteric-elliptic | 425,395 | <p>Yes, it is true that <span class="math-container">$$V = \text{span}\ \beta$$</span>
To address your concern, suppose <span class="math-container">$\beta = \{v_1,v_2,\ldots,v_n\}$</span>. If <span class="math-container">$b\in\text{span}\ \beta$</span>, then <span class="math-container">$b = \alpha_1v_1 + \ldots + \alpha_nv_n$</span> for scalars <span class="math-container">$\alpha_1,\alpha_2,\ldots,\alpha_n$</span>. <strong>Since <span class="math-container">$V$</span> is a vector space</strong>, and <span class="math-container">$v_1,v_2,\ldots,v_n \in V$</span>, <span class="math-container">$\alpha_1v_1 + \ldots + \alpha_nv_n\in V$</span>. This is from vector space axioms!</p>
<p>Not only is a vector space a span of its basis, but the basis of a vector space is also oftentimes defined as <strong>a</strong> <strong>minimal spanning set!</strong> In other words, the basis of a vector space (finite-dimensional) is a set of <em>minimal/least possible size</em>, such that the span of vectors in this set is exactly the entire space <span class="math-container">$V$</span>.</p>
|
101,526 | <p>Is there a notion of <i>"smooth bundle of Hilbert spaces"</i> (the base is a smooth finite dimensional manifold, and the fibers are Hilbert spaces) such that:</p>
<blockquote>
<p><b>1•</b> A smooth bundle of Hilbert spaces over a point is the same thing as a Hilbert space.</p>
<p><b>2•</b> If <span class="math-container">$E\to M$</span> is a smooth fiber bundle of orientable manifolds (say with compact fibers) equipped with a vertical volume form, then taking fiberwise <span class="math-container">$L^2$</span>-functions produces a smooth bundle of Hilbert spaces over <span class="math-container">$M$</span>.</p>
<p><b>3•</b> If the Hilbert space is finite dimensional, then this specializes to the usual notion of smooth vector bundle (with fiberwise inner product).</p>
</blockquote>
<p>I suspect that the answer is "no", because I couldn't figure out how it might work...<br>
If the answer is indeed no, then what is/are the best notion/s of smooth bundle of Hilbert spaces?</p>
| Peter Michor | 26,935 | <p>The answer is yes: </p>
<p>Let me sketch the proof. So $p:E\to M$ is the fiber bundle with typical fiber $F$ which is compact, connected (and oriented, for simplicity's sake), and you are given a vertical volume form $\mu$; so $\mu_x$ is a volume form on each fiber $E_x$ which depends smoothly on $x\in M$. First I choose another vertical volume form $\nu$ such that the volume of each fiber is 1, $\int_{E_x} \nu_x=1$. Take $\nu_x = \frac{\mu_x}{\int_{E_x}\mu_x}$, for example. </p>
<p>Now I construct the Hilbert bundle with fibers $L^2(E_{x},\nu_{x})$:
Fix a Riemannian metric $g$ on $F$ with $\int_F vol(g)=1$.
Let $U\subset M$ be open so that $\phi:U\times F \to E|U$ is a fiber respecting diffeomorphism.
For each $x\in M$ the Moser trick gives us a diffeomorphism $\psi_x:F\to F$ depending smoothly on $x\in U$ with $(\psi_x)^*(\phi_x)^*\nu_x = vol(g)$. This uses the Green function of the Hodge decomposition with respect to $g$ to choose a $(\dim(F)-1)$-form $\alpha_x$ with $d\alpha_x = \phi_x^*\nu_x-vol(g)$ which depends still smoothly on $x\in U$. </p>
<p>Edit: 43.7 in the book cited below contains Moser's trick in the form I just described.</p>
<p>Then the mapping $\bigsqcup_{x\in U}(x, L^2(E_{x},\nu_{x}))\ni (x,f) \mapsto (x,f\circ \phi_x \circ \psi_x^{-1})\in U\times L^2(F,vol(g))$
is an isometric trivialisation of the bundle
$\bigsqcup_{x\in M}(x, L^2(E_{x},\nu_{x}))$ over $U$.</p>
<p>Edit (more details):
The change of trivialisation is then of a similar form, $(x,f)\mapsto (x,f\circ \rho_x)$
for smooth $\rho:U\times F\to F$ such that $\rho_x$ is a $vol(g)$-preserving diffeomorphism for each $x\in U$.
That it is smooth $U\times L^2(F, vol(g)) \to U\times L^2(F,vol(g))$ is seen as follows:
It suffices to show that $(x,f)\mapsto \langle f\circ \rho_x, \lambda\rangle_{L^2}$ is smooth for all $\lambda$ in a subset $\subset L^2$ of linear functionals which together recognize bounded sets.
We may take $C^\infty(F)\subset L^2(F,vol(g))$ as this set. By one of the two smooth uniform boundedness theorems from the book below, it suffices to show that for each fixed $f\in L^2$ the function $U\to \mathbb R$ given by
$$x\mapsto \langle f\circ \rho_x, \lambda\rangle_{L^2} = \int_F f(\rho_x(u))\lambda(u)\,vol(g)(u)= \int_F f(v)\, \lambda(\rho_x^{-1}(v))\, ((\rho_x^{-1})^*vol(g))(v)$$
is smooth.
But this is now obvious, since $\lambda$ and $vol(g)$ are smooth. </p>
<p>The original inner product $\int_{E_x} f \mu_x$ is now a fiberwise Riemann metric on this Hilbert bundle. </p>
<p>I use calculus in infinite dimensions from:
Andreas Kriegl, Peter W. Michor: The Convenient Setting of Global Analysis. Mathematical Surveys and Monographs, Volume: 53, American Mathematical Society, Providence, 1997,
<a href="http://www.mat.univie.ac.at/~michor/apbookh-ams.pdf" rel="noreferrer">(pdf)</a>.</p>
<h1>Edit:</h1>
<p>As TaQ noted in his answer, my proof above is wrong. In fact, the answer is no, if you accept that the construction which I tried is the natural one. Namely, in the realm of Sobolev spaces, if $k>\frac{\dim(F)}2$, for the composition mapping $H^{k+l}(F,\mathbb R) \times H^k(F,F) \to H^k(F,\mathbb R)$, left translations are $C^l$ and right translations are smooth; i.e., composition is $C^l$ in the right hand side variable, and is smooth in the left hand side variable. This is folklore; for a detailed proof see </p>
<ul>
<li>H. Inci,T. Kappeler and P. Topalov, On the Regularity of the Composition of Diffeomorphisms, Memoirs of the American Mathematical Society, vol. 226 (American Mathematical Society, 2013). </li>
</ul>
<p>In the case above we have left translations, and no assumption for to be above the Sobolev threshold. </p>
<p>But if one asks for Sobolev spaces instead of $L^2$, one gets a $C^{k}$ vector bundle for $H^{m}$ with $m> k + \frac{\dim(F)}2$.</p>
|
3,414,208 | <blockquote>
<p>In the beginning A=0. Every time you toss a coin, if you get heads, you increase A by 1, otherwise decrease A by 1. Once you have tossed the coin 7 times or A=3, you stop. How many different sequences of coin tosses are there?</p>
</blockquote>
<p>The tricky part of this problem is the combination of the requirements, so it seems that recursion could be useful. If <span class="math-container">$P_n$</span> is the number of ways before A=3 and n flips, I'm not sure on the recurrence. Of course, this problem could also be solved with a tree, but I'm looking for a cleaner solution.</p>
| Parcly Taxel | 357,390 | <p>The sequences where <span class="math-container">$A=3$</span> is reached early can only be of length <span class="math-container">$3$</span> or <span class="math-container">$5$</span>, since <span class="math-container">$A$</span> changes parity at every flip. It is easy to list out these early stops:
<span class="math-container">$$HHH\qquad HHTHH\qquad HTHHH\qquad THHHH$$</span>
The <span class="math-container">$HHH$</span> prefix removes <span class="math-container">$2^4-1=15$</span> sequences from the <span class="math-container">$2^7=128$</span> <span class="math-container">$7$</span>-flip sequences, and each of the three length-<span class="math-container">$5$</span> prefixes removes <span class="math-container">$2^2-1=3$</span> sequences. Thus there are <span class="math-container">$128-15-3×3=104$</span> admissible sequences.</p>
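<p>As a sanity check, the stopping process can be enumerated by brute force (an illustrative sketch, not part of the original answer): walk the decision tree, stopping a branch as soon as $A=3$ or seven coins have been tossed, and count the leaves. Each leaf is one distinct toss sequence.</p>

```python
# Walk the decision tree of the game: stop when A == 3 or after 7 flips.
# Each leaf of the recursion is one distinct admissible toss sequence.
def count_sequences(a=0, flips=0):
    if a == 3 or flips == 7:
        return 1  # the sequence stops here
    # otherwise the next toss is either heads (+1) or tails (-1)
    return count_sequences(a + 1, flips + 1) + count_sequences(a - 1, flips + 1)

print(count_sequences())
```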
|
1,376,659 | <p>Let $5=\frac ab$
$\forall\ a,b\ \epsilon\ N$. And $(a,b)=1$ <Br>
Squaring both sides, <Br>
$25b^2=a^2$ <Br>
Thus, $25|a^2$; $25|a$ <Br>
So $a=25m$ <Br>
Substituting, $25b^2=25^2m^2$ <Br>
So $b^2=25m^2$ <Br>
So $25|b$ (By the same logic used before). <Br>
But our assumption is proved to be wrong, because $25$ comes out to be a common factor. So contradiction, proving that $5$ is not rational. So how is it possible?</p>
| MJD | 25,554 | <p>The answer is no; John can't even fill up the topmost $7\times 11\times 1$ slice of the $7\times 11\times 9$ box. Consider just the top $7\times 11$ face of this box; look just at this face and ignore the rest of the box. A solution to the problem would fill up this $7\times 11$ rectangle with large $3\times3$ rectangles and small $3\times 1$ rectangles. But $7\times 11$ is not a multiple of $3$.</p>
|
2,936,269 | <p>How do you simplify: <span class="math-container">$$\sqrt{9-6\sqrt{2}}$$</span></p>
<p>A classmate of mine changed it to <span class="math-container">$$\sqrt{9-6\sqrt{2}}=\sqrt{a^2-2ab+b^2}$$</span> but I'm not sure how that helps or why it helps.</p>
<p>This questions probably too easy to be on the Math Stack Exchange but I'm not sure where else to post it.</p>
| user587054 | 587,054 | <p>Try to use the formula your classmate gave. In this situation, <span class="math-container">$$9-6\sqrt2={\sqrt3}^2-2{\sqrt{3\times6}}+{\sqrt6}^2\Rightarrow(1)$$</span> That is because <span class="math-container">$6{\sqrt2}=2{\sqrt{3\times6}}$</span>. Expression (1) now looks similar to <span class="math-container">$a^2-2ab+b^2$</span>, where <span class="math-container">$a=\sqrt3$</span> and <span class="math-container">$b=\sqrt6$</span>. Using this we can conclude that <span class="math-container">$$9-6\sqrt2={\sqrt3}^2-2{\sqrt{3\times6}}+{\sqrt6}^2=(\sqrt3-\sqrt6)^2$$</span> We can substitute in the original expression: <span class="math-container">$$\sqrt{9-6\sqrt2}=\sqrt{(\sqrt3-\sqrt6)^2}=-(\sqrt3-\sqrt6)=\sqrt6-\sqrt3,$$</span> since <span class="math-container">$\sqrt3<\sqrt6$</span> makes <span class="math-container">$\sqrt3-\sqrt6$</span> negative and the square root returns the absolute value. The simplest form will be <span class="math-container">$\sqrt6-\sqrt3$</span>.</p>
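<p>A quick numerical check of the denesting (illustrative only, not part of the original answer):</p>

```python
import math

# Compare the nested radical with its denested form numerically.
lhs = math.sqrt(9 - 6 * math.sqrt(2))
rhs = math.sqrt(6) - math.sqrt(3)
print(lhs, rhs)
```

<p>Both values come out to about $0.7174$.</p>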
|
126,983 | <p>I am working on an integral on the following trigonometric functions</p>
<p>$$\int_{-\pi}^\pi \frac{\cos[(4m+2)x] \cos[(4m+1)x]}{\cos x}dx$$</p>
<p>where $m$ is positive integer. I am running the following code in mathematica </p>
<pre><code>Assuming[Element[m, Integers] && (m > 0),
Integrate[
Cos[(4 m + 2) x] Cos[(4 m + 1) x]*1/Cos[x], {x, -π, π}]]
</code></pre>
<p>It gives me result of $\pi$. Since $m$ could be any positive number, I expect the integral should be $\pi$ if I replace $m$ with an integer before the integral. However, I got zero if I replace $m$ with number first. </p>
| Dr. Wolfgang Hintze | 16,361 | <p><strong>Solution #2: proof</strong></p>
<p>Here's the missing proof that the integral</p>
<p>$$f(\text{m})=\int_{-\pi }^{\pi } \sec (x) \cos ((4 m+1) x) \cos ((4 m+2) x) \, dx$$</p>
<p>is zero for m = 0, 1, 2, ...</p>
<p>Following the idea of Yarchik in <a href="https://mathematica.stackexchange.com/questions/126984/how-to-compute-the-integral-of-trigonometic-function-with-multiple-angle/126995#126995">How to compute the integral of trigonometic function with multiple angle</a>
we observe that the integrand can be represented as the finite sum</p>
<pre><code>s = -Sum[(-1)^n Cos[2 n (x)], {n, 1, 4 m + 1}];
</code></pre>
<p>Indeed</p>
<pre><code>FullSimplify[s == Cos[x (4 m + 1)] Cos[x (4 m + 2)] Sec[x], m ∈ Integers]
(* True *)
</code></pre>
<p>Now the integral over <code>s</code> can be done for each summand separately which is</p>
<pre><code>Integrate[Cos[2 n x], {x, -π, π}]
Simplify[%, {n ∈ Integers, n > 0}]
(* Out[288]= Sin[2 n π]/n *)
(* Out[289]= 0 *)
</code></pre>
<p>Now a finite sum of zeroes is zero. Hence the integral over s is zero and so is the original integral. QED.</p>
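<p>The identity behind the proof can also be spot-checked numerically outside Mathematica. A small Python sketch (a spot check at sample points away from the zeros of $\cos x$, not a proof):</p>

```python
import math

def lhs(m, x):
    # The original integrand: cos((4m+2)x) cos((4m+1)x) / cos(x)
    return math.cos((4 * m + 2) * x) * math.cos((4 * m + 1) * x) / math.cos(x)

def rhs(m, x):
    # The finite cosine sum used in the proof: -sum_{n=1}^{4m+1} (-1)^n cos(2 n x)
    return -sum((-1) ** n * math.cos(2 * n * x) for n in range(1, 4 * m + 2))

# Spot-check the identity at points away from the zeros of cos(x).
err = max(abs(lhs(m, x) - rhs(m, x))
          for m in (1, 2, 3)
          for x in (0.1, 0.7, 1.3, 2.0, 2.9))
print(err)
```

<p>The printed discrepancy is at round-off level; and since each $\cos(2nx)$ integrates to zero over $[-\pi,\pi]$, so does the integrand.</p>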
<p><em>Remark</em> </p>
<p>Examples of finite sums over trigonometric functions of the type discussed here can be found in Gradshteyn/Ryshik 1.341</p>
|
126,983 | <p>I am working on an integral on the following trigonometric functions</p>
<p>$$\int_{-\pi}^\pi \frac{\cos[(4m+2)x] \cos[(4m+1)x]}{\cos x}dx$$</p>
<p>where $m$ is positive integer. I am running the following code in mathematica </p>
<pre><code>Assuming[Element[m, Integers] && (m > 0),
Integrate[
Cos[(4 m + 2) x] Cos[(4 m + 1) x]*1/Cos[x], {x, -π, π}]]
</code></pre>
<p>It gives me result of $\pi$. Since $m$ could be any positive number, I expect the integral should be $\pi$ if I replace $m$ with an integer before the integral. However, I got zero if I replace $m$ with number first. </p>
| mikado | 36,788 | <p>When I execute the code, I get an error. Have we found some sort of bug?</p>
<pre><code>$Version
Assuming[Element[m, Integers] && (m > 0),
Integrate[
Cos[(4 m + 2) x] Cos[(4 m + 1) x]*1/Cos[x], {x, -π, π}]]
(* "11.0.0 for Linux x86 (64-bit) (July 28, 2016)" *)
</code></pre>
<blockquote>
<pre><code>(* Throw::sysexc: Uncaught SystemException returned to top level. Can be caught with Catch[…, _SystemException]. *)
</code></pre>
</blockquote>
<pre><code>(* SystemException["MemoryAllocationFailure"] *)
</code></pre>
|
3,671,608 | <p>Find the number of ways to distribute <span class="math-container">$7$</span> red balls, <span class="math-container">$8$</span> blue ones and <span class="math-container">$9$</span> green ones to two people so that each person gets <span class="math-container">$12$</span> balls. The balls of one color are indistinguishable.</p>
<p>My approach is to partition the balls among these two people in <span class="math-container">$\binom{24}{12,12}$</span> ways, and then divide by <span class="math-container">$2!$</span>. Unfortunately it's wrong, could you please give me any help?</p>
| h-squared | 728,189 | <p>Without any restrictions,</p>
<p><span class="math-container">$$r+b+g=12$$</span></p>
<p>The number of ways to distribute them is <span class="math-container">$$\binom{14}{2}$$</span></p>
<p>But we have counted ways in which <span class="math-container">$g\gt 9$</span>.</p>
<p>Fix <span class="math-container">$10$</span> green balls (so that <span class="math-container">$g\ge 10$</span>, i.e. <span class="math-container">$G=g-10\ge 0$</span>):
<span class="math-container">$$r+b+G=2$$</span></p>
<p>The number of ways to do this is
<span class="math-container">$$\binom{4}{2}$$</span></p>
<p>Similarly, in the beginning we counted ways in which <span class="math-container">$r\gt 7$</span> and <span class="math-container">$b\gt 8$</span>. Fixing <span class="math-container">$8$</span> red balls (<span class="math-container">$R=r-8$</span>):</p>
<p><span class="math-container">$$R+b+g=4$$</span></p>
<p><span class="math-container">$$\binom{6}{2}$$</span></p>
<p>Fixing <span class="math-container">$9$</span> blue balls (<span class="math-container">$B=b-9$</span>):</p>
<p><span class="math-container">$$r+B+g=3$$</span></p>
<p><span class="math-container">$$\binom{5}{2}$$</span></p>
<p>Also, we don't have to worry about ways in which any <span class="math-container">$2$</span> or all <span class="math-container">$3$</span> types of balls exceed <span class="math-container">$7,8,9$</span>, since two overflows together would require more than <span class="math-container">$12$</span> balls.</p>
<p>Finally the answer is</p>
<p><span class="math-container">$$\binom{14}{2}-\binom{4}{2} -\binom{5}{2} - \binom{6}{2}=60$$</span></p>
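<p>The count can be cross-checked by brute force (an illustrative snippet, not part of the original answer): enumerate person 1's share $(r,b,g)$ directly; person 2 automatically receives the remaining $12$ balls.</p>

```python
# Enumerate person 1's possible share (r, b, g); person 2 gets the rest.
count = sum(1
            for r in range(8)       # 0..7 red balls
            for b in range(9)       # 0..8 blue balls
            for g in range(10)      # 0..9 green balls
            if r + b + g == 12)
print(count)
```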
|
4,105,812 | <p>could someone help me check if my proof is valid?</p>
<p>Use direct proof to prove the following theorem: <span class="math-container">$$ A \lor (B \rightarrow A), B \vdash_R A $$</span></p>
<p>We aren't allowed to use proof by resolution, we can only use logic axioms and inference rules such as hypothetical and disjunctive syllogism, constructive and destructive dilemma, modus ponens and modus tolens. Also, we can use similar equivalencies like contraposition <span class="math-container">$(A \Rightarrow B) \Leftrightarrow (\lnot B \Rightarrow \lnot A)$</span></p>
<p>Here is my proof:</p>
<ol>
<li><span class="math-container">$A \lor(B \rightarrow A)$</span>, premise</li>
<li><span class="math-container">$B$</span>, premise</li>
</ol>
<ol start="3">
<li><span class="math-container">$\lnot A$</span>, (assumption)</li>
<li><span class="math-container">$\lnot A \rightarrow(B \rightarrow A)$</span>, elimination of disjunction from (1)</li>
<li><span class="math-container">$B\rightarrow A $</span>, (modus ponens from (3), (4))</li>
<li><span class="math-container">$A$</span>, (modus ponens from (2), (5))</li>
</ol>
<p>The reason I'm asking is because I'm not sure if it is valid, since I made an assumption that A is incorrect and used it as a premise until I got to a contradiction at <span class="math-container">$6.$</span></p>
<p>Since I have used an incorrect assumption as a premise, should I start anew but using the assumption that A is true, albeit me getting a contradiction?</p>
| user577215664 | 475,762 | <p><span class="math-container">$$5x^3(y')^2+5x^2yy'-3=0$$</span>
<span class="math-container">$$xy' + y - \dfrac{3}{5x^2y'} = 0$$</span>
Change the variable <span class="math-container">$u=1/x$</span>
<span class="math-container">$$y'=\dfrac {dy}{d1/x}\dfrac{d1/x}{dx}=-x^{-2}\dfrac {dy}{d1/x}$$</span>
<span class="math-container">$$y'=-u^2\dfrac {dy}{du}$$</span>
The ODE becomes:
<span class="math-container">$$y=uy' - \dfrac{3}{5y'} $$</span>
<span class="math-container">$$y=uy'+f(y')$$</span>
This is <a href="https://en.wikipedia.org/wiki/Clairaut%27s_equation" rel="nofollow noreferrer">Clairaut's differential equation .</a></p>
|
2,648,549 | <p>Let $\tau_{ij}$ be a transposition of degree n. What does it mean when one says that $\tau_{ij}=\tau_{ji}$? Thanks in advance!</p>
| D F | 501,035 | <p>We know that the product of all eigenvalues is equal to the $\det$ and the sum of eigenvalues is equal to the trace of a matrix. Hence $\lambda_1 \lambda_2 = 1 - p - q$ and $\lambda_1+\lambda_2 = 2 -p - q$. I think you are able to continue</p>
|
2,648,549 | <p>Let $\tau_{ij}$ be a transposition of degree n. What does it mean when one says that $\tau_{ij}=\tau_{ji}$? Thanks in advance!</p>
| amd | 265,466 | <p>A stochastic matrix always has $1$ as an eigenvalue. If it’s row-stochastic, all of its rows sum to $1$, but summing rows is equivalent to right-multiplying by the column vector that consists entirely of $1$s, hence it’s a right eigenvector with eigenvalue $1$. (If the matrix is column-stochastic, you can apply the same argument to its transpose, and use the fact that transposes have identical eigenvalues.) </p>
<p>For any matrix, once you know all but one eigenvalue (counting multiplicities), you get the last one “for free” because the sum of the eigenvalues is equal to the trace of the matrix. So, for this matrix, you know that the second eigenvalue must be $(1-p)+(1-q)-1 = 1-p-q.$</p>
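<p>A concrete check on a sample $2\times 2$ row-stochastic matrix $\begin{pmatrix}1-p&p\\q&1-q\end{pmatrix}$ (an illustrative sketch; the values of $p$ and $q$ are arbitrary): compute the trace and determinant from the entries and solve the characteristic polynomial.</p>

```python
import math

# A sample row-stochastic matrix [[1-p, p], [q, 1-q]]; p, q are arbitrary.
p, q = 0.3, 0.2
M = [[1 - p, p], [q, 1 - q]]

trace = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Eigenvalues are the roots of x^2 - trace*x + det = 0.
disc = math.sqrt(trace ** 2 - 4 * det)
lam1, lam2 = (trace + disc) / 2, (trace - disc) / 2

print(lam1, lam2)  # expect 1 and 1 - p - q = 0.5
```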
|
52,874 | <p>Consider a coprime pair of integers $a, b.$ As we all know ("Bezout's theorem") there is a pair of integers $c, d$ such that $ac + bd=1.$ Consider the smallest (in the sense of Euclidean norm) such pair $c_0, d_0$, and consider the ratio $\frac{\|(c_0, d_0)\|}{\|(a, b)\|}.$ The question is: what is the statistics of this ratio as $(a, b)$ ranges over all <em>visible</em> pairs in, for example, the square $1\leq a \leq N, 1 \leq b \leq N?$</p>
<p>Experiment shows the following amazing histogram:<img src="https://dl.dropbox.com/u/5188175/histogram.jpg" alt="alt text"></p>
<p><strong>EDIT</strong> by popular demand: the histogram is for an experiment for $N=1000.$ The $x$ axis is the ratio, the $y$ axis is the number of points in the bin. The total number of points is $1000000/\zeta(2),$ so there are $100$ bins each with around $6000$ points.</p>
<p>But no immediate proof leaps to mind.</p>
| Aaron Meyerowitz | 8,008 | <p>This is more a few comments than an answer (since the question seems well answered). I assume that the pair $(a,b)=(1,1)$ was discarded, it would give a value $\frac{\sqrt{2}}{2}$ outside the range of the rest. </p>
<p>Taking instead the $10^6$ points in a quarter disk of radius 1128 gives almost the same bin sizes (maybe not surprising since there is a good overlap). This includes the features that the bin 0-0.005 is smaller than the rest and that 0.330-0.335 is rather deficient and then 0.335-0.34 is higher then average.</p>
<p>This is an indication of a slight repulsion from simple fractions. I repeated the experiment using 2520 bins and using rounding. I also used $0 \le a,b \le 5000$ giving about the same expected number of points per bin: 6030 or in my case 3015 since I only used $a<b$ (the situation being symmetric.)</p>
<p>The least filled bins were $\tiny{[0,0],[1/3,2187],[1/4,2479],[1/6,2761],[2/5,2773],[1/5,2774],[275/1008,2865],[229/1008,2875],[323/840,2891],[229/1260,2893],[1/8,2895]}$ $\tiny{[3/8,2897],[1/7,2897],[3/7,2900],[2/7,2902],[97/840,2906],[155/1008,2910],[1/10,2913],[3/10,2915],[37/630,2925],[59/560,2927],[139/315,2928]}$ $\tiny{[221/560,2936],[127/720,2939],[1/9,2939],[4/9,2940],[199/630,2940],[2/9,2940],[877/5040,2947],[611/1680,2948],[1/315,2948],[157/315,2951]}$ $\tiny{[31/360,2952],[229/2520,2956],[97/1260,2956],[5/14,2959],[1643/5040,2960],[3/14,2960],[1/14,2962]}$</p>
<p>Here 275/1008 is nearly 3/11 and 229/1008 nearly 5/22.</p>
<p>The most filled bins were</p>
<p>$\tiny{[277/720, 3123], [83/720, 3124], [229/840, 3139], [1259/2520, 3146], [191/840, 3151], [1/1680, 3212], [1259/5040, 3227], [1261/5040, 3229]}$ ${\tiny [1/5040, 3281], [1681/5040, 3316], [1679/5040, 3316], [1/2520, 3656], [2519/5040, 4141]}$</p>
<p>The final 5 are the bins adjacent to 0,1/2 and 1/3 (a bin for 1/2 would also be empty)</p>
|
18,659 | <p>I'm looking for a fast algorithm for generating all the partitions of an integer up to a certain maximum length; ideally, I don't want to have to generate <em>all</em> of them and then discard the ones that are too long, as this will take around 5 times longer in my case.</p>
<p>Specifically, given <span class="math-container">$L = N(N+1)$</span>, I need to generate all the partitions of <span class="math-container">$L$</span> that have at most <span class="math-container">$N$</span> parts. I can't seem to find any algorithms that'll do this directly; all I've found that seems relevant is <a href="https://doi.org/10.1007/BF02241987" rel="nofollow noreferrer">this</a> paper, which I unfortunately can't seem to access via my institution's subscription. It <a href="https://web.archive.org/web/20141013222856/http://www.site.uottawa.ca:80/%7Eivan/F49-int-part.pdf" rel="nofollow noreferrer">apparently</a><sup>1</sup> documents an algorithm that generates the partitions of each individual length, which could presumably be easily adapted to my needs.</p>
<p>Does anyone know of any such algorithms?</p>
<p><sup>1</sup><em>Zoghbi, Antoine; Stojmenović, Ivan</em>, <a href="https://dx.doi.org/10.1080/00207169808804755" rel="nofollow noreferrer"><strong>Fast algorithms for generating integer partitions</strong></a>, Int. J. Comput. Math. 70, No. 2, 319-332 (1998). <a href="https://zbmath.org/?q=an:0918.68040" rel="nofollow noreferrer">ZBL0918.68040</a>, <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=1712501" rel="nofollow noreferrer">MR1712501</a>. <a href="https://web.archive.org/web/20141013222856/https://www.site.uottawa.ca/%7Eivan/F49-int-part.pdf" rel="nofollow noreferrer">Wayback Machine</a></p>
| Peter Taylor | 5,676 | <p>You can do it recursively. Let $f(n, maxcount, maxval)$ return the list of partitions of $n$ containing no more than $maxcount$ parts and in which each part is no more than $maxval$.</p>
<p>If $n = 0$ you return a single list containing the empty partition.</p>
<p>If $n > maxcount * maxval$ you return the empty list.</p>
<p>If $n = maxcount * maxval$ you return a single list consisting of the obvious solution.</p>
<p>Otherwise you make a series of recursive calls to $f(n - x, maxcount - 1, x)$.</p>
|
18,659 | <p>I'm looking for a fast algorithm for generating all the partitions of an integer up to a certain maximum length; ideally, I don't want to have to generate <em>all</em> of them and then discard the ones that are too long, as this will take around 5 times longer in my case.</p>
<p>Specifically, given <span class="math-container">$L = N(N+1)$</span>, I need to generate all the partitions of <span class="math-container">$L$</span> that have at most <span class="math-container">$N$</span> parts. I can't seem to find any algorithms that'll do this directly; all I've found that seems relevant is <a href="https://doi.org/10.1007/BF02241987" rel="nofollow noreferrer">this</a> paper, which I unfortunately can't seem to access via my institution's subscription. It <a href="https://web.archive.org/web/20141013222856/http://www.site.uottawa.ca:80/%7Eivan/F49-int-part.pdf" rel="nofollow noreferrer">apparently</a><sup>1</sup> documents an algorithm that generates the partitions of each individual length, which could presumably be easily adapted to my needs.</p>
<p>Does anyone know of any such algorithms?</p>
<p><sup>1</sup><em>Zoghbi, Antoine; Stojmenović, Ivan</em>, <a href="https://dx.doi.org/10.1080/00207169808804755" rel="nofollow noreferrer"><strong>Fast algorithms for generating integer partitions</strong></a>, Int. J. Comput. Math. 70, No. 2, 319-332 (1998). <a href="https://zbmath.org/?q=an:0918.68040" rel="nofollow noreferrer">ZBL0918.68040</a>, <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=1712501" rel="nofollow noreferrer">MR1712501</a>. <a href="https://web.archive.org/web/20141013222856/https://www.site.uottawa.ca/%7Eivan/F49-int-part.pdf" rel="nofollow noreferrer">Wayback Machine</a></p>
| Joseph Malkevitch | 1,369 | <p>This article about Gray codes includes partitions. The idea behind a Gray code is to enumerate a cyclic sequence of some combinatorial collection of objects so that the "distance" between consecutive items in the list are "close." <a href="http://linkinghub.elsevier.com/retrieve/pii/0196677489900072" rel="nofollow">http://linkinghub.elsevier.com/retrieve/pii/0196677489900072</a>
Savage also has other survey articles about Gray codes that include partitions. <a href="http://reference.kfupm.edu.sa/content/s/u/a_survey_of_combinatorial_gray_codes__213043.pdf" rel="nofollow">http://reference.kfupm.edu.sa/content/s/u/a_survey_of_combinatorial_gray_codes__213043.pdf</a></p>
|
18,659 | <p>I'm looking for a fast algorithm for generating all the partitions of an integer up to a certain maximum length; ideally, I don't want to have to generate <em>all</em> of them and then discard the ones that are too long, as this will take around 5 times longer in my case.</p>
<p>Specifically, given <span class="math-container">$L = N(N+1)$</span>, I need to generate all the partitions of <span class="math-container">$L$</span> that have at most <span class="math-container">$N$</span> parts. I can't seem to find any algorithms that'll do this directly; all I've found that seems relevant is <a href="https://doi.org/10.1007/BF02241987" rel="nofollow noreferrer">this</a> paper, which I unfortunately can't seem to access via my institution's subscription. It <a href="https://web.archive.org/web/20141013222856/http://www.site.uottawa.ca:80/%7Eivan/F49-int-part.pdf" rel="nofollow noreferrer">apparently</a><sup>1</sup> documents an algorithm that generates the partitions of each individual length, which could presumably be easily adapted to my needs.</p>
<p>Does anyone know of any such algorithms?</p>
<p><sup>1</sup><em>Zoghbi, Antoine; Stojmenović, Ivan</em>, <a href="https://dx.doi.org/10.1080/00207169808804755" rel="nofollow noreferrer"><strong>Fast algorithms for generating integer partitions</strong></a>, Int. J. Comput. Math. 70, No. 2, 319-332 (1998). <a href="https://zbmath.org/?q=an:0918.68040" rel="nofollow noreferrer">ZBL0918.68040</a>, <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=1712501" rel="nofollow noreferrer">MR1712501</a>. <a href="https://web.archive.org/web/20141013222856/https://www.site.uottawa.ca/%7Eivan/F49-int-part.pdf" rel="nofollow noreferrer">Wayback Machine</a></p>
| Community | -1 | <p>This can be done with a very simple modification to the ruleAsc algorithm at <a href="http://jeromekelleher.net/category/combinatorics.html" rel="nofollow">http://jeromekelleher.net/category/combinatorics.html</a></p>
<pre><code> def ruleAscLen(n, l):
a = [0 for i in range(n + 1)]
k = 1
a[0] = 0
a[1] = n
while k != 0:
x = a[k - 1] + 1
y = a[k] - 1
k -= 1
while x <= y and k < l - 1:
a[k] = x
y -= x
k += 1
a[k] = x + y
yield a[:k + 1]
</code></pre>
<p>This generates all partitions of n into at most l parts (changing your notation around a bit). The algorithm is constant amortised time, so the time spent per partition is constant, on average. </p>
|
18,659 | <p>I'm looking for a fast algorithm for generating all the partitions of an integer up to a certain maximum length; ideally, I don't want to have to generate <em>all</em> of them and then discard the ones that are too long, as this will take around 5 times longer in my case.</p>
<p>Specifically, given <span class="math-container">$L = N(N+1)$</span>, I need to generate all the partitions of <span class="math-container">$L$</span> that have at most <span class="math-container">$N$</span> parts. I can't seem to find any algorithms that'll do this directly; all I've found that seems relevant is <a href="https://doi.org/10.1007/BF02241987" rel="nofollow noreferrer">this</a> paper, which I unfortunately can't seem to access via my institution's subscription. It <a href="https://web.archive.org/web/20141013222856/http://www.site.uottawa.ca:80/%7Eivan/F49-int-part.pdf" rel="nofollow noreferrer">apparently</a><sup>1</sup> documents an algorithm that generates the partitions of each individual length, which could presumably be easily adapted to my needs.</p>
<p>Does anyone know of any such algorithms?</p>
<p><sup>1</sup><em>Zoghbi, Antoine; Stojmenović, Ivan</em>, <a href="https://dx.doi.org/10.1080/00207169808804755" rel="nofollow noreferrer"><strong>Fast algorithms for generating integer partitions</strong></a>, Int. J. Comput. Math. 70, No. 2, 319-332 (1998). <a href="https://zbmath.org/?q=an:0918.68040" rel="nofollow noreferrer">ZBL0918.68040</a>, <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=1712501" rel="nofollow noreferrer">MR1712501</a>. <a href="https://web.archive.org/web/20141013222856/https://www.site.uottawa.ca/%7Eivan/F49-int-part.pdf" rel="nofollow noreferrer">Wayback Machine</a></p>
| Fred Schoen | 428,361 | <p>I was looking for an algorithm that generates all partitions of <span class="math-container">$L$</span> into <span class="math-container">$N$</span> parts in the multiplicity representation, Knuth calls it the "part-count form". I only found algorithm Z from A. Zoghbi's 1993 thesis <a href="http://dx.doi.org/10.20381/ruor-11312" rel="nofollow noreferrer">http://dx.doi.org/10.20381/ruor-11312</a> to output partitions in this form but it generates all partitions of <span class="math-container">$L$</span>. I coded it up as <code>partition(L)</code> in C++ and added a slight modification to only generate partitions into <span class="math-container">$N$</span> parts, <code>partition(L, N)</code>. I put the code with both functions on github as a <a href="https://gist.github.com/fredRos/1be056502742dba0753828e7852f9986" rel="nofollow noreferrer">gist</a>. Both have the same update to go from one partition to the next and just need a different initialization.</p>
<p>Zogbi claims that the multiplicity form is faster, his algorithm Z is just a transform of algorithm H mentioned in <a href="http://cs.utsa.edu/%7Ewagner/knuth/fasc3b.pdf" rel="nofollow noreferrer">Knuth's TAOCP 4</a> to partition <span class="math-container">$L$</span> into <span class="math-container">$N$</span> parts in standard representation but Z was at least 2x faster than H in their tests, albeit on 1993 hardware :)</p>
|
1,567,152 | <blockquote>
<p>Theorem: $X$ is a finite Hausdorff space. Show that the topology is discrete.</p>
</blockquote>
<p>My attempt: $X$ is Hausdorff then $T_2 \implies T_1$ Thus for any $x \in X$ we have $\{x\}$ is closed. Thus $X \setminus \{x\}$ is open. Now for any $y\in X \setminus \{x\}$ and $x$ using Hausdorff property, we get $\{x\}$ is open.
Am I right till here? And how to proceed further? </p>
| nlmath | 876,389 | <p>Let <span class="math-container">$(X,\tau)$</span> be a finite topological space. Let <span class="math-container">$x\in X$</span>. If the singleton <span class="math-container">$\{x\}$</span> is not an open set, then <span class="math-container">$(X,\tau)$</span> cannot be Hausdorff. This is shown as follows. Let <span class="math-container">$B$</span> be the intersection of all open subsets of <span class="math-container">$X$</span> that include <span class="math-container">$x$</span>. Since <span class="math-container">$X$</span> is finite, there are a finite number of such subsets, therefore <span class="math-container">$B$</span> is open (<span class="math-container">$B\in \tau$</span>). If <span class="math-container">$\{x\}$</span> is not open, <span class="math-container">$B$</span> (since it is open) must include, in addition to <span class="math-container">$x$</span>, another element <span class="math-container">$y\neq x$</span>. So any open set <span class="math-container">$U_x$</span> containing <span class="math-container">$x$</span> also includes <span class="math-container">$y$</span>. So there is no open set <span class="math-container">$U_y$</span> containing <span class="math-container">$y$</span> that is disjoint from any open set <span class="math-container">$U_x$</span> including <span class="math-container">$x$</span>, even though <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are distinct elements. Hence <span class="math-container">$(X,\tau)$</span> cannot be Hausdorff.</p>
<p>So if <span class="math-container">$(X,\tau)$</span> is a finite Hausdorff topological space, then the singleton <span class="math-container">$\{x\}$</span> is open for each element <span class="math-container">$x\in X$</span>. Since <span class="math-container">$\tau$</span> includes all arbitrary unions of such singletons, <span class="math-container">$\tau$</span> is the power set for <span class="math-container">$X$</span>, and therefore <span class="math-container">$(X,\tau)$</span> is a discrete topological space.</p>
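<p>The two properties can be checked by brute force on tiny finite spaces (an illustrative sketch; the encoding of a topology as a set of <code>frozenset</code>s is my choice). The Sierpiński space is the standard example of a finite space that is not Hausdorff, and indeed it is not discrete.</p>

```python
from itertools import product

def is_hausdorff(X, tau):
    """Every pair of distinct points has disjoint open neighbourhoods."""
    return all(
        any(x in U and y in V and not (U & V) for U, V in product(tau, tau))
        for x in X for y in X if x != y
    )

def is_discrete(X, tau):
    """Every singleton is open (hence every subset, as a union of singletons)."""
    return all(frozenset({x}) in tau for x in X)

X = {0, 1}
discrete = {frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})}
sierpinski = {frozenset(), frozenset({0}), frozenset({0, 1})}

print(is_hausdorff(X, discrete), is_discrete(X, discrete))      # True True
print(is_hausdorff(X, sierpinski), is_discrete(X, sierpinski))  # False False
```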
|
2,796,694 | <p>So for my latest physics homework question, I had to derive an equation for the terminal velocity of a ball falling in some gravitational field assuming that the air resistance force was equal to some constant <em>c</em> multiplied by $v^2.$ <br> So first I started with the differential equation: <br>
$m\frac{dv}{dt}=-mg-cv^2$
<br>
Rearranging to get:
<br>
$\frac{dv}{dt}=-\left(g+\frac{cv^2}{m}\right)$
<br>
From here I tried solving it and ended up with: <br>
$\frac{\sqrt{m}}{\sqrt{c}\sqrt{g}}\arctan \left(\frac{\sqrt{c}v}{\sqrt{g}\sqrt{m}}\right)+C=-t$
<br>
I rearranged this to get:
$v\left(t\right)=\left(\frac{\sqrt{g}\sqrt{m}\tan \left(\frac{\left(-C\sqrt{c}\sqrt{g}-\sqrt{c}\sqrt{g}t\right)}{\sqrt{m}}\right)}{\sqrt{c}}\right)$ <br>
In order to calculate the terminal velocity I took the limit as t approaches infinity:<br>
$\lim _{t\to \infty }\left(\frac{\sqrt{g}\sqrt{m}\tan \:\left(\frac{\left(-C\sqrt{c}\sqrt{g}-\sqrt{c}\sqrt{g}t\right)}{\sqrt{m}}\right)}{\sqrt{c}}\right)$ <br>
This reduces to:
$\frac{\sqrt{g}\sqrt{m}\tan \left(\infty \right)}{\sqrt{c}}$ <br>
The problem with this is that $\tan(\infty)$ is undefined. <br>
Where did I go wrong? Could someone please help me properly solve this equation.
<br>
Cheers, Gabriel.</p>
| Phil H | 554,494 | <p>Write the differential equation as a rate of change of velocity with respect to just aerodynamic drag. Then solve for the time it takes for the drag to equal $mg$. </p>
<p>$$\frac{dv}{dt} = \frac{cv^2}{m}$$
$$\frac{v^{-2}}{c}\,dv = \frac{dt}{m}$$
$$-\frac{1}{cv} = \frac{t}{m} + C$$
Taking the integration constant $C = 0$ gives.......</p>
<p>$$v = -\frac{m}{ct}$$
When $cv^2 = mg$ (drag balancing weight), $v = -\sqrt{\frac{gm}{c}}$
$$-\sqrt{\frac{gm}{c}} = -\frac{m}{ct}$$
$$t = \frac{m}{c\sqrt{\frac{gm}{c}}}$$
Substituting back gives the terminal speed.......$$|v| = \sqrt{\frac{gm}{c}}$$
Does this seem reasonable? Assume $c = .5\cdot C_d\cdot \rho\cdot A = .5\cdot 0.3\cdot 1.225\cdot 0.1 = 0.018$ and $m = 0.5\ kg$</p>
<p>$$v = \sqrt{\frac{9.8\cdot 0.5}{0.018}} = 16.5\ m/s$$</p>
<p>Thinking about this it would have been easier just to set $cv^2 = mg$ to get $$v = \sqrt{\frac{gm}{c}}$$</p>
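For what it's worth, the asker's $\tan$ came from keeping the drag term's sign fixed as $-cv^2$; with drag opposing the downward motion the equation is $m\,dv/dt = -mg + cv^2$, whose solution involves $\tanh$ and settles at the same terminal speed. A quick explicit-Euler integration (a sketch; $m$, $g$, $c$ are the sample values from the answer above) confirms $\sqrt{gm/c}$:

```python
import math

m, g, c = 0.5, 9.8, 0.018      # sample values from the answer above
v_term = math.sqrt(g * m / c)  # predicted terminal speed, about 16.5 m/s

# Euler-integrate m dv/dt = -mg + c v^2: drag opposes the (downward) motion,
# which is the sign choice that turns the asker's tan into the physical tanh.
v, dt = 0.0, 1e-4
for _ in range(600_000):       # 60 simulated seconds at dt = 1e-4, ample to settle
    v += dt * (-g + (c / m) * v * v)

speed_error = abs(abs(v) - v_term)
```

The velocity settles at $-\sqrt{gm/c}$ (downward), matching the closed-form terminal speed to numerical precision.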
|
3,235,300 | <p>I tried the following: whenever <span class="math-container">$x > y$</span>, <span class="math-container">$p(x) - p(y) = (5/13)^x (1-(13/5)^{(x-y)}) + (12/13)^x (1- (13/12)^{(x-y)}) > 0 $</span>.
But here I don't understand why the answer is no.</p>
| Peter Szilas | 408,605 | <p>It suffices to show that <span class="math-container">$y=a^x$</span>, <span class="math-container">$0<a<1$</span>, is strictly decreasing (why?)</p>
<p>Set <span class="math-container">$e^{-b}:=a$</span>, <span class="math-container">$b>0$</span>.</p>
<p><span class="math-container">$y= e^{-bx}$</span> is strictly decreasing since</p>
<p><span class="math-container">$y'(x)=( -b)e^{-bx} <0$</span>, <span class="math-container">$x$</span> real.</p>
|
2,405,505 | <p>How to prove that the infinite product $\prod_{n=1}^{+\infty} \left(1-\frac{1}{2n^2}\right)$ is positive ?</p>
<p>Thanks</p>
| Bernard | 202,857 | <p>Just prove the associated log series:
$$\sum_{n=1}^{\infty}\log\Bigl(1-\frac1{2n^2}\Bigr)$$
converges; the product is then the exponential of the sum, hence positive. Observe that the general term of this series satisfies
$$\log\Bigl(1-\frac1{2n^2}\Bigr)\sim_\infty -\frac1{2n^2}.$$</p>
|
2,405,505 | <p>How to prove that the infinite product $\prod_{n=1}^{+\infty} \left(1-\frac{1}{2n^2}\right)$ is positive ?</p>
<p>Thanks</p>
| H. H. Rugh | 355,946 | <p>Hint: Only the lower limit is a problem. Look e.g. at the product for $n\geq 2$, you could prove:
$$ \prod_{n\geq 2} (1-\frac{1}{2 n^2}) \geq \prod_{n\geq 2} (1-\frac{1}{ n^2})=\prod_{n\geq 2} \frac{(n-1)(n+1)}{ n \cdot n}=\frac12$$
(the last being a telescopic product). So your product is $\geq \frac14$.</p>
|
2,405,505 | <p>How to prove that the infinite product $\prod_{n=1}^{+\infty} \left(1-\frac{1}{2n^2}\right)$ is positive ?</p>
<p>Thanks</p>
| Jack D'Aurizio | 44,121 | <p>The exact value of such product can be derived from the Weierstrass product for the sine function, as already shown by Raffaele. As an alternative approach, we may notice that
$$ \prod_{n\geq 1}\left(1-\frac{1}{2n^2}\right)^2 = \frac{1}{4}\prod_{n\geq 2}\left(1-\frac{1}{n^2}+\frac{1}{4n^4}\right)\geq\frac{1}{4}\prod_{n\geq 2}\frac{n-1}{n}\cdot\frac{n+1}{n} $$
where the last product is a telescopic product:
$$ \prod_{n=2}^{N}\frac{n-1}{n}\cdot\frac{n+1}{n}=\frac{N+1}{2N}\stackrel{N\to +\infty}{\longrightarrow}\frac{1}{2} $$
hence it follows that the value of the original product is $\color{red}{\large\geq\frac{1}{\sqrt{8}}}$.<br>
Such lower bound turns out to be pretty accurate.</p>
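The bound is easy to check numerically. The sketch below (illustrative; `N` is an arbitrary cutoff) compares a partial product against the lower bound $1/\sqrt{8}$ and against the exact Weierstrass value $\sin(\pi/\sqrt{2})/(\pi/\sqrt{2})$ mentioned at the start:

```python
import math

# Partial product of prod_{n>=1} (1 - 1/(2 n^2)); the Weierstrass product
# sin(pi x)/(pi x) = prod (1 - x^2/n^2), evaluated at x = 1/sqrt(2),
# gives the exact value of the infinite product.
N = 200_000
partial = 1.0
for n in range(1, N + 1):
    partial *= 1 - 1 / (2 * n * n)

z = math.pi / math.sqrt(2)
exact = math.sin(z) / z          # about 0.35815
lower_bound = 1 / math.sqrt(8)   # about 0.35355, the bound derived above
```

The partial product sits just above $1/\sqrt{8}$ and within a few parts in $10^5$ of the exact value, so the lower bound is indeed tight.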
|
1,898,803 | <p>So, I looked up this question from G.H. Hardy's <em>A Course of Pure Mathematics</em> and found one examination question from the Cambridge Mathematical Tripos and it has baffled me ever since. I am supposed to sketch</p>
<p>$\lim_{n\to\infty}\dfrac{x^{2n}\sin{(\pi x/2)}+x^2}{x^{2n}+1}$</p>
<p>I have found out that this approaches $\sin(\pi x/2)$ (not sure whether am I right or not) and the graphs look like the image attached below but I am told that there is one discontinuity. So, how do I sketch it correctly?</p>
<p><a href="https://i.stack.imgur.com/A0r8f.png" rel="nofollow noreferrer">The possible graphs of the sequence of function</a></p>
| iamvegan | 118,029 | <p>For $|x|>1$, divide numerator and denominator by $x^{2n}$:
\begin{align*}
\lim \limits_{n \rightarrow \infty} \frac{x^{2n}\sin(\pi x/2)+x^2}{x^{2n}+1} &= \lim \limits_{n \rightarrow \infty} \frac{\sin(\pi x/2)+x^{2-2n}}{1+x^{-2n}}\\
&=\sin(\pi x/2).
\end{align*}
For $|x|<1$, $x^{2n}\to 0$ and the limit is $x^2$; at $x=\pm 1$ the limit is $\frac{\sin(\pi x/2)+x^2}{2}$. These pieces agree at $x=1$ (all equal $1$), but at $x=-1$ the left limit is $-1$, the right limit is $1$, and the value is $0$: that is the one discontinuity.</p>
|
4,332,812 | <p>I came across this series.</p>
<p><span class="math-container">$$\sum_{n=1}^\infty \frac{n!}{n^n}x^n$$</span></p>
<p>I was able to calculate its radius of convergence. If my calculations are OK, it is the number <span class="math-container">$e$</span>. Is that correct?</p>
<p>Then I started wondering if the series is convergent or divergent for <span class="math-container">$x=\pm e$</span>.</p>
<p>But I think I don't have any means to determine that. Is there any known (standard undergraduate calculus) theorem or theory which I can use for determining that? And also, if the series is convergent for <span class="math-container">$x=\pm e$</span>, how do I calculate its sum?</p>
<p>If there's no general approach here, is there any trick which can be applied in this particular case?</p>
| Koro | 266,435 | <p>Note the limit <span class="math-container">$\frac{(n!)^{1/n}}{n}\to \frac 1e$</span>.</p>
<p>By the <a href="https://en.wikipedia.org/wiki/Cauchy%E2%80%93Hadamard_theorem" rel="nofollow noreferrer">Cauchy–Hadamard theorem</a>, the above limit gives the radius of convergence of the series <span class="math-container">$\sum_{n=1}^\infty \frac{n!}{n^n}x^n$</span> as <span class="math-container">$e$</span>.</p>
<p>At <span class="math-container">$x=\pm e$</span>, <span class="math-container">$|\frac{n!}{n^n}e^n|\sim\sqrt {2\pi n}$</span> (by <a href="https://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="nofollow noreferrer">Stirling's approximation</a>), so the terms do not tend to <span class="math-container">$0$</span> and the series cannot converge at <span class="math-container">$x=\pm e$</span> (for convergence of <span class="math-container">$\sum y_n$</span> it is necessary that <span class="math-container">$y_n\to 0$</span>).</p>
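Both limits are easy to verify numerically via `math.lgamma` (which computes $\ln\Gamma$, so `lgamma(n+1)` is $\ln n!$ without overflow); the sketch below uses an arbitrary $n = 2000$:

```python
import math

n = 2000

# (n!)^(1/n) / n -> 1/e: compute via logs to avoid overflowing n!
root_ratio = math.exp(math.lgamma(n + 1) / n) / n

# At x = e the general term n! e^n / n^n grows like sqrt(2 pi n),
# so it cannot tend to 0 and the series diverges there.
term = math.exp(math.lgamma(n + 1) + n - n * math.log(n))
stirling = math.sqrt(2 * math.pi * n)
```

At $n=2000$ the ratio is already within $1\%$ of $1/e$, and the term at $x=e$ matches $\sqrt{2\pi n}\approx 112$ rather than decaying.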
|