| qid int64 (1 to 4.65M) | question large_string (lengths 27 to 36.3k) | author large_string (lengths 3 to 36) | author_id int64 (-1 to 1.16M) | answer large_string (lengths 18 to 63k) |
|---|---|---|---|---|
321,916 | <p>In order to define the Lebesgue integral, we have to develop some measure theory. This takes some effort in the classroom, after which we need the additional effort of defining the Lebesgue integral (which also adds a layer of complexity). Why do we do it this way? </p>
<p>The first question is to what extent are the notions different. I believe that a bounded measurable function can have a non-measurable "area under graph" (it should be doable by transfinite induction), but I am not completely sure, so treat it as a part of my question. (EDIT: I was very wrong. The two notions coincide and the argument is very straightforward, see Nik Weaver's answer and one of the comments).</p>
<p>What are the advantages of the Lebesgue integration over area-under-graph integration? I believe that behaviour under limits may be indeed worse. Is it indeed the main reason? Or maybe we could develop integration with this alternative approach?</p>
<p>Note that if a non-negative function has a measurable area under graph, then the area under the graph is the same as the Lebesgue integral by <a href="https://en.wikipedia.org/wiki/Fubini%27s_theorem" rel="noreferrer">Fubini's theorem</a>, so the two integrals shouldn't behave very differently.</p>
<p>EDIT: I see that my question might be poorly worded. By "area under the graph", I mean the measure of the set of points <span class="math-container">$(x,y) \in E \times \mathbb{R}$</span> where <span class="math-container">$E$</span> is a measure space and <span class="math-container">$y \leq f(x)$</span>. I assume that <span class="math-container">$f$</span> is non-negative, but this is also assumed in the standard definition of the Lebesgue integral. We extend this to an arbitrary function by looking at the positive and the negative part separately.</p>
<p>The motivation for my question concerns mostly teaching. It seems that the struggle to define measurable functions, understand their behaviour, etc. might be really alleviated if, directly after defining measure, we defined the integral without introducing any additional notions.</p>
| Piotr Hajlasz | 121,665 | <p>Actually, in the following book the Lebesgue integral is defined the way you suggested:</p>
<p><strong>Pugh, C. C.</strong> <A HREF="https://link.springer.com/book/10.1007%2F978-3-319-17771-7" rel="noreferrer"><em>Real mathematical analysis</em></A>.
Second edition. Undergraduate Texts in Mathematics. Springer, Cham, 2015. </p>
<p>First we define the planar Lebesgue measure <span class="math-container">$m_2$</span>. Then we define the Lebesgue integral as follows:</p>
<blockquote>
<p><strong>Definition.</strong> The <em>undergraph</em> of <span class="math-container">$f:\mathbb{R}\to[0,\infty)$</span> is <span class="math-container">$$ \mathcal{U}f=\{(x,y)\in\mathbb{R}\times [0,\infty):0\leq y<f(x)\}. $$</span> The
function <span class="math-container">$f$</span> is <em>Lebesgue measurable</em> if <span class="math-container">$\mathcal{U}f$</span> is Lebesgue measurable
with respect to the planar Lebesgue measure and then we define <span class="math-container">$$
\int_{\mathbb{R}} f=m_2(\mathcal{U}f). $$</span></p>
</blockquote>
<p>I find this approach quite nice if you want to have a quick introduction to the Lebesgue integration. For example: </p>
<blockquote>
<p>You get the monotone convergence theorem for free: it is a
straightforward consequence of the fact that the measure of the union
of an increasing sequence of sets is the limit of measures.</p>
</blockquote>
<p>As pointed out by Nik Weaver, the equality <span class="math-container">$\int(f+g)=\int f+\int g$</span> is not obvious, but it can be proved quickly with the following trick:
<span class="math-container">$$
T_f:(x,y)\mapsto (x,f(x)+y)
$$</span>
maps the set <span class="math-container">$\mathcal{U}g$</span> to a set disjoint from <span class="math-container">$\mathcal{U}f$</span>,
<span class="math-container">$$
\mathcal{U}(f+g)=\mathcal{U}f \sqcup T_f(\mathcal{U}g)
$$</span>
and then</p>
<blockquote>
<p><span class="math-container">$$ \int_{\mathbb{R}} f+g= \int_{\mathbb{R}} f +\int_{\mathbb{R}} g $$</span></p>
</blockquote>
<p>follows immediately once you prove that the sets <span class="math-container">$\mathcal{U}g$</span> and
<span class="math-container">$T_f(\mathcal{U}g)$</span> have the same measure. Pugh proves it on one page.</p>
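<p>To see the undergraph definition in action, here is a small numerical sketch (my own illustration, not from Pugh's book): counting grid cells whose center lies below the graph approximates <span class="math-container">$m_2(\mathcal{U}f)$</span>, and for <span class="math-container">$f(x)=x^2$</span> on <span class="math-container">$[0,1]$</span> the count should approach <span class="math-container">$\int_0^1 x^2\,dx = 1/3$</span>.</p>

```python
# Grid approximation of the planar measure of the undergraph
# U(f) = {(x, y) : 0 <= y < f(x)}, here for f(x) = x^2 on [0, 1].
# The true value is the integral of x^2 over [0, 1], i.e. 1/3.

def undergraph_area(f, a, b, height, n=400):
    """Count the n-by-n grid cells whose center lies in U(f)."""
    dx = (b - a) / n
    dy = height / n
    inside = 0
    for i in range(n):
        x = a + (i + 0.5) * dx
        fx = f(x)
        for j in range(n):
            y = (j + 0.5) * dy
            if y < fx:
                inside += 1
    return inside * dx * dy

area = undergraph_area(lambda x: x * x, 0.0, 1.0, 1.0)
print(area)  # close to 1/3; refining the grid improves the estimate
```

<p>Refining the grid drives the cell count toward the true planar measure, which is exactly how the undergraph integral is meant to be read.</p>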
|
1,251,537 | <p>$f:[a,b] \to \mathbb{R}$ is continuous and $\int_a^b{f(x)g(x)\,dx}=0$ for every continuous function $g:[a,b]\to \mathbb{R}$ with $g(a)=g(b)=0$. Must $f$ vanish identically?</p>
<hr>
<p>Using integration by parts I got the form
$\int_a^b g(x)f(x)-g'(x)F(x)\,dx=0$, where $F'(x)=f(x)$.</p>
| Kitegi | 120,267 | <p>By the Stone–Weierstrass theorem, we can show that if $f$ is a continuous function on $[a,b]$ and $$\forall n \in \Bbb N:\ \int_a^b f(x)x^n\,dx=0$$
then $f=0$.</p>
<p>Since we're only allowed to use functions that vanish at $a$ and $b$, we instead have $$\forall n \in \Bbb N:\ \int_a^b f(x)(x-a)(b-x)x^n\,dx=0$$
So the continuous function $x\mapsto f(x)(x-a)(b-x)$ is identically $0$.</p>
<p>So for $x\neq a,b$, $f(x)=0$.</p>
<p>By continuity, $f=0$.</p>
|
1,251,537 | <p>$f:[a,b] \to \mathbb{R}$ is continuous and $\int_a^b{f(x)g(x)\,dx}=0$ for every continuous function $g:[a,b]\to \mathbb{R}$ with $g(a)=g(b)=0$. Must $f$ vanish identically?</p>
<hr>
<p>Using integration by parts I got the form
$\int_a^b g(x)f(x)-g'(x)F(x)\,dx=0$, where $F'(x)=f(x)$.</p>
| TonyK | 1,508 | <p>In particular, putting $g(x)=(x-a)(b-x)f(x)$, we have $\int_a^b{(x-a)(b-x)f(x)^2\,dx}=0$. The integrand is continuous and non-negative on $[a,b]$, so it must vanish identically; hence $f(x)=0$ for all $x\in(a,b)$, and by continuity $f$ is identically zero on $[a,b]$.</p>
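<p>To illustrate the mechanism numerically (my own addition, with an arbitrary nonzero choice of <span class="math-container">$f$</span>): for a nonzero continuous <span class="math-container">$f$</span>, the integral above comes out strictly positive, which is exactly what forces <span class="math-container">$f=0$</span> in the argument.</p>

```python
# Midpoint-rule approximation of \int_0^1 (x - a)(b - x) f(x)^2 dx with
# a = 0, b = 1 and the (arbitrary, nonzero) choice f(x) = x - 1/2.
# The exact value is 1/120, strictly positive, so this f could not
# satisfy the hypothesis of the problem.

def weighted_square_integral(f, a=0.0, b=1.0, n=10000):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        total += (x - a) * (b - x) * f(x) ** 2 * h
    return total

val = weighted_square_integral(lambda x: x - 0.5)
print(val)  # about 1/120 = 0.00833...
```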
|
2,669,524 | <p>I am reading <strong>Algebraic Geometry</strong>, Vol. 1, by <em>Kenji Ueno</em>. My problem is to show that
$$k\left[ x,y,t\right]/\left(xy-t\right)\otimes_{k\left[t\right]}k\left[t\right]/\left(t-a\right) \simeq k\left[x,y\right]/\left(xy-a\right) $$
where $k$ is a field and $a$ is an element in $k$. I don't understand how it works. I tried to use the result $$R\otimes_A A/I =R/IR,$$ where $A$ and $R$ are commutative rings and $I$ is an ideal of $A$. Then I obtain
$$LHS=\left[k\left[ x,y,t\right]/\left(xy-t\right)\right]/\left[\left(t-a\right)k\left[ x,y,t\right]/\left(xy-t\right)\right].$$ But now it seems hard to manage. Please help me. Thank you very much for your help.</p>
| Allawonder | 145,126 | <p>Contrary to what you might find in many contemporary textbooks (especially if they're elementary), you can well think of an infinitesimal as an entity and manipulate it according to its own rules (this has been rigorously set down in the last century in the so-called nonstandard calculus, which in fact I think is the more intuitive way to do calculus as manifest in their use by earlier mathematicians and many today).</p>
<p>I shall address each of your paragraphs in turn:</p>
<p>Yes, you're right. The symbol <span class="math-container">$\Delta$</span> is used in this context to symbolise a difference, or change, in a certain quantity, while the symbol <span class="math-container">$\rm d$</span> is used to mean that this difference is now to be thought of as infinitely small, or infinitesimal. Well, what could this mean? Infinitesimals capture our idea of things like an instant of time, a point of a line, a plane section of a volume, etc. Thus, it's often more easy to think of a varying infinitesimal -- that is, a continuous stream of infinitesimals, as it were. And that's often how we use them since we often take the differentials of varying quantities, the differentials of constants being zero. A semiformal way of talking about these things is to think of them as objects that combine the idea of a quantity with the fact that this quantity is ever <em>approaching</em> zero -- note, not the quantity itself, or the limit (those are functions and numbers respectively), but a combination of the idea of <em>approaching zero</em> and <em>quantity,</em> just like the vectors of physics were new objects to talk of the idea of directed quantities.</p>
<p>So, yes, the notion of size in the usual sense is inapt for an infinitesimal since it's not a number or numerical function, but an instantaneous part of a quantity, as it were; an evanescent quantity. So you can't have a change in volume and a differential change in volume to be ever equal, since <span class="math-container">$\Delta V$</span> is some number when evaluated, and <span class="math-container">$\rm d v$</span> is infinitely small. One may only "approximate" the latter by the former.</p>
<p>Integrating a differential (over a continuum) does not mean you've assigned a size to this differential. When you transform an object, it's no longer that object. So, whenever you integrate a continuum of differentials, for example, the result is no longer a differential, but a definite number. You've effected a transformation which may be intuitively thought of as accumulating all the infinitesimal changes of a certain quantity over a period. For example, think of water flowing from a tap; then the water emerging from the mouth can be thought of as being made up of infinitely many infinitely thin slices of water continuously merged -- these are the differentials, which between any two points in time accumulate to form a definite mass of water. So, a differential and the continuous sum (integral) of differentials are not the same thing, but rather the former have been changed into the latter.</p>
|
976,910 | <p>I'm having a small issue with a certain question. </p>
<p>Given a parametric equation of a plane $x=5-2a-3b$, $y=3-4a+2b$, $z=7-6a-2b$, find a point $P$ on the plane so that the position vector of $P$ is perpendicular to the plane.</p>
<p>How would you go about this for a parametric equation? I think I could convert this to a cartesian equation and dissect an answer that way, but how can I do this without having to convert it?</p>
<p>The hint it gives on the page is that $P$ has the vector $\overrightarrow{OP}$, so I'd imagine the first thing I would do is use the dot product with dummy variables for the $i$, $j$ and $k$ values of $P$. Am I on the right track?</p>
<p>Thanks in advance.</p>
| layman | 131,740 | <p>What can we deduce from $K$ not having any limit points? </p>
<p>Well, that means if we take $x \in X - K$, then we can find some $\epsilon > 0$ such that $B(x, \epsilon) \cap K = \emptyset$ (otherwise, $x$ would be a limit point of $K$). </p>
<p>But $B(x, \epsilon) \cap K = \emptyset \implies B(x, \epsilon) \subseteq X - K$, and so for each $x \in X - K$, we found an open ball around $x$ entirely contained in $X - K$, which shows $X - K$ is open, and thus $K$ is closed.</p>
|
746,180 | <p>I'm working through Stephen Abbott's wonderful <em>Understanding Analysis</em> in preparation for entering a math undergrad degree this fall. A personal note about me: Friends and family tell me I tend to be periphrastic; if there's a long-winded, inelegant way of explaining myself, I'll find it. As I work through Abbott's book, I wonder: Are all the steps I'm taking (even to solve simple problems near the beginning of the book) necessary, or is my brain just doing what it always does by finding the most round-about way to do things? So I'd like to have someone critique a simple proof to see if I'm doing something wrong, or if this really is the way things are done in real analysis. </p>
<blockquote>
<p>Exercise 2.2.5. Let $\lfloor x\rfloor$ be the greatest integer less than or equal to $x$. Find $\lim_{n\to \infty} a_n$ and supply proofs if $a_n=\lfloor \frac 1n \rfloor$.</p>
</blockquote>
<p>In the preceding chapter, Abbott has already shown that $\lim_{n \to \infty} \frac 1n =0$, so we can take this as given. Then we note that since $n \lt (n+1)$ $\forall n \in \mathbb N$, we have $\frac 1n \gt \frac 1{n+1} \gt 0$, $\forall n \in \mathbb N$. And since by inspection $a_1 = 1$, we have </p>
<p>$$1\gt a_{n+1} \gt a_{n+2} \gt a_{n+3} \gt \cdots \gt0,$$
so that $a_n = 0$ for $ n \ge2$. Finally, since $|a_n-0|=0$ for $n \ge2$, we must have $|a_n - 0| \lt \epsilon$, $\forall \epsilon \gt 0$ and $n \ge2$. Therefore $\lim_{n \to \infty} a_n=0$.</p>
<p>Is this correct? Have I included any unnecessary steps? It just seems so pathologically nit-picky! And I feel the same way about most of the other exercises in the book. Thanks for your help!</p>
| John Joy | 140,156 | <p>$$\int \frac{1}{x^2 \sqrt{x^2+4}}\,dx = \int \frac{1}{8(\frac{x}{2})^2 \sqrt{(\frac{x}{2})^2+1}}\,dx= \int \frac{1}{8\tan^2(\tan^{-1}(\frac{x}{2})) \sqrt{\tan^2(\tan^{-1}(\frac{x}{2}))+1}}\,dx$$</p>
<p>$$=\int \frac{1}{8\tan^2(\tan^{-1}(\frac{x}{2})) \sqrt{\tan^2(\tan^{-1}(\frac{x}{2}))+1}}\,dx\,\frac{\frac{d(\tan^{-1}(\frac{x}{2}))}{dx}}{\frac{d(\tan^{-1}(\frac{x}{2}))}{dx}}$$</p>
<p>$$=\int \frac{1}{8\tan^2(\tan^{-1}(\frac{x}{2})) \sqrt{\tan^2(\tan^{-1}(\frac{x}{2}))+1}}\,\frac{d(\tan^{-1}(\frac{x}{2}))}{\frac{1}{2}\frac{1}{(\frac{x}{2})^2+1}}$$</p>
<p>$$=\int \frac{(\frac{x}{2})^2+1}{4\tan^2(\tan^{-1}(\frac{x}{2})) \sqrt{\tan^2(\tan^{-1}(\frac{x}{2}))+1}}\,d(\tan^{-1}(\tfrac{x}{2}))$$</p>
<p>$$=\int \frac{\tan^2(\tan^{-1}(\frac{x}{2}))+1}{4\tan^2(\tan^{-1}(\frac{x}{2})) \sqrt{\tan^2(\tan^{-1}(\frac{x}{2}))+1}}\,d(\tan^{-1}(\tfrac{x}{2}))$$</p>
<p>$$=\int \frac{\sec^2(\tan^{-1}(\frac{x}{2}))}{4\tan^2(\tan^{-1}(\frac{x}{2})) \sec(\tan^{-1}(\frac{x}{2}))}\,d(\tan^{-1}(\tfrac{x}{2}))$$</p>
<p>$$=\int \frac{1}{4}\cot(\tan^{-1}(\tfrac{x}{2}))\csc(\tan^{-1}(\tfrac{x}{2}))\,d(\tan^{-1}(\tfrac{x}{2}))$$
$$=-\frac{1}{4}\csc(\tan^{-1}(\tfrac{x}{2}))+C$$
Skipping ahead:
$$=-\frac{1}{4}\sqrt{1+\frac{1}{\tan^2(\tan^{-1}(\frac{x}{2}))}}+C$$
$$=-\frac{1}{4}\sqrt{1+\frac{1}{x^2/4}}+C$$
$$=-\frac{\sqrt{x^2+4}}{4x}+C$$</p>
<p>Does everyone see now why we use substitutions?</p>
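<p>As a sanity check (my addition, not part of the derivation above), a central difference of the final antiderivative should reproduce the original integrand:</p>

```python
import math

def integrand(x):
    return 1.0 / (x * x * math.sqrt(x * x + 4.0))

def antiderivative(x):
    # The closed form obtained above, dropping the constant C.
    return -math.sqrt(x * x + 4.0) / (4.0 * x)

# Central difference F'(x) ~ (F(x+h) - F(x-h)) / (2h) should match the integrand.
h = 1e-6
for x in [0.5, 1.0, 2.0, 3.7]:
    numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-6
print("closed form matches the integrand at the sampled points")
```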
|
148,420 | <p>A simple concept but I've not been able to solve it. I'm trying to create a stack of 2D plots in 3D space using Mathematica 9. <strong>This is not a parametric plot</strong>, but I'm creating it from an array of vectors (imported
.csv file). The ListPlot3D function creates a filled mesh but what I want is this type of plot (created by HYRY in StackExchange: <em>'Matplotlib plot pulse propagation in 3d'</em>):</p>
<p><a href="https://i.stack.imgur.com/iv2Ty.png" rel="noreferrer"><img src="https://i.stack.imgur.com/iv2Ty.png" alt="example taken from StackExchange question: 'Matplotlib plot pulse propagation in 3d'"></a></p>
<p>I have tried changing the function options for ListPlot3D and was going to create an array of plot images (.jpg) to stack in 3D, each one having an alpha value - but that would not be good. Any help is appreciated.</p>
<p>Thank you,</p>
<p>Marc </p>
| yohbs | 367 | <p>You can also use <a href="http://reference.wolfram.com/language/ref/ParametricPlot3D.html" rel="nofollow noreferrer"><code>ParametricPlot3D</code></a>:</p>
<pre><code>f[x_, y_] := Exp[-x^2 - y^2/(4 + x^2/4) + x y];
ParametricPlot3D[Table[{x, y, f[x, y]}, {y, -5, 5}], {x, -5, 5},
PlotRange -> All]
</code></pre>
<p><a href="https://i.stack.imgur.com/o7h2U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o7h2U.png" alt="enter image description here"></a></p>
|
138,203 | <p>Assume I export some data into a file like:</p>
<pre><code>data = {{1, 2, 3}, {4, 5, 6}};
Export["test.h5",data,{"Datasets","/h1"}];
</code></pre>
<p>How can I append <code>{7, 8, 9}</code> to the "test.h5" (by directly writing in the test.h5) such that the results for </p>
<pre><code>Import["test.h5", {"Datasets", "/h1"}]
</code></pre>
<p>will be</p>
<pre><code>{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
</code></pre>
<p>I would prefer to add it at the end of /h1.</p>
| Albert Retey | 169 | <p>Extending an existing dataset is not possible with the standard Mathematica <code>Export</code>, at least not with any version up to 11.0.1. What does work is to add additional datasets one by one like this:</p>
<pre><code>filename = FileNameJoin[{$HomeDirectory, "Desktop", "tst.h5"}]
Export[filename, {{1, 2, 3}}, {"Datasets", "one"}]
Export[filename, {{1, 2, 3}}, {"Datasets", "two"}, "Append" -> True]
</code></pre>
<p>using: </p>
<pre><code>Import[filename]
</code></pre>
<p>you can verify that there are now two datasets in the file. Of course that is not the same thing, but it is the best that is currently possible and probably good enough for some use cases. If you need more, there are some libraries which might make it possible to do what you need. Alternatively you could write your own function to access HDF5 files via e.g. LibraryLink. If you search for HDF5 on this site you will find several questions and answers to guide you either way.</p>
|
138,203 | <p>Assume I export some data into a file like:</p>
<pre><code>data = {{1, 2, 3}, {4, 5, 6}};
Export["test.h5",data,{"Datasets","/h1"}];
</code></pre>
<p>How can I append <code>{7, 8, 9}</code> to the "test.h5" (by directly writing in the test.h5) such that the results for </p>
<pre><code>Import["test.h5", {"Datasets", "/h1"}]
</code></pre>
<p>will be</p>
<pre><code>{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
</code></pre>
<p>I would prefer to add it at the end of /h1.</p>
| Thunderbird | 67,254 | <p>It seems that this has changed in Version 12.0.0.0:</p>
<pre><code>"Append" -> True
</code></pre>
<p>is now implemented as</p>
<pre><code>OverwriteTarget -> "Append"
</code></pre>
|
3,162,464 | <p>I need help making an OGF for <span class="math-container">$1 + x^i + x^{2i}+...+x^{ki}$</span>. I already know how to verify that <span class="math-container">$1 +x +x^2+...+x^k$</span> can be written as <span class="math-container">$({1-x^{k+1}})/({1-x})$</span>. I'm wondering if there is any connection between the two.</p>
<p>Any help would be greatly appreciated. Thanks!</p>
| PrincessEev | 597,568 | <p>Notice that</p>
<p><span class="math-container">$$1 + x^i + x^{2i} + x^{3i} + ... + x^{ki} = 1 + (x^i)^1 + (x^i)^2 + (x^i)^3 + ... + (x^i)^k$$</span></p>
<p>This is a finite geometric series with ratio <span class="math-container">$x^i$</span>, and thus</p>
<p><span class="math-container">$$1 + x^i + x^{2i} + x^{3i} + ... + x^{ki} = 1 + (x^i)^1 + (x^i)^2 + (x^i)^3 + ... + (x^i)^k = \frac{1 - (x^i)^{k+1}}{1 - x^i}$$</span></p>
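<p>A quick numerical spot-check of this identity (my addition; the values of <span class="math-container">$i$</span>, <span class="math-container">$k$</span> and <span class="math-container">$x$</span> are arbitrary):</p>

```python
# Spot-check of 1 + x^i + x^(2i) + ... + x^(ki) = (1 - x^(i(k+1))) / (1 - x^i),
# i.e. the usual finite geometric series taken in the variable x^i.

def series(x, i, k):
    return sum(x ** (i * j) for j in range(k + 1))

def closed_form(x, i, k):
    return (1 - x ** (i * (k + 1))) / (1 - x ** i)

for i, k in [(2, 3), (3, 5), (5, 4)]:
    for x in [0.3, 0.9, 1.1]:
        assert abs(series(x, i, k) - closed_form(x, i, k)) < 1e-9
print("closed form agrees at all sampled points")
```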
|
858,952 | <p>related to <a href="https://math.stackexchange.com/questions/830599/one-sided-limit-lim-x-rightarrow-0-fx-where-wolfram-alpha-does-not-hel">this question</a>:</p>
<p>Is there an easy closed-form term for</p>
<p>$$\sum_{j=k}^{\infty} \frac{x^j}{j!}e^{-x},$$</p>
<p>thus when the sum starts at a constant $k$ instead of $1$?</p>
<p>EDIT:
Thanks for your help. Is there a chance to simplify this sum-term further? Because this is not really what I expect when I talk about a closed-form term. </p>
<p>A Little bit more of context might help, maybe:</p>
<p>I have $$f(n,p)=\sum_{j=k}^{\infty} \frac{(np)^j}{j!} e^{-np}$$ and the claim is that the partial derivative is $$\frac{\partial f(n,p)}{\partial n}=\frac{p (np)^{k-1}}{(k-1)!}e^{-np}$$ but I have no idea how to get to this. </p>
<p>Because to me: </p>
<p>$$\frac{\partial f(n,p)}{\partial n}=\sum_{j=k}^{\infty} \left( \frac{p (np)^{j-1}}{(j-1)!} e^{-np} -\frac{p (np)^j}{j!} e^{-np} \right)$$
but then I am stuck.</p>
| draks ... | 19,341 | <p>$$
\sum_{j=k}^{\infty} \frac{x^j}{j!}e^{-x}=
\left(\sum_{j=k}^{\infty} \frac{x^j}{j!}+\sum_{j=0}^{k-1} \frac{x^j}{j!}-\sum_{j=0}^{k-1} \frac{x^j}{j!}\right)e^{-x}\\
=\left(e^x-\sum_{j=0}^{k-1} \frac{x^j}{j!}\right)e^{-x}=1-e^{-x}\sum_{j=0}^{k-1} \frac{x^j}{j!}
$$</p>
|
956,235 | <p>This may be a little low-brow for this forum, but I'm trying to figure out what the common base number set is between two other sets of numbers. Here's the situation: I have received quotes from two vendors for a list of products that they sell, and the prices they have quoted are:</p>
<pre><code> Vendor 1's Price Vendor 2's Price
Item #1 $9.76 $9.12
Item #2 $15.60 $14.56
Item #3 $9.76 $9.12
Item #4 $15.60 $14.56
Item #5 $9.76 $9.12
</code></pre>
<p>Each vendor is taking a certain "list price" and each is applying its own margin. Is there a way to figure out what the "list price" is that these two vendors are working off of?</p>
<p>Thanks very much!!!</p>
| Leucippus | 148,155 | <p>Given $s(t) = 32 + 112 t - 16 t^{2}$ then $v(t)$, being the derivative of $s(t)$, is $v(t) = 112 - 32 t$. If $v(t) = 0$ then $t = 7/2$. Now, $s(7/2) = 32 + 56 \cdot 7 - 4 \cdot 49 = 228$. </p>
<p>To find the velocity at impact solve for $s(t) = 0$. This yields $16 t^{2} - 112 t - 32 = 0$, or $t^{2} - 7 t - 2 = 0$. Solving this equation leads to the two possible values
\begin{align}
t = \frac{7}{2} \pm \frac{\sqrt{49+8}}{2} = \frac{7}{2} \pm \frac{\sqrt{57}}{2}.
\end{align}
The root with the minus sign is negative and is to be tossed out of consideration, leaving $2 t_{I} = 7 + \sqrt{57}$. Now,
\begin{align}
v(t_{I}) = 112 - 32 t_{I} = 112 - 16 \cdot 7 - 16 \cdot \sqrt{57} = - 16 \sqrt{57} = -120.79735\cdots .
\end{align}</p>
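<p>The arithmetic above is easy to confirm numerically (my addition):</p>

```python
import math

def s(t):
    # Height: s(t) = 32 + 112 t - 16 t^2
    return 32 + 112 * t - 16 * t ** 2

def v(t):
    # Velocity: v(t) = s'(t) = 112 - 32 t
    return 112 - 32 * t

t_peak = 7 / 2
t_impact = (7 + math.sqrt(57)) / 2  # positive root of s(t) = 0

print(s(t_peak))    # 228.0, the maximum height
print(v(t_impact))  # about -120.797, i.e. -16*sqrt(57)
```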
|
748,325 | <p>In order to prove non-uniqueness of singular vectors when a repeated singular value is present, the book (Trefethen), argues as follows: Let $\sigma$ be the first singular value of A, and $v_{1}$ the corresponding singular vector. Let $w$ be another linearly independent vector such that $||Aw||=\sigma$, and construct a third vector $v_{2}$ belonging to span of $v_{1}$ and $w$, and orthogonal to $v_{1}$. All three vectors are unitary, so $w=av_{1}+bv_{2}$ with $|a|^2+|b|^2=1$, and $v_{2}$ is constructed (Gram-Schmidt style) as follows:</p>
<p>$$ {v}_{2}= \dfrac{{w}-({v}_{1}^{T} w ){v}_{1}}{|| {w}-({v}_{1}^{T} {w} ){v}_{1} ||_{2}}$$</p>
<p>Now, Trefethen says, $||A||=\sigma$, so $||Av_{2}||\le \sigma$ but this must be an equality (and so $v_{2}$ is another singular vector relative to $\sigma$), since otherwise we would have $||Aw||<\sigma$, in contrast with the hypothesis.</p>
<p>How that? I cannot see any elementary application of triangle inequality or Schwarz inequality to prove this claim.</p>
<p>I am pretty well convinced of partial non-uniqueness of SVD in certain situations. Other proofs are feasible, but I wish to undestand this specific algebraic step of this specific proof.</p>
<p>Thanks.</p>
| epi163sqrt | 132,007 | <p><em>Note:</em> Please note that this answer was initially incorrect. Thanks to <a href="https://math.stackexchange.com/users/40119/littleO">littleO</a> who drew attention to my mistake. The essential argument at the end is now based on the answer already given by <a href="https://math.stackexchange.com/users/15381/ewan-delanoy">Ewan Delanoy</a>. So, in fact, the proof from Trefethen remains <em>mysterious</em> for me too :-;</p>
<hr>
<p>I'm not sure if the arguments of Trefethen regarding the part of the proof you are interested in are sufficient.</p>
<p>In order to be able to consider the text carefully, the following paragraph quotes verbatim the relevant part of the proof of Theorem 4.1 from his <a href="http://javierolivares.files.wordpress.com/2009/04/numerical-linear-algebra-trefethenbau.pdf" rel="nofollow noreferrer">Numerical Linear Algebra</a>.</p>
<blockquote>
<p><em>From Numerical Linear Algebra, part of proof of Theorem 4.1 (Trefethen):</em></p>
<p>First we note that $\sigma_1$ is uniquely determined by the condition that it is equal to $\left\Vert A\right\Vert_2$, as follows from $(4.4)$. Now suppose, that in addition to $v_1$, there is another linearly independent vector $w$ with $\left\Vert w\right\Vert_2=1$ and $\left\Vert Aw\right\Vert_2=\sigma_1$. Define a unit vector $v_2$, orthogonal to $v_1$, as a linear combination of $v_1$ and $w$,
$$v_2=\frac{w-\left(v_1^{\ast}w\right)v_1}{\left\Vert w_1-\left(v_1^{\ast}w\right)v_1\right\Vert_2}$$
Since $\left\Vert A\right\Vert_2=\sigma_1$, $\left\Vert Av_2\right\Vert_2\leq\sigma_1$; but this must be an equality, for otherwise, since $w=v_{1}c+v_{2}s$ for some constants $c$ and $s$ with $\left\vert c\right\vert^2+\left\vert s\right\vert^2=1$, we would have $\left\Vert Aw\right\Vert_2<\sigma_1$.</p>
</blockquote>
<p>Let's analyse the arguments of Trefethen relatively detailed:</p>
<p><strong>First step:</strong> <em>Starting Situation</em></p>
<p>We know from the beginning of the proof (not stated here) that there is a vector $v_1$ with $\left\Vert v_1\right\Vert_2=1$ and we also set $\sigma_1=\left\Vert A\right\Vert_2$. According to the text above we further assume that this vector is a singular vector with $\left\Vert Av_1\right\Vert_2=\sigma_1$.</p>
<p><strong>Second step:</strong> <em>Uniqueness via indirect argument</em></p>
<p>We consider now (indirect argument) <em>any</em> vector $w$ which is linearly independent to $v_1$ and which fulfills in addition to $v_1$, $\left\Vert w\right\Vert_2=1$ and $\left\Vert Aw\right\Vert_2=\sigma_1$.</p>
<p><strong>Third step (main idea of the proof):</strong> <em>Create a singular vector $v_2$ violating the distinctness assumption of the singular values stated in the theorem</em></p>
<p>Based on the assumption that $\left\Vert Aw\right\Vert_2=\sigma_1$ we create a second <em>singular vector</em> $v_2$, which <em>also</em> fulfills $\left\Vert Av_2\right\Vert_2=\sigma_1$ and so violates the precondition of distinctness of the singular values.</p>
<p>Since $v_1$ and $w$ are linearly independent, we can create (e.g. with Gram-Schmidt) a vector $v_2$ with $\left\Vert v_2\right\Vert =1$ and
which is orthogonal to $v_1$. Since all vectors $u$ with $\left\Vert u\right\Vert =1$ fulfill by definition (of the supremum) $\left\Vert Au\right\Vert_2 \leq \left\Vert A\right\Vert_2$, we get $\left\Vert Av_2\right\Vert_2 \leq \left\Vert A\right\Vert_2=\sigma_1$.</p>
<blockquote>
<p>Now with the help of $w$ we can show that
$v_2$ even has to fulfill $\left\Vert Av_2\right\Vert_2 = \left\Vert A\right\Vert_2=\sigma_1$, since if otherwise $\left\Vert Av_2\right\Vert_2 <\sigma_1$ we get</p>
<p>\begin{align}
\left\Vert Aw\right\Vert_2^{2}&=\left\Vert A(c v_1+s v_2)\right\Vert^{2}_2\\
&=\left\vert c\right\vert^2\left\Vert Av_1\right\Vert_2^2+2\mathsf{Re}\left(c\bar{s}\left<Av_1,Av_2\right>\right)+\left\vert s\right\vert^2\left\Vert Av_2\right\Vert_2^2\\
&=\left\vert c\right\vert^2\sigma_1^2+\left\vert s\right\vert^2\underbrace{{\left\Vert Av_2\right\Vert_2^2}}_{<\sigma_1^2}\qquad(\ast)\\
&<\sigma_1^2\left(\left\vert c\right\vert^2+\left\vert s\right\vert^2\right)\\
&=\sigma_1^2
\end{align}</p>
</blockquote>
<p>and this contradicts our assumption that $\left\Vert Aw\right\Vert_2=\sigma_1$. </p>
<p>So we get a <em>second singular vector</em> $v_2$ with $\left\Vert Av_2\right\Vert_2=\sigma_1$ and this violates the condition of <em>distinct</em> singular values stated in the theorem.</p>
<hr>
<p><em>Please note, that in line ($\ast$) the essential argument from <a href="https://math.stackexchange.com/users/15381/ewan-delanoy">Ewan Delanoy</a> which states that $Av_1$ and $Av_2$ are orthogonal and so the inner product vanishes is used. I don't see a proper argument in the proof from Trefethen, which I could use instead.</em></p>
|
2,900,294 | <p>I tried this and I only got $\sin( 53^\circ)= \sin( 127^\circ).$ How do I find the equal value in cosine or tangent? Please help me out. Thank you!</p>
| paulplusx | 578,155 | <p>Use:</p>
<p>$\sin(90^\circ + \theta )= \cos(\theta)$</p>
<p>$\sin(90^\circ - \theta )= \cos(\theta)$.</p>
<p>So:</p>
<p>$\sin(53^\circ)=\sin(90^\circ - 37^\circ )= \cos(37^\circ)$.</p>
<p>It would be better if you learn more about these identities here: <a href="https://www.khanacademy.org/math/trigonometry/trigonometry-right-triangles/sine-and-cosine-of-complementary-angles/a/sine-and-cosine-are-cofunctions" rel="nofollow noreferrer">Khan Academy</a>.</p>
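<p>These identities are easy to confirm numerically (my addition):</p>

```python
import math

deg = math.radians
# sin(53°) equals its supplement's sine; the cofunction identities
# convert it to values of cosine and tangent/cotangent at 37°.
assert math.isclose(math.sin(deg(53)), math.sin(deg(127)))
assert math.isclose(math.sin(deg(53)), math.cos(deg(37)))
assert math.isclose(math.tan(deg(53)), 1 / math.tan(deg(37)))
print("all identities verified")
```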
|
1,043,094 | <p>I have to find the limit of following</p>
<p><span class="math-container">$$\lim_{x \to 0}\left(\frac{1}{x} - \frac{1}{x^2}\right)$$</span></p>
<p>I have no idea how to start this one off.
How would I do it?</p>
<p>Do I just substitute the <span class="math-container">$0$</span>? It doesn't look that easy and simple. The answer says it's negative infinity.</p>
<p>Please show me a solution without graphing(unless for better explanation).</p>
| Aaron Meyerowitz | 84,560 | <p>For $f(x)=\frac{1}{x}-\frac{1}{x^2}$, if you want to know what $\lim_{x \rightarrow 0}f(x)$ is, try looking at $f(\frac{1}{10}),f(\frac{-3}{100})$ and selected similar values. That will start you off and give you an idea what the answer might be (or why the book says that the answer is what it is.) Once you have decided that the answer is $- \infty$ you might want to prove it. Graphing is, in some sense, a way to look at $f(x)$ for lots of values at the same time. </p>
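<p>If you want to automate that sampling, a tiny script (my addition) does the job:</p>

```python
# Tabulate f(x) = 1/x - 1/x^2 at points approaching 0, as suggested above.
def f(x):
    return 1 / x - 1 / x ** 2

for x in [0.1, -0.03, 0.01, -0.01, 0.001]:
    print(x, f(x))
# f(0.1) is 10 - 100 = -90 (up to rounding); the values plunge toward
# minus infinity from both sides, since 1/x^2 dominates 1/x near 0.
```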
|
1,043,094 | <p>I have to find the limit of following</p>
<p><span class="math-container">$$\lim_{x \to 0}\left(\frac{1}{x} - \frac{1}{x^2}\right)$$</span></p>
<p>I have no idea how to start this one off.
How would I do it?</p>
<p>Do I just substitute the <span class="math-container">$0$</span>? It doesn't look that easy and simple. The answer says it's negative infinity.</p>
<p>Please show me a solution without graphing(unless for better explanation).</p>
| Milo Brandt | 174,927 | <p>A useful thing to do would be to make the substitution $u=\frac{1}x$. Then, this becomes
$$\lim_{u\rightarrow\infty}(u-u^2)$$
(or the analogous limit as $u\to-\infty$); but since $u$ grows much more slowly than $u^2$, the expression in the limit must decrease without bound - in particular, since $u^2>2u$ if $u>2$, we get that $u-u^2<-u$ if $u$ is at least $2$, so the expression is bounded above by $-u$, which goes to $-\infty$.</p>
|
1,043,094 | <p>I have to find the limit of following</p>
<p><span class="math-container">$$\lim_{x \to 0}\left(\frac{1}{x} - \frac{1}{x^2}\right)$$</span></p>
<p>I have no idea how to start this one off.
How would I do it?</p>
<p>Do I just substitute the <span class="math-container">$0$</span>? It doesn't look that easy and simple. The answer says it's negative infinity.</p>
<p>Please show me a solution without graphing(unless for better explanation).</p>
| egreg | 62,967 | <p>As written, the limit is in the so-called “indeterminate form $\infty-\infty$”, so we want to rewrite it in another way to start with:
$$
\frac{1}{x}-\frac{1}{x^2}=\frac{x-1}{x^2}
$$
It's not restrictive to work under the assumption that $-1<x<1$; thus $|x|<1$ and $|x|^2<|x|$, that is to say
$$
\frac{1}{x^2}>\frac{1}{|x|}
$$
Since $\lim_{x\to0}(x-1)=-1$, we can restrict ourselves to an interval around $0$ where $x-1<-1/2$, so
$$
\frac{x-1}{x^2}<\frac{-1/2}{x^2}<-\frac{1}{2|x|}
$$
Since
$$
\lim_{x\to0}-\frac{1}{2|x|}=-\infty
$$
we are done.</p>
<hr>
<p>However, this can be stated in greater generality; if you know that</p>
<ol>
<li>$\displaystyle\lim_{x\to a}f(x)=l>0$ (possibly $l=\infty$)</li>
<li>$\displaystyle\lim_{x\to a}g(x)=0$</li>
<li>$g(x)>0$ in a neighborhood of $a$ ($a$ excluded)</li>
</ol>
<p>then
$$
\lim_{x\to a}\frac{f(x)}{g(x)}=\infty
$$</p>
<p>Note that the limit can also be for $x\to a^+$ or $x\to a^-$; changing into $l<0$ or $g(x)<0$ is easy with the “rule of signs”.</p>
<p>The proof is just the same as before: since $\lim_{x\to a}f(x)=l>0$, we can restrict ourselves to a (punctured) neighborhood of $a$ where $f(x)>k$ for some $k>0$. Then, since $\lim_{x\to a}g(x)=0$, for any $M>0$ we can choose $\delta>0$ so that, for $0<|x-a|<\delta$, $|g(x)-0|<k/M$. Thus, as we can also assume $g(x)>0$, $1/g(x)>M/k$ and
$$
\frac{f(x)}{g(x)}>k\frac{M}{k}=M
$$
This is exactly proving that $\lim_{x\to a}f(x)/g(x)=\infty$.</p>
|
864,237 | <p>Let's take a short exact sequence of groups
$$1\rightarrow A\rightarrow B\rightarrow C\rightarrow 1$$
I understand what it says: the image of each homomorphism is the kernel of the next one, so the one between $A$ and $B$ is injective and the one between $B$ and $C$ is surjective. I get it. But other than being a sort of curiosity, what is it really telling me?</p>
| Hagen von Eitzen | 39,174 | <p>It gets more interesting as soon as the diagrams get more involved. For example: If the following diagram with exact rows commutes, and the outer columns are both mono/epi/iso-morphisms, then the middle one is also a mono/epi/iso-morphism:
$$
\begin{matrix}
0\to &A&\to &B&\to &C&\to 0\\
&\downarrow&&\downarrow&&\downarrow&\\
0\to &A'&\to& B'&\to &C'&\to 0\\
\end{matrix}$$
Translating this statement fully into kernel/image lingo, would make it much less graspable.</p>
|
1,725,084 | <p>I am currently trying to practice the technique of transfinite induction with the following problem: </p>
<p>Suppose that $X$ is a non-empty subset of an ordinal $\alpha$, so that $X$ is well-ordered by $\in$. Show that $\text{type}(X; \in) \leq \alpha$. </p>
<hr>
<p>My approach thus far: </p>
<p>Let $\beta = \text{type}(X; \in)$ and $f: X \rightarrow \beta$ be an order-preserving isomorphism. Now we show that $f(\xi) \leq \xi$ for all $\xi \in X$ by transfinite induction. </p>
<p>Base Case: Let $\xi_{0} \in X$ be minimal with respect to $\in$ in $X$. As $f$ preserves order, it must be the case that $f(\xi_{0}) = \emptyset$ and so $f(\xi_{0}) \leq \xi_{0}$. </p>
<p>Inductive Step: Suppose that $f(\xi) \leq \xi$ for all $\xi < \gamma$ for some $\gamma$. Now we deduce that $f(\gamma) \leq \gamma$. </p>
<blockquote>
<p>My question is how to prove this crucial step $f(\gamma) \leq \gamma$. </p>
</blockquote>
<p>After proving this, then by transfinite induction we have that $f(\xi) \leq \xi$ for all $\xi \in X$ and so $\beta = f(X) \subseteq \alpha$ and so $\text{type}(X; \in) = \beta \leq \alpha$ as desired. </p>
| hmakholm left over Monica | 14,366 | <p>Things will go a bit smoother if you strengthen the induction hypothesis to include "... and $f(X\cap\gamma)$ is an initial segment of $\beta$".</p>
<p>If $\gamma\notin X$ then there's nothing new to prove. So assume that $\gamma\in X$.</p>
<p>Now, for $f$ to be an order isomorphism, $f(\gamma)$ has to be the smallest ordinal that is not an $f(\xi)$ with $\xi<\gamma$. Due to the induction hypothesis we cannot have $f(\xi)=\gamma$ for any such $\xi$, so $f(\gamma)$ is the minimum of a class of ordinals that includes $\gamma$ itself. Therefore $f(\gamma)\le \gamma$.</p>
<hr>
<p>By the way, you don't need any explicit base case when you're doing ordinal induction. The induction scheme itself supplies the necessary base case -- the induction hypothesis "such-and-such holds for all $\xi<\alpha$" is vacuously true (and therefore not helpful) when $\alpha=0$.</p>
|
949,512 | <p>How do mathematicians define inner product on a vector space. </p>
<p>For example: $a = (x_1,x_2)$ & $ b =(y_1,y_2) $ in $ \mathbb{R}^2.$ </p>
<p>Define $\langle a,b\rangle= x_1y_1-x_2y_1-x_1y_2+4x_2y_2$. It's an inner product.</p>
<p>But how does one motivate this inner product? I think there is some sort of matrix multiplication between some vectors.</p>
| rych | 73,934 | <p>Inner product $\langle u,v\rangle$ doesn't depend on the choice of basis. In the given basis $\langle u,v \rangle =x^TMy$, where $M=\begin{pmatrix}1&-1\\-1&4\end{pmatrix}$ is the symmetric matrix read off from the coefficients.</p>
<p>Every real symmetric matrix is orthogonally diagonalizable: $M=Q^TDQ$, and in the new, "rotated", coordinates $\langle u,v\rangle=x^TMy=x^TQ^TDQy=(Qx)^TD(Qy)=\tilde{x}^TD\tilde{y}$.</p>
<p>The diagonal matrix $D$ consists of the (positive) eigenvalues of the original matrix $M$. By applying a non-uniform scaling transformation one could use the scaled coordinates to write a more familiar $\langle u,v\rangle=\hat{x}^T\hat{y}$.</p>
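<p>A quick numerical check of this correspondence (a Python sketch; the matrix $M$ below is inferred by matching the question's coefficients):</p>

```python
import numpy as np

# The question's form <a,b> = x1*y1 - x2*y1 - x1*y2 + 4*x2*y2 corresponds to
# x^T M y with the symmetric matrix M below (read off by matching coefficients).
M = np.array([[1.0, -1.0],
              [-1.0, 4.0]])

def inner(x, y):
    return x[0]*y[0] - x[1]*y[0] - x[0]*y[1] + 4*x[1]*y[1]

rng = np.random.default_rng(0)
x, y = rng.normal(size=2), rng.normal(size=2)
matches = bool(np.isclose(inner(x, y), x @ M @ y))

# Positive eigenvalues confirm positive-definiteness, so <.,.> is an inner product.
positive = bool(np.all(np.linalg.eigvalsh(M) > 0))
```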
|
576,271 | <p>I integrated to get $\frac{3}{1-x}$, turned it into a power series $3x^n$, and differentiated to get the series $3nx^{n-1}$ which is incorrect.</p>
| Ross Millikan | 1,827 | <p><a href="http://www.wolframalpha.com/input/?i=series%203/%281-x%29%5E2" rel="nofollow">Wolfram</a> agrees with you that $\frac 3{(1-x)^2}=\sum_{n=1}^{\infty} 3nx^{n-1}$</p>
|
576,271 | <p>I integrated to get $\frac{3}{1-x}$, turned it into a power series $3x^n$, and differentiated to get the series $3nx^{n-1}$ which is incorrect.</p>
| Sammy Black | 6,509 | <p>Shift the index:
$$
\frac{3}{(1 - x)^2} = \sum_{n = 1}^{\infty} 3nx^{n - 1} = \sum_{m = 0}^{\infty} 3(m + 1)x^m.
$$</p>
<p>(Here we've let $m = n - 1$, so $n = m + 1$.)</p>
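<p>A quick partial-sum check of the shifted series (an illustrative Python sketch):</p>

```python
# Partial-sum check of 3/(1-x)^2 = sum_{m>=0} 3(m+1) x^m at a point |x| < 1.
def partial_sum(x, terms=200):
    return sum(3 * (m + 1) * x**m for m in range(terms))

x = 0.3
err = abs(partial_sum(x) - 3 / (1 - x)**2)
```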
|
2,218,924 | <p>$ \displaystyle \lim_{n\to \infty} \sum_{k=1}^n \frac{k^4}{n^4}=$ ?</p>
<p>I found it difficult to tranform it into the integral form by the definition of Riemann sum, which is a way to solve similar problems.</p>
| Community | -1 | <p>There is probably a typo: as written, $\sum_{k=1}^n\frac{k^4}{n^4}\approx\frac n5\to\infty$, so a factor $\frac1n$ seems to be missing.</p>
<p>By the Riemannian summation,</p>
<p>$$\lim_{n\to\infty}\frac1n\sum_{k=1}^n\frac{k^p}{n^p}=\int_0^1 x^pdx=\frac1{p+1}.$$</p>
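<p>A numerical sketch of both points (Python, illustrative only): with the $\frac1n$ factor the sums tend to $\frac1{p+1}$, and without it they grow without bound.</p>

```python
# With the 1/n factor the sums converge to 1/(p+1); without it they grow like n/(p+1).
def riemann(n, p=4):
    return sum((k / n)**p for k in range(1, n + 1)) / n

n = 10**5
approx = riemann(n)               # should be close to 1/5
without_factor = riemann(n) * n   # the sum as printed in the question: ~ n/5
```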
|
1,370,576 | <p>I am working on a trigonometry question at the moment and am very stuck. I have looked through all the tips to solving it and I cant seem to come up with the right answer. The problem is </p>
<blockquote>
<p>What is exact value of<br>
$$\cot \left(\frac{7\pi}{6}\right)? $$</p>
</blockquote>
| Emilio Novati | 187,568 | <p>Hint: $\cot x=\dfrac{\cos x}{\sin x}$ and</p>
<p>$$
\cos \left(\dfrac{7 \pi}{6}\right)=\cos \left(\pi+\dfrac{\pi}{6}\right)
$$</p>
<p>$$
\sin \left(\dfrac{7 \pi}{6}\right)=\sin \left(\pi+\dfrac{\pi}{6}\right)
$$</p>
<p>now you can use sum formulas or reduction to the first quadrant.</p>
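<p>Following the hint numerically (a Python sanity check; in the third quadrant both $\cos$ and $\sin$ are negative, so the quotient is positive and works out to $\sqrt3$):</p>

```python
import math

# cot(7*pi/6): cos and sin are both negative in the third quadrant,
# so the quotient is positive; the exact value is sqrt(3).
angle = 7 * math.pi / 6
cot = math.cos(angle) / math.sin(angle)
err = abs(cot - math.sqrt(3))
```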
|
274,961 | <p>I want to calculate the determinant along the last slice of a 3-dimensional array. So for I do this by slow the <code>Table</code> command. I know that for time reasons I should use <code>Map</code> or <code>Apply</code>, however couldn't successful solve the problem.</p>
<pre><code>m = 2; n = 3; o = 10;
SeedRandom[0];
x = RandomReal[{-1, 1}, {m, n, o}];
Table[Sqrt[Det[x[[;;,;;,i]].Transpose[x[[;;,;;,i]]]]],{i,1,o}]
Map[Sqrt[Det[# . Transpose[#]]] &, {x}, {2}]
(*{1.04663,0.437045,0.479911,0.260814,0.205563,0.171896,1.20112,1.00502,0.855893,0.125758}*)
(*{{5.38795,4.19589}}*)
</code></pre>
| userrandrand | 86,543 | <p><strong>Update</strong></p>
<p>I was curious to see what the speed would be if one used the neural network framework to do the computation. It turns out that it is faster than using <code>Map</code> and the computation is particularly fast with a GPU. However, error accumulates with the number of components.</p>
<p>The network will be made of a single function layer reproducing the code of @DanielHuber but within the framework of neural networks (without much of a network as there will not be any weights or complicated structure).</p>
<p>The tensor/array on which the function will operate has the same structure as in the question but with more components :</p>
<pre><code>m = 2; n = 3; o = 10^6;
SeedRandom[0];
x = RandomReal[{-1, 1}, {m, n, o}];
</code></pre>
<p>The function using a neural network layer and higher order operator.</p>
<pre><code>f = FunctionLayer[Sqrt@Det[# . Transpose[#]] &];
mapf = NetMapOperator[f];
</code></pre>
<p>Comparison between the two methods</p>
<pre><code>mapf[Transpose[x, {2, 3, 1}]]; // AbsoluteTiming
Map[Sqrt[Det[# . Transpose[#]]] &, Transpose[x, {2, 3, 1}]]; // AbsoluteTiming
</code></pre>
<p><code>(* {0.22, Null} *)</code></p>
<p><code>(* {1.1, Null} *)</code></p>
<p>Using a GPU (NVIDIA RTX 3050 Laptop) as the target device:</p>
<p><code>(* {0.067, Null} *)</code></p>
<p>However, the error between the two methods increases with the number of components on which <code>Map</code> operates (tensor length/dimension):</p>
<pre><code>Table[m = 2; n = 3;
SeedRandom[0];
x = RandomReal[{-1, 1}, {m, n, o}];
{o, Max@
Abs[Map[Sqrt[Det[# . Transpose[#]]] &, Transpose[x, {2, 3, 1}]]/
mapf[Transpose[x, {2, 3, 1}]] - 1]}, {o,
Floor[10^(Subdivide[1, 6, 20])]}] //
ListLogLogPlot[#, PlotRange -> All] &
</code></pre>
<p><strong>Maximum relative error (vertical axis) vs tensor dimension on which Map operates (horizontal axis)</strong></p>
<p><a href="https://i.stack.imgur.com/W4ldH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W4ldH.png" alt="error" /></a></p>
<hr />
<h2>Previous version</h2>
<p>(still interesting for the syntax)</p>
<p>This is a natural task for <a href="https://reference.wolfram.com/language/ref/ArrayReduce.html" rel="nofollow noreferrer"><code>ArrayReduce</code></a> introduced in 2020 (version 12.1). However, it is experimental and significantly slower than just using <code>Table</code> (for example using <code>Norm</code> with the example provided by OP <code>ArrayReduce</code> <strong>was 16 times slower</strong> than <code>Table</code>).</p>
<p>Thus using <code>ArrayReduce</code> at this moment is more for the convenience of writing code quickly than having high performance.</p>
<h3>Discussion on how to use <code>ArrayReduce</code></h3>
<p>(code section below)</p>
<p>In the original <code>Table</code> version in the question, applying a function <code>h</code> to the two first levels of an array <code>Array[a, {2, 2, 3}]</code> would be:</p>
<pre><code>Table[h@Array[a, {2, 2, 3}][[All, All, s]], {s, 3}]
</code></pre>
<p>To visualize the output of that code one can include a composition with <code>MatrixForm</code>:</p>
<p><strong>Note :</strong> <em>This is only to visualize the output, using <code>MatrixForm</code> for purposes other than visualization, in particular within calculations, can lead to problems. One may search for such examples here on stack exchange</em>.</p>
<pre><code>Table[h@MatrixForm@Array[a, {2, 2, 3}][[All, All, s]], {s, 3}]
</code></pre>
<p>Out: <span class="math-container">$$\left\{h\left(\left(
\begin{array}{cc}
a(1,1,1) & a(1,2,1) \\
a(2,1,1) & a(2,2,1) \\
\end{array}
\right)\right),h\left(\left(
\begin{array}{cc}
a(1,1,2) & a(1,2,2) \\
a(2,1,2) & a(2,2,2) \\
\end{array}
\right)\right),h\left(\left(
\begin{array}{cc}
a(1,1,3) & a(1,2,3) \\
a(2,1,3) & a(2,2,3) \\
\end{array}
\right)\right)\right\}$$</span></p>
<p>How can one do this, <em>functional style</em> ?</p>
<pre><code>ArrayReduce[h@*MatrixForm, Array[a, {2, 2, 3}], {{1}, {2}}]
</code></pre>
<p><code>ArrayReduce</code> computed the function <code>h</code> on the levels 1 and 2 where the input of the function <code>h</code> is a matrix made from these levels.</p>
<p>Why all those <code>{}</code> ? Why not just <code>{1,2}</code> instead of <code>{{1},{2}}</code> ?</p>
<p>Well, let's check</p>
<pre><code>ArrayReduce[h@*MatrixForm, Array[a, {2, 2, 3}], {1, 2}]
</code></pre>
<p><span class="math-container">$$\{h(\{a(1,1,1),a(1,2,1),a(2,1,1),a(2,2,1)\}),h(\{a(1,1,2),a(1,2,2),a(2,1,2),a(2,2,2)\}),h(\{a(1,1,3),a(1,2,3),a(2,1,3),a(2,2,3)\})\}$$</span></p>
<p>The output is the same as if we had kept the <code>{{1},{2}}</code> but we flattened the input matrix before feeding it to <code>h</code>.</p>
<p>Maybe it might be hard to figure out how this generalizes so let's add another dimension.</p>
<pre><code>ArrayReduce[h@*MatrixForm, Array[a, {2, 2, 2, 2}], {1, 2, 3}]
</code></pre>
<p><span class="math-container">$$\{h(\{a(1,1,1,1),a(1,1,2,1),a(1,2,1,1),a(1,2,2,1),a(2,1,1,1),a(2,1,2,1),a(2,2,1,1),a(2,2,2,1)\}),h(\{a(1,1,1,2),a(1,1,2,2),a(1,2,1,2),a(1,2,2,2),a(2,1,1,2),a(2,1,2,2),a(2,2,1,2),a(2,2,2,2)\})\}$$</span></p>
<p>Same ol' same ol'</p>
<p>take levels 1,2,3 -> flatten -> feed it to <code>h</code></p>
<p>Ok,let's add some brackets</p>
<pre><code>ArrayReduce[h@*MatrixForm, Array[a, {2, 2, 2, 2}], {{1}, {2}, {3}}]
</code></pre>
<p><span class="math-container">$$\left\{h\left(\left(
\begin{array}{cc}
\{a(1,1,1,1),a(1,1,2,1)\} & \{a(1,2,1,1),a(1,2,2,1)\} \\
\{a(2,1,1,1),a(2,1,2,1)\} & \{a(2,2,1,1),a(2,2,2,1)\} \\
\end{array}
\right)\right),h\left(\left(
\begin{array}{cc}
\{a(1,1,1,2),a(1,1,2,2)\} & \{a(1,2,1,2),a(1,2,2,2)\} \\
\{a(2,1,1,2),a(2,1,2,2)\} & \{a(2,2,1,2),a(2,2,2,2)\} \\
\end{array}
\right)\right)\right\}$$</span></p>
<p>That might be hard to read. Without the <code>MatrixForm</code></p>
<p><code>(* {h[{{{a[1, 1, 1, 1], a[1, 1, 2, 1]}, {a[1, 2, 1, 1], a[1, 2, 2, 1]}}, {{a[2, 1, 1, 1], a[2, 1, 2, 1]}, {a[2, 2, 1, 1], a[2, 2, 2, 1]}}}], h[{{{a[1, 1, 1, 2], a[1, 1, 2, 2]}, {a[1, 2, 1, 2], a[1, 2, 2, 2]}}, {{a[2, 1, 1, 2], a[2, 1, 2, 2]}, {a[2, 2, 1, 2], a[2, 2, 2, 2]}}}]} *)</code></p>
<p>Not much better if not worse but notice the <code>h[{{{</code> which means that <code>h</code> is fed a 3-tensor since there are 3 brackets. So with {{1},{2},{3}} <code>h</code> is fed the 3-tensor corresponding to the first three levels. Ok, last thing before we apply this to the question asked.</p>
<p>Given the discussion above, what should we expect from <code>ArrayReduce[h@*MatrixForm, Array[a, {2, 2, 2, 2}], {{1, 2}, {3}}]</code>?
From the first example we see that {{something},{something}} means <code>h</code> is fed a 2-tensor, i.e. a matrix, and {{something},{something},{something}} means it is fed a 3-tensor. So whatever happens, we know that <code>h</code> will be fed a matrix. Next, we said that using <code>{1,2}</code> instead of <code>{{1},{2}}</code> means that levels 1 and 2 are flattened. So the result we find is :</p>
<pre><code>ArrayReduce[h@*MatrixForm, Array[a, {2, 2, 2, 2}], {{1, 2}, {3}}]
</code></pre>
<p><span class="math-container">$$\left\{h\left(\left(
\begin{array}{cc}
a(1,1,1,1) & a(1,1,2,1) \\
a(1,2,1,1) & a(1,2,2,1) \\
a(2,1,1,1) & a(2,1,2,1) \\
a(2,2,1,1) & a(2,2,2,1) \\
\end{array}
\right)\right),h\left(\left(
\begin{array}{cc}
a(1,1,1,2) & a(1,1,2,2) \\
a(1,2,1,2) & a(1,2,2,2) \\
a(2,1,1,2) & a(2,1,2,2) \\
a(2,2,1,2) & a(2,2,2,2) \\
\end{array}
\right)\right)\right\}$$</span></p>
<p>Ok, let's apply this to the original question.</p>
<h3>Applying ArrayReduce to the question.</h3>
<p>First, we might feel that all the brackets are tedious and we are not planning on flattening matrices anytime soon or at least not with <code>ArrayReduce</code>. So we can make a custom array reduce:</p>
<pre><code>arrayreduce = ArrayReduce[#1, #2, List /@ #3] &
</code></pre>
<p>Now we can obtain the result by OP:</p>
<pre><code>arrayreduce[Sqrt@Det[# . Transpose@#] &, x, {1, 2}] // AbsoluteTiming
</code></pre>
<p>Out: <code>(* {0.000292, {1.04663, 0.437045, 0.479911, 0.260814, 0.205563, 0.171896, 1.20112, 1.00502, 0.855893, 0.125758}} *)</code></p>
<p>Verify that it worked:</p>
<pre><code>Table[Sqrt[Det[x[[;; , ;; , i]] . Transpose[x[[;; , ;; , i]]]]],
{i, 1, o}] == arrayreduce[Sqrt@Det[# . Transpose@#] &,
x,
{1, 2}]
</code></pre>
<p>Compare the timing:</p>
<p><code>ArrayReduce</code>: <span class="math-container">$2.36\times 10^{-4} s$</span></p>
<p><code>Table</code>: <span class="math-container">$7.4\times 10^{-5} s$</span></p>
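<p>As an aside for readers comparing with other systems (not part of the Mathematica timing discussion above): the same per-slice computation vectorizes in NumPy, whose <code>linalg.det</code> broadcasts over leading axes, so the explicit loop over slices disappears.</p>

```python
import numpy as np

# NumPy analogue of the per-slice determinants: linalg.det broadcasts over
# leading axes, so no explicit loop over slices is needed.
rng = np.random.default_rng(0)
m, n, o = 2, 3, 10
x = rng.uniform(-1, 1, size=(m, n, o))

# loop version, mirroring Table[...] in the question
loop = np.array([np.sqrt(np.linalg.det(x[:, :, i] @ x[:, :, i].T))
                 for i in range(o)])

# vectorized version: move the slice axis to the front, then one batched det
xs = np.moveaxis(x, 2, 0)                        # shape (o, m, n)
vec = np.sqrt(np.linalg.det(xs @ np.swapaxes(xs, 1, 2)))

agree = bool(np.allclose(loop, vec))
```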
|
1,052,180 | <p>I need to find a connected graph $G = (V, E), |V| \geq 3$, such that every power of its adjacency matrix contains zeroes.</p>
<p>I know that such a graph will be a path, and its adjacency matrix for even and odd powers would look like this (let's say for $|V| = 3$):</p>
<p>$M=
\left[ {\begin{array}{ccccc}
0 & 1 & 0\\
1 & 0 & 1\\
0 & 1 & 0\\
\end{array} } \right]
$</p>
<p>$
M*M=
\left[ {\begin{array}{ccccc}
1 & 0 & 1\\
0 & 2 & 0\\
1 & 0 & 1\\
\end{array} } \right]
$</p>
<p>$
M*M*M=
\left[ {\begin{array}{ccccc}
0 & 2 & 0\\
2 & 0 & 2\\
0 & 2 & 0\\
\end{array} } \right]
$</p>
<p>$
M*M*M*M=
\left[ {\begin{array}{ccccc}
2 & 0 & 2\\
0 & 4 & 0\\
2 & 0 & 2\\
\end{array} } \right]
$
etc...</p>
<p>Zeroes and other numbers are changing their position for even/odd powers of matrix.</p>
<p>I don't know how to prove that there will always be a zero in any power of the adjacency matrix of a path graph with $n$ vertices. Maybe use induction (I am not sure how to proceed with induction, though)? </p>
| ml0105 | 135,298 | <p>The crux of Robert Israel's hint is that a bipartite graph has no cycle of odd length. So a walk follows a sequence of edges, where we can repeat vertices. Now suppose $G$ is bipartite and we can walk from $v_{i} \to v_{j}$, where $v_{i}, v_{j}$ are in the same partition. The proof is essentially by algorithm. We begin walking at $v_{i}$. We move along one of $v_{i}$'s edges to a vertex $v_{2}$ in the second partition. Then since $v_{2}$ isn't adjacent to any vertex in its partition, you go back to a vertex in the first partition. So following this algorithm, it must take an even number of steps to start and end in the same partition, and an odd number of steps to start and end in opposite partitions.</p>
<p>Now if there is a cycle of odd length, we can follow that cycle to get a $v_{i}, v_{j}$ walk of odd length. Because in a bipartite graph there is no such cycle, we cannot construct a walk of odd length between two vertices in the same partition.</p>
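<p>A small computational illustration of the parity argument for the path graph (a Python sketch, not a proof):</p>

```python
import numpy as np

# Path graph on n vertices: adjacency matrix powers always contain zeros,
# because A^k[i, j] counts k-step walks, and these vanish whenever k and
# |i - j| have opposite parity (the graph is bipartite).
n = 5
A = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1

powers_have_zero = []
P = np.eye(n, dtype=int)
for k in range(1, 11):
    P = P @ A
    powers_have_zero.append(bool((P == 0).any()))
```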
|
4,330,991 | <p>I do understand that if:</p>
<p><span class="math-container">$a=b \Rightarrow a^2 = b^2 $</span></p>
<p>But clearly, the graph representing these two equations won't be the same. So, (correct me if I'm wrong) this would suggest that if you square both sides of the equation, you essentially get a different set of answers (or graph). What confuses me is this question from my textbook:</p>
<blockquote>
<p>Find and graph all <span class="math-container">$z$</span> such that <span class="math-container">$|z-3| = |z+2i|$</span>.</p>
</blockquote>
<p>The solution goes as such:</p>
<p><span class="math-container">$z=a+bi$</span></p>
<p><span class="math-container">$\sqrt{(a-3)^2+b^2}$</span> = <span class="math-container">$\sqrt{a^2+(b+2)^2}$</span></p>
<p>Squaring both sides then simplifying we end up with the equation:</p>
<p><span class="math-container">$6a + 4b = 5$</span></p>
<p>It proceeds to graph the equation on the complex plane.</p>
<p>How can we claim that the graph of the equation <span class="math-container">$6a + 4b = 5$</span> represents the graph of
<span class="math-container">$\sqrt{(a-3)^2+b^2}$</span> = <span class="math-container">$\sqrt{a^2+(b+2)^2}$</span>
when squaring both sides of the equation was an intermediary step? Doesn't that create extraneous solutions which ends up being graphed? Doesn't this mean that our new graph represents more solutions then what the initial equation was intended for?</p>
<p>I hope that made sense...</p>
<p>It's quite frustrating looking back at some of these concepts you thought you understood and realizing that you didn't.</p>
<p>Anyways, thanks in advance!</p>
| Lee Mosher | 26,501 | <p>If <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are general real numbers then the implication <span class="math-container">$a=b \implies a^2 = b^2$</span> is not reversible (because, of course, <span class="math-container">$(-a)^2 = a^2$</span>).</p>
<p>However, if <span class="math-container">$a,b \in [0,\infty)$</span> then that implication is indeed reversible: <span class="math-container">$a=b \iff a^2=b^2$</span>. This is just a way of saying that the function <span class="math-container">$f : [0,+\infty) \to [0,+\infty)$</span> defined by <span class="math-container">$f(x)=x^2$</span> is one-to-one.</p>
<p>Notice that in the original equation that you are trying to solve, namely <span class="math-container">$|z-3|=|z+2i|$</span>, both sides are indeed in <span class="math-container">$[0,\infty)$</span> because the absolute value of every complex number is a real number in <span class="math-container">$[0,\infty)$</span>.</p>
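<p>A numerical spot check (a Python sketch) that squaring introduced no extraneous solutions here: points on the line $6a+4b=5$ satisfy the original equation exactly.</p>

```python
# Points on the line 6a + 4b = 5 satisfy |z - 3| = |z + 2i| exactly,
# so squaring introduced no extraneous solutions here.
checks = []
for a in [-2.0, 0.0, 0.5, 3.0]:
    b = (5 - 6 * a) / 4
    z = complex(a, b)
    checks.append(abs(abs(z - 3) - abs(z + 2j)) < 1e-9)
```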
|
4,140,956 | <p>I'm trying to determine the order of the pole in the complex expression</p>
<p><span class="math-container">$$f(z)=\frac{1}{(1-\cos(z))^2}$$</span></p>
<p>I have determined the pole to be <span class="math-container">$z=2\pi n, n\in \mathbb{Z}$</span>.</p>
<p>However, when I use the equation <span class="math-container">$\lim\limits_{z\rightarrow 2\pi n}[(z-2\pi n)^k f(z)]$</span> with <span class="math-container">$k=1$</span>, it equals <span class="math-container">$\frac{0}{1}=0$</span> or that the function is analytical in the neighborhood. I have used L'Hôpital's rule repeatedly to obtain this result. I checked my answer with Wolfram Alpha, and it's supposed to have a pole of order <span class="math-container">$4$</span> and <span class="math-container">$z=2\pi n$</span>. Where am I going wrong?</p>
| Mark Viola | 218,419 | <p><strong>HINT:</strong></p>
<p>Note that <span class="math-container">$1-\cos(z)=2\sin^2(z/2)$</span>. Thus, <span class="math-container">$\left(1-\cos(z)\right)^2=4\sin^4(z/2)$</span></p>
<p>Can you proceed now?</p>
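<p>To see numerically why the question's $k=1$ test returns $0$: since $(1-\cos z)^2=4\sin^4(z/2)\sim z^4/4$ near $z=0$, we have $z^4f(z)\to4$, so the pole has order $4$ and any $k$ below $4$ gives a zero limit (a Python sketch):</p>

```python
import math

# (1 - cos z)^2 = 4*sin(z/2)**4 ~ z**4/4 near z = 0, so z**4 * f(z) -> 4:
# the pole at z = 0 has order 4, which is why the k = 1 test gives 0.
def f(z):
    return 1.0 / (1.0 - math.cos(z))**2

vals = [z**4 * f(z) for z in (0.1, 0.01, 0.001)]
err = abs(vals[-1] - 4.0)
```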
|
156,376 | <p>I understand that when we are doing indefinite integrals on the real line, we have $\int f(x) dx = g(x) + C$, where $C$ is some constant of integration. </p>
<p>If I do an integral from $\int f(x) dx$ on $[0,x]$, then is this considered a definite integral? Can I just leave out the constant of integration now? I am skeptical of the fact that this is a definite integral, because our value $x$ is still a variable. </p>
| Hunter | 120,472 | <p>A definite integral differs from an indefinite one in that the arbitrary constant disappears: if $\int f(x)\,dx=g(x)+C$, then $\int_a^b f(x)\,dx=\big(g(b)+C\big)-\big(g(a)+C\big)=g(b)-g(a)$, so the constant cancels. Your integral over $[0,x]$ is a definite integral with a variable upper limit: $\int_0^x f(t)\,dt=g(x)-g(0)$. No constant of integration is left, because the lower limit $0$ determines it; the fact that the upper limit is a variable only means the result is a function of $x$ rather than a number. (It is customary to use a different letter, here $t$, for the integration variable.)</p>
|
1,714,278 | <p>Given the sequence $ y_{k}=2^k\tan(\frac{\pi}{2^k})$ for k=2,3,.. prove that $ y_{k} $ is recursively produced by the algorithm:
$$ y_{k+1}=2^{2k+1}\frac{\sqrt{1+(2^{-k}y_{k})^2}-1}{y_{k}} $$ for k=2,3,...</p>
<p>I used the identity $ {\tan^2({a})}=\frac{1-\cos{(2a)}}{1+\cos{(2a)}}$ but I couldn't get it right. Any help appreciated.</p>
| Jean Marie | 305,862 | <p>Let us have a heuristic proof, i.e., try to understand how the recursive formula has been found, in order that, at the end, we can say "I understand how they have had the idea (and how simple it was)".</p>
<p>The formula that is to be established deals with tangents; thus, let us stay with tangents by centering our computations on the classical formula connecting the tangent of an angle to the tangent of the corresponding half-angle (<a href="https://en.wikipedia.org/wiki/Tangent_half-angle_formula" rel="nofollow">https://en.wikipedia.org/wiki/Tangent_half-angle_formula</a>):</p>
<p>$$T=\dfrac{2t}{1-t^2} \ \ \text{with} \ \ T:=\tan(a) \ \text{and} \ t:=\tan(a/2) \ \ \ (1)$$</p>
<p>Viewing this relationship as a quadratic equation in $t$, i.e.,</p>
<p>$$t^2T+2t-T=0 \ \ \ (2)$$</p>
<p>then expressing that $t$ is its unique positive root, we get the inverse relationship:</p>
<p>$$t=\dfrac{-1+\sqrt{1+T^2}}{T} \ \ \ (3)$$</p>
<p>It suffices now to set $t=\dfrac{y_{k+1}}{2^{k+1}}=\tan\dfrac{\pi}{2^{k+1}}$ and $T=\dfrac{y_{k}}{2^{k}}=\tan\dfrac{\pi}{2^{k}}$ (by definition of $y_k$) in (3) and the recursion formula is proven.</p>
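<p>A numerical check of the recursion against the closed form $y_k=2^k\tan(\pi/2^k)$ (a Python sketch; note that the difference $\sqrt{1+(2^{-k}y_k)^2}-1$ cancels badly in floating point for large $k$, so the iteration is kept short here):</p>

```python
import math

# Check the recursion against the closed form y_k = 2^k * tan(pi / 2^k).
# (sqrt(1 + t^2) - 1 loses precision for tiny t, hence the short range.)
def y_exact(k):
    return 2**k * math.tan(math.pi / 2**k)

y = y_exact(2)                       # y_2 = 4*tan(pi/4) = 4
max_err = 0.0
for k in range(2, 12):
    y = 2**(2*k + 1) * (math.sqrt(1 + (2.0**-k * y)**2) - 1) / y
    max_err = max(max_err, abs(y - y_exact(k + 1)))
```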
|
335,651 | <p>I'm having trouble proving $$\left(\frac{\sin(\frac{n\theta}{2})}{\sin(\frac{\theta}{2})}\right)^2=\left|\sum_{k=1}^{|n|}e^{ik\theta}\right|^2$$ where $n\in\mathbb{Z}$ and $\theta\in\mathbb{R}$. Can anyone suggest a hint?</p>
| Community | -1 | <p>Try this:
we know Euler's formula, $e^{ix} = \cos x + i\sin x$ (<a href="http://en.wikipedia.org/wiki/Euler%27s_formula" rel="nofollow">http://en.wikipedia.org/wiki/Euler%27s_formula</a>).</p>
<p>Link this to the summation of sines in arithmetic progression. Try proving this!
From Wikipedia:
sum of sines and cosines with arguments in arithmetic progression: if $\alpha\ne0$, then
\begin{align} & \sin{\varphi} + \sin{(\varphi + \alpha)} + \sin{(\varphi + 2\alpha)} + \cdots {} \\[8pt] & {} \qquad\qquad \cdots + \sin{(\varphi + n\alpha)} = \frac{\sin{\left(\frac{(n+1) \alpha}{2}\right)} \cdot \sin{(\varphi + \frac{n \alpha}{2})}}{\sin{\frac{\alpha}{2}}} \quad\hbox{and}\\[10pt] & \cos{\varphi} + \cos{(\varphi + \alpha)} + \cos{(\varphi + 2\alpha)} + \cdots {} \\[8pt] & {} \qquad\qquad \cdots + \cos{(\varphi + n\alpha)} = \frac{\sin{\left(\frac{(n+1) \alpha}{2}\right)} \cdot \cos{(\varphi + \frac{n \alpha}{2})}}{\sin{\frac{\alpha}{2}}}. \end{align} </p>
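<p>A direct numerical verification of the identity in question (a Python sketch, not a proof):</p>

```python
import cmath
import math

# Direct check of |sum_{k=1}^n e^{ik*theta}|^2 = (sin(n*theta/2)/sin(theta/2))^2.
def closed(n, theta):
    return (math.sin(n * theta / 2) / math.sin(theta / 2))**2

def direct(n, theta):
    return abs(sum(cmath.exp(1j * k * theta) for k in range(1, n + 1)))**2

max_err = max(abs(closed(n, t) - direct(n, t))
              for n in (1, 2, 5, 8)
              for t in (0.3, 1.0, 2.5))
```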
|
1,073,628 | <p>I am trying to find generating functions which will give me a power logarithm. </p>
<p>I am trying to find generating sums in the form</p>
<p>$$\sum_{n=1}^{\infty} a_n\,x^n = -\frac{\log^2(1-x)}{1-x}$$</p>
<p>or </p>
<p>$$\sum_{n=1}^{\infty} a_n\,x^n = \frac{\log^2(x)}{x}.$$</p>
<p>Something, which will return $\log^3$ in the end. </p>
<p>Help is required! </p>
<p>Thanks</p>
| Jack D'Aurizio | 44,121 | <p>We have:
$$-\log(1-x)=\sum_{n\geq 1}\frac{x^n}{n}$$
for any $x$ such that $|x|<1$, hence:
$$-\frac{\log(1-x)}{1-x}=\sum_{n\geq 1}H_n x^n.$$
and since $\frac{d}{dx}\log^2(1-x) = -2\frac{\log(1-x)}{1-x}$ we have:
$$\log^2(1-x) = 2\sum_{n\geq 1}\frac{H_n}{n+1}x^{n+1}\tag{1}.$$
Since, by partial summation:
$$\sum_{n=1}^{N}\frac{H_n}{n}= H_N^2-\sum_{n=1}^{N-1}\frac{H_n}{n+1} = H_N^2-\sum_{n=1}^{N}\frac{H_n}{n}+\sum_{n=1}^{N}\frac{1}{n^2}$$
we have:
$$ \sum_{n=1}^{N}\frac{H_n}{n}=\frac{H_N^2+H_N^{(2)}}{2}, \quad \sum_{n=1}^{N}\frac{H_{n-1}}{n}=\frac{H_N^2-H_N^{(2)}}{2}$$
so:</p>
<blockquote>
<p>$$\frac{\log^2(1-x)}{1-x}=\sum_{n\geq 1}(H_n^2-H_n^{(2)})\,x^{n}\tag{2}$$</p>
</blockquote>
<p>and:</p>
<blockquote>
<p>$$\log^3(1-x)=-3\sum_{n\geq 1}\frac{H_n^2-H_n^{(2)}}{n+1}\,x^{n+1}\tag{3}$$</p>
</blockquote>
<p>for any $x$ such that $|x|<1$.</p>
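<p>A partial-sum check of identity $(2)$ at $x=\frac12$ (a Python sketch):</p>

```python
import math

# Partial-sum check of identity (2): log(1-x)^2/(1-x) = sum_n (H_n^2 - H_n^(2)) x^n.
x = 0.5
H = H2 = series = 0.0
for n in range(1, 201):
    H += 1.0 / n        # harmonic number H_n
    H2 += 1.0 / n**2    # generalized harmonic number H_n^(2)
    series += (H**2 - H2) * x**n

err = abs(series - math.log(1 - x)**2 / (1 - x))
```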
|
2,300,382 | <p>I cannot think of a non-constructible algebraic number of degree $4$ over $\Bbb Q$ so far. I wish if I can find such an example. Could some one tell me some such numbers with justification? Also I would like to know the track of working out such an example. Any help or reference would be appreciate. Thanks in advance!</p>
| sharding4 | 254,075 | <p>Take an $S_4$ extension which is the splitting field of a quartic polynomial, say $f(x)=x^4-4x+2$ with splitting field $K$. If the roots of $f(x)$ were constructible, then all the elements of $K$ would be constructible. For $G$ a Sylow $2$-subgroup of $S_4$, the fixed field $K^{G}$ has odd degree $3$ over $\Bbb{Q}$, so the elements of $K^{G}\setminus\Bbb{Q}$ can't be constructible (constructible numbers have degree a power of $2$). From Milne, Remark 3.26, <em>Fields and Galois Theory</em>.</p>
|
789,407 | <p>If the roots of the equation $$ax^2-bx+c=0$$ lie in the interval $(0,1)$, find the minimum possible value of $abc$. </p>
<p><strong>Edit:</strong> I forgot to mention in the question that $a$, $b$, and $c$ are natural numbers. Sorry for the inconvenience.<br>
<strong>Edit 2:</strong> As Hagen von Eitzen said about the double roots not allowed, I forgot to mention that too. Extremely sorry :(</p>
<blockquote>
<p>I tried to use $D > 0$, where $D$ is the discriminant but I can't further analyze in terms of the coefficients. Thanks in advance!</p>
</blockquote>
| Hagen von Eitzen | 39,174 | <p>The discriminant $D=b^2-4ac$ must be positive to ensure two distinct real roots.
(If double root is not forbidden, we have $4x^2-4x+1$ with double root at $\frac12$ and $abc=16$).
Next, we must have $f(1)>0$, i.e. $$a+c>b.$$
For naturals $a,c$ we also have $ a+c\le 1+ac$ and conclude $$\tag1b\le ac.$$
If $b\le 4$ we obtain $b\le ac<\frac14b^2\le b$, contradiction. (NB: If we relax the condition that the roots be distinct, the $<$ becomes a $\le$ and instead of a contradiction we find $b=ac=4$, hence $abc=16$).
Hence $b\ge 5$ and by $(1)$
$$abc\ge b^2\ge 25.$$
The minimum is indeed attained as can be seen by making all inequalities sharp, which gives: Either $(a,b,c)=(5,5,1)$ or $(a,b,c)=(1,5,5)$. The first of these indeed gives two roots in $(0,1)$.</p>
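<p>A brute-force confirmation over small natural $a,b,c$ (a Python sketch; the search cutoff is justified by the bound $abc\ge b^2\ge25$, which rules out larger minimizers):</p>

```python
# Brute-force search over natural a, b, c <= 29 for ax^2 - bx + c with two
# distinct real roots in (0,1); the cutoff is safe since abc >= b^2 >= 25.
best = None
for a in range(1, 30):
    for b in range(1, 30):
        for c in range(1, 30):
            D = b * b - 4 * a * c
            if D <= 0:
                continue                      # need distinct real roots
            r1 = (b - D**0.5) / (2 * a)
            r2 = (b + D**0.5) / (2 * a)
            if 0 < r1 and r2 < 1:             # both roots in the open interval
                if best is None or a * b * c < best[0]:
                    best = (a * b * c, a, b, c)
```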
|
3,386,371 | <p>Find the explicit form of
<span class="math-container">$$
\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n(n+2)}x^{n-1}.
$$</span></p>
<p>Let <span class="math-container">$S(x)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n(n+2)}x^{n-1}$</span>. It has radius of convergence <span class="math-container">$1$</span>.</p>
<p>Let <span class="math-container">$S_1(x)=xS(x)$</span>. Then <span class="math-container">$S_1'(x)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{(n+2)}x^{n-1}$</span> for <span class="math-container">$|x|<1$</span>.</p>
<p>Let <span class="math-container">$S_2(x)=x^3S_1'(x)$</span>. Then <span class="math-container">$S_2'(x)=\sum_{n=1}^{\infty}(-1)^{n-1}x^{n+1}=\frac{x^2}{1+x}$</span>.</p>
<p>By integration, I obtained <span class="math-container">$S_1'(x)=\frac{1}{2x}-\frac{1}{x^2}+\frac{\ln (x+1)}{x^3}$</span>. Then how to obtain <span class="math-container">$S(x)$</span>? Or there is other method to do this problem?</p>
| Who am I | 687,026 | <p>Hint :</p>
<p>For large factorials you can use Stirling's formula
<span class="math-container">$$n! \approx (\sqrt{2\pi}) n^{n+0.5}e^{-n}$$</span></p>
<p>But if you want to be more accurate you can use Ramanujan's factorial formula</p>
<p><span class="math-container">$$n! \approx \sqrt{\pi}\left(\frac{n}{e}\right)^n \left(8n^3+4n^2+n+\frac{1}{30}\right)^{\frac{1}{6}}$$</span></p>
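<p>A quick accuracy comparison at $n=20$ (a Python sketch; Python's integer factorial is exact, so relative errors are easy to measure):</p>

```python
import math

# Accuracy comparison of the two approximations at n = 20.
n = 20
exact = math.factorial(n)

stirling = math.sqrt(2 * math.pi) * n**(n + 0.5) * math.exp(-n)
ramanujan = (math.sqrt(math.pi) * (n / math.e)**n
             * (8 * n**3 + 4 * n**2 + n + 1 / 30)**(1 / 6))

err_stirling = abs(stirling - exact) / exact
err_ramanujan = abs(ramanujan - exact) / exact
```

<p>Stirling's formula is off by roughly $\frac1{12n}$ here, while Ramanujan's refinement is several orders of magnitude closer.</p>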
|
2,120,539 | <p>Find the points of local maximum and minimun of the function:
$$f(x)=\sin^{-1}(2x\sqrt{1-x^2})~~~~;~~x\in (-1,1)$$
I know
$$f'(x)=-\frac{2}{\sqrt{1-x^2}}$$</p>
<p>How to find the local maximum and minimum? I have drawn the fig and seen the points of local maximum and minimum. But how to find then analytically?
<a href="https://i.stack.imgur.com/Z2fN6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z2fN6.jpg" alt="enter image description here"></a></p>
| lab bhattacharjee | 33,337 | <p>As suggested in the comment using my answer here in <a href="https://math.stackexchange.com/questions/1764431/solving-arcsin-left2x-sqrt1-x2-right-2-arcsin-x">Solving $\arcsin\left(2x\sqrt{1-x^2}\right) = 2 \arcsin x$</a>,</p>
<p>$$
\arcsin(2x\sqrt{1-x^2}) =\begin{cases}2\arcsin x
\;\;;-\dfrac1{\sqrt2}\le x\le \dfrac1{\sqrt2}\iff-\dfrac\pi4\le\arcsin x\le\dfrac\pi4\\
\pi - 2\arcsin x\;\;; \dfrac1{\sqrt2}< x\le 1\iff\dfrac\pi4<\arcsin x\le\dfrac\pi2\\
-\pi -2\arcsin x\;\;;-1< x \le-\dfrac1{\sqrt2}\iff-\dfrac\pi2<\arcsin x\le-\dfrac\pi4\end{cases}
$$</p>
<p>Can you take it from here?</p>
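<p>A numerical spot check of the three branches (a Python sketch):</p>

```python
import math

# Spot check of the three branches of arcsin(2x*sqrt(1-x^2)).
def lhs(x):
    return math.asin(2 * x * math.sqrt(1 - x * x))

def rhs(x):
    s = math.asin(x)
    if abs(x) <= 1 / math.sqrt(2):
        return 2 * s
    return math.pi - 2 * s if x > 0 else -math.pi - 2 * s

max_err = max(abs(lhs(x) - rhs(x))
              for x in (-0.95, -0.8, -0.5, 0.0, 0.5, 0.8, 0.95))
```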
|
2,174,340 | <p>Given the function $$F(X,Y,Z) = \alpha^TXYZ$$ in which $X, Y, Z $ are matrices of size $n \times n$ and $\alpha$ is a vector of size $n \times 1$, how to compute the derivative of $F$ with respect to $Y$?</p>
<p>Actually I found some related questions but did not help.</p>
<p>Edit: if the function is of the form: $F(X,Y,Z) = \alpha^TXYZ\beta$, then based on the Matrix Cookbook, derivative is : $f' = (\alpha^T X)^T (Z\beta)^T$, but if there is no $\beta$, then the dimensions do not match.</p>
<p>Thank you,</p>
| Rodrigo de Azevedo | 339,790 | <p>Let </p>
<p>$$\rm f (X, Y, Z) := a^{\top} X Y Z$$</p>
<p>Hence,</p>
<p>$$\frac{\mathrm f (\mathrm X, \mathrm Y + h \mathrm V, \mathrm Z) - \mathrm f (\mathrm X, \mathrm Y, \mathrm Z)}{h} = \rm a^{\top} X V Z$$</p>
<p>Vectorizing,</p>
<p>$$\rm \mbox{vec} (a^{\top} X V Z) = \left( \color{blue}{Z^{\top} \otimes a^{\top} X} \right) \mbox{vec} (V)$$</p>
<p>where $\rm Z^{\top} \otimes a^{\top} X$ is the Jacobian matrix.</p>
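<p>A numerical check of the vectorization identity with column-major $\mathrm{vec}$ (a NumPy sketch; the sizes are arbitrary):</p>

```python
import numpy as np

# Check vec(a^T X V Z) = (Z^T kron a^T X) vec(V), with column-major vec.
rng = np.random.default_rng(0)
n = 4
a = rng.normal(size=(n, 1))
X, V, Z = (rng.normal(size=(n, n)) for _ in range(3))

lhs = (a.T @ X @ V @ Z).flatten(order="F")
J = np.kron(Z.T, a.T @ X)               # the Jacobian matrix from the answer
rhs = J @ V.flatten(order="F")

agree = bool(np.allclose(lhs, rhs))
```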
|
849,433 | <blockquote>
<p>We have subspaces in $\mathbb R^4: $ </p>
<p>$w_1= \operatorname{sp} \left\{
\begin{pmatrix} 1\\ 1 \\ 0 \\1 \end{pmatrix} ,
\begin{pmatrix} 1\\ 0 \\ 2 \\0 \end{pmatrix},
\begin{pmatrix} 0\\ 2 \\ 1 \\1 \end{pmatrix} \right\}$,
$w_2= \operatorname{sp} \left\{
\begin{pmatrix} 1\\ 1 \\ 1 \\1 \end{pmatrix} ,
\begin{pmatrix} 3\\ 2 \\ 3 \\2 \end{pmatrix},
\begin{pmatrix} 2\\ -1 \\ 2 \\0 \end{pmatrix} \right\}$</p>
<p>Find the basis of $w_1+w_2$ and the basis of $w_1\cap w_2$.</p>
</blockquote>
<p>So in order to find the basis for $w_1+w_2$, I need to make a $4\times 6$ matrix of all the six vectors, bring it to RREF and see which vector is LD and the basis would be the LI vectors. </p>
<p>But the intersection of these 2 spans seems empty, or are they the LD vectors that I should've found before ?</p>
<p>In general, how is the intersection of subspaces defined ?</p>
| whosleon | 117,747 | <p><strong><em>Hint</em></strong>: the intersection of these two spans is NOT empty. What you need to do is find a new spanning set for $w_2$ that contains some of the vectors from the spanning set for $w_1$. The common vectors will span the intersection. Now that you have a basis for $w_1\cap w_2$, you can extend it to a basis of $w_1+w_2$ by adding on the vectors that are in $w_1$ or $w_2$ but not in $w_1\cap w_2$.</p>
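<p>The dimensions involved can be confirmed numerically via $\dim(w_1\cap w_2)=\dim w_1+\dim w_2-\dim(w_1+w_2)$ (a NumPy rank computation; a sketch of the dimension count, not a basis construction):</p>

```python
import numpy as np

# dim(w1 ∩ w2) = dim w1 + dim w2 - dim(w1 + w2), each computed as a rank.
w1 = np.array([[1, 1, 0, 1], [1, 0, 2, 0], [0, 2, 1, 1]], dtype=float)
w2 = np.array([[1, 1, 1, 1], [3, 2, 3, 2], [2, -1, 2, 0]], dtype=float)

d1 = int(np.linalg.matrix_rank(w1))
d2 = int(np.linalg.matrix_rank(w2))
d_sum = int(np.linalg.matrix_rank(np.vstack([w1, w2])))
d_int = d1 + d2 - d_sum
```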
|
1,661,244 | <p>If $R$ is a commutative ring with identity and $K$ is an ideal of it, let $R'=R/K$, let $I$ be an ideal of $R$ satisfying $K\subseteq I$, and let $I'$ be the corresponding ideal of $R'$ (we know that the correspondence theorem gives a one-to-one correspondence between the set of ideals of $R$ containing $K$ and the set of ideals of $R'$).
Can you give me some examples where $I'$ is prime but $I$ is not?</p>
| martini | 15,379 | <p>Note that we have by the third isomorphism theorem
$$ R/I \cong R/K\bigm/I' $$
hence $R/I$ is a domain iff $(R/K)/I'$ is, therefore $I$ is prime iff $I'$ is.</p>
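<p>A concrete illustration (my own example, with $R=\mathbb Z$ and $K=12\mathbb Z$, so $R'=\mathbb Z/12$): the ideals $d\mathbb Z\supseteq 12\mathbb Z$ correspond to the divisors $d$ of $12$, and $d\mathbb Z$ is prime exactly when the quotient $\mathbb Z/d$ is a domain, on both sides of the correspondence.</p>

```python
def quotient_is_domain(m):
    """Is Z/m an integral domain, i.e. free of zero divisors (for m >= 2)?"""
    return all((a * b) % m != 0 for a in range(1, m) for b in range(1, m))

# Proper ideals of Z containing 12Z are dZ for the divisors d > 1 of 12.
for d in [2, 3, 4, 6, 12]:
    # dZ is prime in Z  <=>  Z/d is a domain  <=>  d is a prime number,
    # matching primality of the image ideal in Z/12 by the isomorphism above.
    assert quotient_is_domain(d) == (d in (2, 3))
```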
|
114,147 | <p>I have two lists let say</p>
<pre><code>listF = {{7, 2}, {2, 6}, {8, 1}, {1, 7}, {11, 8}, {6, 11}};
</code></pre>
<p>and </p>
<pre><code>newD = {{{2, 7}, {7, 9}, {9, 2}}, {{7, 2}, {2, 6}, {6, 7}}, {{7,
2}, {2, 6}, {6, 7}}, {{11, 6}, {6, 2}, {2, 11}}, {{8, 1}, {1,
7}, {7, 8}}, {{11, 1}, {1, 8}, {8, 11}}, {{1, 5}, {5, 7}, {7,
1}}, {{8, 1}, {1, 7}, {7, 8}}, {{11, 1}, {1, 8}, {8, 11}}, {{11,
8}, {8, 6}, {6, 11}}, {{11, 6}, {6, 2}, {2, 11}}, {{11, 8}, {8,
6}, {6, 11}}};
</code></pre>
<p>Question: How can I delete the parts of <code>listF</code> from <code>newD</code>, regardless of the order of elements in the sub-lists of <code>listF</code>? For example, I need to delete parts from <code>newD</code> that are in the form <code>{2,7}</code> or <code>{7,2}</code>. I would prefer to use the <code>Alternatives</code> command, but any solution would be appreciated.</p>
| Sumit | 8,070 | <pre><code>Table[Select[newD[[i]],
Complement[{#}, listF] != {} && Complement[{Reverse@#}, listF] != {} &]
, {i, Length[newD]}]
</code></pre>
<blockquote>
<p>{{{7, 9}, {9, 2}}, {{6, 7}}, {{6, 7}}, {{2, 11}}, {{7, 8}}, {{11,
1}}, {{1, 5}, {5, 7}}, {{7, 8}}, {{11, 1}}, {{8, 6}}, {{2,
11}}, {{8, 6}}}</p>
</blockquote>
<p><strong>Remove only one element</strong></p>
<pre><code>Table[Select[newD[[i]], Complement[{#}, listF] == {} ||
    Complement[{Reverse@#}, listF] == {} &], {i, Length[newD]}]
del = %[[All, {1}]] (*choose the first repetition; % is the list above*)
Complement[newD[[#]], del[[#]]] & /@ Range[Length[newD]]
</code></pre>
<blockquote>
<p>{{{7, 9}, {9, 2}}, {{2, 6}, {6, 7}}, {{2, 6}, {6, 7}}, {{2, 11}, {6,
2}}, {{1, 7}, {7, 8}}, {{8, 11}, {11, 1}}, {{1, 5}, {5, 7}}, {{1,
7}, {7, 8}}, {{8, 11}, {11, 1}}, {{6, 11}, {8, 6}}, {{2, 11}, {6,
2}}, {{6, 11}, {8, 6}}}</p>
</blockquote>
|
114,147 | <p>I have two lists let say</p>
<pre><code>listF = {{7, 2}, {2, 6}, {8, 1}, {1, 7}, {11, 8}, {6, 11}};
</code></pre>
<p>and </p>
<pre><code>newD = {{{2, 7}, {7, 9}, {9, 2}}, {{7, 2}, {2, 6}, {6, 7}}, {{7,
2}, {2, 6}, {6, 7}}, {{11, 6}, {6, 2}, {2, 11}}, {{8, 1}, {1,
7}, {7, 8}}, {{11, 1}, {1, 8}, {8, 11}}, {{1, 5}, {5, 7}, {7,
1}}, {{8, 1}, {1, 7}, {7, 8}}, {{11, 1}, {1, 8}, {8, 11}}, {{11,
8}, {8, 6}, {6, 11}}, {{11, 6}, {6, 2}, {2, 11}}, {{11, 8}, {8,
6}, {6, 11}}};
</code></pre>
<p>Question: How can I delete the parts of <code>listF</code> from <code>newD</code>, regardless of the order of elements in the sub-lists of <code>listF</code>? For example, I need to delete parts from <code>newD</code> that are in the form <code>{2,7}</code> or <code>{7,2}</code>. I would prefer to use the <code>Alternatives</code> command, but any solution would be appreciated.</p>
| kglr | 125 | <pre><code>foo[x_] := Sequence[x, Reverse@x];
DeleteCases[newD, Alternatives @@ (foo /@ listF), 2]
</code></pre>
<blockquote>
<p>{{{7, 9}, {9, 2}}, {{6, 7}}, {{6, 7}}, {{2, 11}}, {{7, 8}}, {{11, 1}},<br>
{{1, 5}, {5, 7}}, {{7, 8}}, {{11, 1}}, {{8, 6}}, {{2, 11}}, {{8, 6}}}</p>
</blockquote>
<pre><code>fun = ## & @@ (## &[{##}, {#2, #}] & @@@ #) &;
DeleteCases[newD, Alternatives@fun@listF, 2]
</code></pre>
<blockquote>
<p>{{{7, 9}, {9, 2}}, {{6, 7}}, {{6, 7}}, {{2, 11}}, {{7, 8}}, {{11, 1}},<br>
{{1, 5}, {5, 7}}, {{7, 8}}, {{11, 1}}, {{8, 6}}, {{2, 11}}, {{8, 6}}}</p>
</blockquote>
<p><strong>Update:</strong></p>
<blockquote>
<p>Is there any way that I can delete only one elements from each sub-list in <code>newD</code></p>
</blockquote>
<p>Yes... use the fourth argument of <code>DeleteCases</code>:</p>
<pre><code>DeleteCases[#, Alternatives@fun@listF, 2, 1] & /@ newD
</code></pre>
<blockquote>
<p>{{{7, 9}, {9, 2}}, {{2, 6}, {6, 7}}, {{2, 6}, {6, 7}}, {{6, 2}, {2, 11}},<br>
{{1, 7}, {7, 8}}, {{11, 1}, {8, 11}}, {{1, 5}, {5, 7}}, {{1, 7}, {7, 8}},<br>
{{11, 1}, {8, 11}}, {{8, 6}, {6, 11}}, {{6, 2}, {2, 11}}, {{8, 6}, {6, 11}}}</p>
</blockquote>
|
587,077 | <p>Given any prime $p$. Prove that $(p-1)! \equiv -1 \pmod p$.</p>
<p>How to prove this?</p>
| Asinomás | 33,907 | <p>This is known as Wilson's theorem (well, not quite: Wilson's theorem is an "if and only if", while this is only the "if" direction):</p>
<p><a href="https://en.wikipedia.org/wiki/Wilson%27s_theorem" rel="nofollow">https://en.wikipedia.org/wiki/Wilson%27s_theorem</a></p>
<p>The idea is that $(p-1)!$ is the product of one element from each nonzero residue class $\bmod p$; moreover, since every positive integer less than $p$ is relatively prime to $p$, each of them has a unique inverse $\bmod p$.</p>
<p>Except for $p-1$ and $1$, each of these has an inverse different from itself, so the factors other than $1$ and $p-1$ cancel out in pairs, and $1\cdot(p-1)\equiv -1 \bmod p$ as desired.</p>
<p>Note: to prove that only $1$ and $-1$ are their own inverses, observe that</p>
<p>$a\equiv a^{-1}\rightarrow a^2\equiv 1 \bmod p \rightarrow p|a^2-1\rightarrow p|(a+1)(a-1)\rightarrow p|a+1 $ or $ p|a-1\rightarrow a\equiv1$ or $a\equiv -1 \bmod p$</p>
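<p>A quick computational check (my addition) of both the theorem and the pairing argument, in Python:</p>

```python
import math

for p in [3, 5, 7, 11, 13, 101]:
    # Wilson: (p-1)! ≡ -1 (mod p)
    assert math.factorial(p - 1) % p == p - 1
    # Pairing argument: only 1 and p-1 are their own inverses mod p
    assert [a for a in range(1, p) if (a * a) % p == 1] == [1, p - 1]
```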
|
315,551 | <p>So I'm going over my practice midterms (which all seem to have solutions like this one), </p>
<p><img src="https://i.stack.imgur.com/fC8Gu.png" alt="Image"></p>
<p>Can anyone help clarify this for me? I understand that you multiply by the reciprocal to get to line two. But after that I'm completely lost, I don't understand how:</p>
<p>$$x^{2} + 1 - [(x + h)^{2} + 1]$$</p>
<p>can become:</p>
<p>$$(x-(x+h))(x+x+h)$$</p>
<p>and so forth, I'm sorry if this is a stupid question the solution doesn't seem to explain it very well.</p>
| Aeolian | 58,941 | <p>This is easier to see if we work backward:</p>
<p>$$\begin{align*}(x-(x+h))(x+x+h) & = (x-(x+h))(x+(x+h)) \\
& = x^2 -(x+h)^2 \\
& = x^2 + 1 -(x+h)^2 - 1\\
& = (x^2 + 1) -[(x+h)^2 + 1]
\end{align*}
$$</p>
<p>And, as others have pointed out, the trick on the first line here is to see a difference of two squares. This is fairly simple working backward like this, but you might want to manually expand all of the terms yourself, and then simplify and factor again.</p>
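<p>If it helps to convince yourself, here is a quick numeric spot-check (my addition, not part of the solution) of the factoring step:</p>

```python
# Check (x² + 1) - [(x + h)² + 1] == (x - (x + h)) * (x + x + h) numerically
for x in [-2.0, 0.5, 3.0]:
    for h in [0.1, 1.0, -0.7]:
        lhs = (x**2 + 1) - ((x + h)**2 + 1)
        rhs = (x - (x + h)) * (x + x + h)
        assert abs(lhs - rhs) < 1e-9
```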
|
1,063,352 | <p>$A$ and $B$ are sets and $\mathcal{F}$ is a family of sets. I'm trying to prove that</p>
<p>$\bigcap_{A \in \mathcal{F}}(B \cup A) \subseteq B \cup (\cap \mathcal{F})$</p>
<p>I start with "Let $x$ be arbitrary and let $x \in \bigcap_{A \in \mathcal{F}}(B \cup A)$, which means that $\forall C \in \mathcal{F}(x \in B \cup C)$. So, I need some set to plug in for $C$.</p>
<p>Looking at the goal, I need to prove that $x \in B \cup (\cap \mathcal{F})$, which is $x \in B \lor \forall C \in \mathcal{F}(x \in C)$. But I'm stuck here too because I need to break up the givens into cases in order to break up the goals into cases. I think.</p>
| Raymond Manzoni | 21,783 | <p>Your series is a <a href="http://mathworld.wolfram.com/LambertSeries.html" rel="nofollow">Lambert series</a> and in the case of constant coefficients $a_n$ it may be rewritten as a <a href="http://en.wikipedia.org/wiki/Theta_function" rel="nofollow">classical Jacobi theta function</a> (at $z=0$): $$\theta_3(z,q):=\sum_{n=-\infty}^\infty q^{n^2}e^{2inz}$$
using the relation $(8)$ from the initial MathWorld link with $\;q=\dfrac 15$ :
$$\sum_{n=1}^\infty \dfrac {q^n}{1+q^{2n}}=\frac{\theta_3(0,q)^2-1}4$$
This is in fact an identity from Jacobi (a proof is given in chap.$9$ of Borwein&Borwein's excellent book "Pi and the AGM").</p>
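<p>The identity is also easy to check numerically (my addition; here $q=\frac15$ as above):</p>

```python
# Check Σ_{n≥1} q^n/(1+q^{2n}) == (θ₃(0,q)² - 1)/4 at q = 1/5
q = 0.2
lhs = sum(q**n / (1 + q**(2 * n)) for n in range(1, 200))
theta3 = 1 + 2 * sum(q**(n * n) for n in range(1, 50))  # θ₃(0,q) = Σ q^{n²}
rhs = (theta3**2 - 1) / 4
assert abs(lhs - rhs) < 1e-12
```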
<p>It may also be written as an elliptic function, as provided in relation $(2.7)$ of this paper from Stephen Milne <a href="http://arXiv.org/abs/math/0008068" rel="nofollow">"Infinite families of exact sums of squares formulas, Jacobi elliptic functions, continued fractions, and Schur functions"</a>. </p>
<p>Things may become more complicated instead of more simple but not less interesting !</p>
|
3,661,474 | <p><span class="math-container">$ h:R^{N+1} \to [0 , \infty)$</span> , <span class="math-container">$ h $</span> is measurable</p>
<p><span class="math-container">$ g:R^{N+1} \to [0 , \infty)$</span> , <span class="math-container">$ g $</span> is measurable</p>
<p><span class="math-container">$x,y \in R^N$</span></p>
<p><span class="math-container">$$h (x, x^2) h (y, y^2)= g (x+y, x^2+y^2)$$</span></p>
<p>Where <span class="math-container">$x^2$</span> is the dot product <span class="math-container">$x.x=|x|^2$</span></p>
<p>(1) Can it be shown that <span class="math-container">$h(0,0) \neq 0$</span></p>
<p>(2). what is the solution of <strong>1</strong> by only assuming <span class="math-container">$h$</span> is measurable ?</p>
<p><strong>Comment</strong> :</p>
<p>I was only able to show <span class="math-container">$h(x,x^2)=Ae^{b.x+cx^2}$</span> under 2 conditions:</p>
<ol>
<li><span class="math-container">$h$</span> is finite and measurable</li>
<li><span class="math-container">$h(x,x^2) >0 \text{ whenever $|x-a|^2<r^2$}$</span></li>
</ol>
<p>where ,<span class="math-container">$b,a \in R^N, r>0,c \in R$</span> and <span class="math-container">$A=e^{h(0,0)}$</span></p>
<p>clearly <span class="math-container">$h(a,a^2) > 0$</span></p>
<p><span class="math-container">$h(x+a,|x+a|^2)h(a,a^2)=g(x+2a,|x+a|^2+a^2)$</span></p>
<p><span class="math-container">$h(y+a,|y+a|^2)h(a,a^2)=g(y+2a,|y+a|^2+a^2)$</span></p>
<p><span class="math-container">$h(x+a,|x+a|^2)h(y+a,|y+a|^2)h^2(a,a^2)=g(x+2a,|x+a|^2+a^2)g(y+2a,|y+a|^2+a^2)$</span></p>
<p><span class="math-container">$h^2(a,a^2)g(x+y+2a,|x+a|^2+|y+a|^2)=g(x+2a,|x+a|^2+a^2)g(y+2a,|y+a|^2+a^2)$</span></p>
<p>let <span class="math-container">$f(x,x^2)=\frac{g(x+2a,|x+a|^2+a^2)}{h(a,a^2)}=\frac{g(x+2a,x^2+2x\cdot a + 2a^2)}{h(a,a^2)}$</span></p>
<p>clearly <span class="math-container">$f(0,0)=\frac{g(2a,2a^2)}{h(a,a^2)}=h(a,a^2)>0$</span></p>
<p>Also clearly <span class="math-container">$f(x,x^2)f(y,y^2)=g(x+y+2a,x^2+y^2+2a^2+2a\cdot (x+y))$</span></p>
<p>Let <span class="math-container">$G(x+y,x^2+y^2)=g(x+y+2a,x^2+y^2+2a^2+2a\cdot (x+y))$</span></p>
<p><span class="math-container">$$\text{ Therefore $f(x,x^2)f(y,y^2)=G(x+y,x^2+y^2)$ } $$</span></p>
<p>Note <span class="math-container">$$f(x,x^2)=h(x+a,|x+a|^2)$$</span></p>
<p><span class="math-container">$$f (x, x^2) f (y, y^2)= G (x+y, x^2+y^2) \tag{1}$$</span></p>
<p>plugging <span class="math-container">$y=0$</span> in <strong>1</strong>: <span class="math-container">$f(x,x^2)f(0,0)=G(x,x^2)$</span>, also <span class="math-container">$f(y,y^2)f(0,0)=G(y,y^2)$</span></p>
<p>and multiply the two equations and use <strong>1</strong> to obtain <strong>2</strong></p>
<p><span class="math-container">$$f^2(0,0)G(x+y,x^2+y^2)=G(x,x^2)G(y,y^2) \tag{2}$$</span></p>
<p>use <strong>1</strong> to obtain this two equations</p>
<p><span class="math-container">$$G(0,2x^2)=f(x,x^2)f(-x,x^2) \tag{3}$$</span></p>
<p><span class="math-container">$$G(0,2y^2)=f(y,y^2)f(-y,y^2) \tag{4}$$</span></p>
<p>for <span class="math-container">$x.y=0 $</span> and <span class="math-container">$x^2=y^2$</span>,
<span class="math-container">$$G(0,2x^2)G(0,2y^2)=f(x,x^2)f(-y,y^2)f(y,y^2)f(-x,x^2)=G(x-y,x^2+y^2)G(y-x,x^2+y^2)$$</span></p>
<p><span class="math-container">$$G(x-y,x^2+y^2)G(y-x,x^2+y^2)=f^2(0,0)f(x-y,x^2+y^2)f(y-x,x^2+y^2)=f^2(0,0)G(0,2x^2+2y^2)$$</span></p>
<p>So <span class="math-container">$G(0,2y^2)G(0,2x^2)=f^2(0,0)G(0,2x^2+2y^2) \tag{5}$</span></p>
<p>Therefore plugging <span class="math-container">$x^2=y^2$</span> into above to get :<span class="math-container">$$G^2(0,2x^2)=f^2(0,0)G(0,4x^2)\tag{6}$$</span></p>
<p>Applying 6 recursively,</p>
<p><span class="math-container">$$G^{2^{n+1}}(0,\frac{y^2}{2^{n+1}})=f^{2n+2}(0,0)G(0,y^2) \text{ for every $n \in N$} \tag{7}$$</span></p>
<p>Under condition 2, <span class="math-container">$f(0,0)\neq 0$</span>; it can then be shown that <span class="math-container">$f>0$</span> everywhere, as done below:</p>
<p>from <strong>1</strong> , <span class="math-container">$f^2(0,0)G(x+y,x^2+y^2)=G(x,x^2)G(y,y^2)$</span></p>
<p><span class="math-container">$f^2(0,0)G(0,2x^2)=G(x,x^2)G(-x,x^2)$</span></p>
<p>from <strong>7</strong></p>
<p><span class="math-container">$$f^2(0,0)G(0,2x^2)=\frac{f^2(0,0)G^{2^{n+1}}(0,\frac{2x^2}{2^{n+1}})}{f^{2n+2}(0,0)}$$</span></p>
<p><span class="math-container">$$\frac{f^2(0,0)G^{2^{n+1}}(0,\frac{2x^2}{2^{n+1}})}{f^{2n+2}(0,0)}=G(x,x^2)G(-x,x^2)$$</span></p>
<p>By condition 2 <span class="math-container">$\lim_{n \to \infty}G(0,\frac{2x^2}{2^{n+1}})>0$</span></p>
<p>for large <span class="math-container">$n$</span> the left hand side of the above equation is greater than zero, so <span class="math-container">$G(x,x^2)>0$</span> implying <span class="math-container">$f(x,x^2)>0$</span></p>
<p><span class="math-container">$logf(x,x^2)+logf(y,y^2)=logG(x+y,x^2+y^2)$</span>, and this can easily be converted into a Cauchy functional equation</p>
<p><span class="math-container">$f_1(x,x^2)=logf(x,x^2)-logf(0,0)$</span>, so <span class="math-container">$f_1(0,0)=0$</span></p>
<p><span class="math-container">$G_1(x,x^2)=logG(x,x^2)-2logf(0,0)=logG(x,x^2)-logG(0,0)$</span>, so <span class="math-container">$G_1(0,0)=0$</span></p>
<p><span class="math-container">$f_1(x,x^2)+f_1(y,y^2)=G_1(x+y,x^2+y^2)$</span></p>
<p>plugging <span class="math-container">$y=0$</span> into above to get <span class="math-container">$f_1(x,x^2)=G_1(x,x^2)$</span></p>
<p>Now <span class="math-container">$f_1(x,x^2)+f_1(y,y^2)=f_1(x+y,x^2+y^2)$</span></p>
<p>let <span class="math-container">$n$</span> be the number of components of <span class="math-container">$x$</span>; for <span class="math-container">$i \in N$</span>, let the <span class="math-container">$i$</span>th components of <span class="math-container">$x,y$</span> be <span class="math-container">$x_i,y_i$</span> respectively</p>
<p>That is to say if <span class="math-container">$x= \langle a_1, a_2, \ldots, a_n \rangle $</span></p>
<p>then <span class="math-container">$x_i$</span> is the vector with <span class="math-container">$a_i$</span> in the <span class="math-container">$i$</span>th slot and <span class="math-container">$0$</span> elsewhere,
and so on</p>
<p>for <span class="math-container">$x.y=0$</span>, <span class="math-container">$\sum_{i=1}^n x_iy_i=0$</span></p>
<p><span class="math-container">$f_1(x,x^2)=\sum_{i=1}^nf_1(x_i,x_i^2)$</span></p>
<p><span class="math-container">$f_1(y,y^2)=\sum_{i=1}^nf_1(y_i,y_i^2)$</span></p>
<p><span class="math-container">$f_1(x+y,x^2+y^2)=\sum_{i=1}^nf_1(x_i,x_i^2)+\sum_{i=1}^nf_1(y_i,y_i^2)$</span></p>
<p><span class="math-container">$f_1(\sum_{i=1}^n(x_i+y_i),\sum_{i=1}^n(x_i^2+y_i^2))=\sum_{i=1}^nf_1(x_i+y_i,x_i^2+y_i^2)$</span></p>
<p>let <span class="math-container">$u_i=x_i+y_i,v_i=x_i^2+y_i^2$</span></p>
<p><span class="math-container">$u_i,v_i$</span> can be taken as independent variables under the condition <span class="math-container">$u_i^2 \le 2v_i$</span></p>
<p><span class="math-container">$f_1(\sum_{i=1}^nu_i,\sum_{i=1}^nv_i)=\sum_{i=1}^nf_1(u_i,v_i)$</span></p>
<p>now we can swap two variables for example <span class="math-container">$v_1$</span> and <span class="math-container">$v_2$</span></p>
<p>we have <span class="math-container">$f_1(u_1,v_1)+f_1(u_2,v_2)=f_1(u_1,v_2)+f_1(u_2,v_1)$</span> provided that that both <span class="math-container">$u_1^2\le 2v_1,u_1^2 \le 2v_2,u_2^2\le 2v_1,u_2^2 \le 2v_2,$</span></p>
<p>Now take <span class="math-container">$u_2=0$</span> and hold <span class="math-container">$v_2$</span> constant, i.e. <span class="math-container">$v_2=v_o$</span></p>
<p><span class="math-container">$f_1(u_1,v_1)=f_1(u_1,v_o)-f_1(0,v_o)+f_1(0,v_1)$</span></p>
<p>define <span class="math-container">$p(u_1)=f_1(u_1,v_o)-f_1(0,v_o)$</span>, <span class="math-container">$\rho(v_1)=f_1(0,v_1)$</span></p>
<p><span class="math-container">$f_1(u_1,v_1)=p(u_1)+f_1(0,v_1)$</span></p>
<p>now we can set <span class="math-container">$u_1=x_1,v_1=x_1^2$</span> because the inequality <span class="math-container">$u_1^2\le 2v_1$</span> is still satisfied</p>
<p>To get <span class="math-container">$f_1(x_1,x_1^2)=p(x_1)+\rho (x_1^2)$</span></p>
<p>Noting that <span class="math-container">$ G(0,2y^2)G(0,2x^2)=f^2(0,0)G(0,2x^2+2y^2)$</span> as derived above</p>
<p>it implies that for <span class="math-container">$d,c \ge 0,G(0,d)G(0,c)=f^2(0,0)G(0,d+c)$</span></p>
<p><span class="math-container">$logG(0,d)+logG(0,c)=2logf(0,0)+logG(0,d+c)$</span></p>
<p>Noting <span class="math-container">$logG(0,0)=2logf(0,0)$</span></p>
<p><span class="math-container">$logG_1(0,d)=logG(0,d)-logG(0,0)$</span></p>
<p><span class="math-container">$logG_1(0,d)+logG_1(0,c)=logG_1(0,d+c)$</span></p>
<p>but <span class="math-container">$f_1(0,d)=G_1(0,d)$</span></p>
<p>so <span class="math-container">$\rho (x_1^2)+\rho (y_1^2)=\rho (x_1^2+y_1^2)$</span>, which is the Cauchy functional equation, with solution <span class="math-container">$\rho (x_1^2)=c_1x_1^2$</span></p>
<p>because <span class="math-container">$\rho (x^2)+\rho (y^2)=\rho (x^2+y^2)$</span>, <span class="math-container">$c_i=c$</span> for all <span class="math-container">$i \in N$</span></p>
<p><span class="math-container">$f_1(x_1,x_1^2)=p(x_1)+cx_1^2$</span></p>
<p><span class="math-container">$f_1(x_1+y_1,x_1^2+y_1^2)=f_1(x_1,x_1^2)+f_1(y_1,y_1^2)$</span></p>
<p><span class="math-container">$p(x_1+y_1)+\rho(x_1^2+y_1^2)=p(x_1)+p(y_1)+\rho(x_1^2)+\rho(y_1^2)$</span></p>
<p>This means <span class="math-container">$p(x_1)+p(y_1)=p(x_1+y_1)$</span>, which is also the Cauchy functional equation, with solution <span class="math-container">$p (x_1)=b_1x_1$</span></p>
<p>Therefore <span class="math-container">$f_1(x,y)=\sum_{i=1}^nb_ix_i+c\sum_{i=1}^nx_i^2=b.x+cx^2$</span></p>
<p>And <span class="math-container">$f(x,x^2)= Ae^{b.x+cx^2}$</span>, where <span class="math-container">$A=f(0,0)$</span> (since <span class="math-container">$f_1=\log f-\log f(0,0)$</span>)</p>
| Yuri Negometyanov | 297,350 | <p><strong>HINT</strong></p>
<p>This answer does not contain a strict proof, only some details of the full solution.</p>
<p>Let
<span class="math-container">$$F(x) = f(x,x^2),\quad G(x,y)=H\left(\dfrac{1+i}2x+\dfrac{1-i}2y,\dfrac{1-i}2x +\dfrac{1+i}2y\right)\tag1$$</span>
then
<span class="math-container">$$F(x)F(y) = H(x+y,xy).\tag2$$</span></p>
<p><span class="math-container">$\color{brown}{\mathbf{Case\ 1.\ F(x)\ is\ the\ exponential\ of\ a\ power.}}$</span></p>
<p>Let <span class="math-container">$n\ge 0.$</span>
<span class="math-container">$$F(x) = e^{x^n},\tag3$$</span>
then
<span class="math-container">$$\ln(F(x)F(y)) = x^n+y^n = S_n(x+y,xy),$$</span>
where, in accordance with Littlewood and Cardogan formulas for <a href="https://mathworld.wolfram.com/SymmetricPolynomial.html" rel="nofollow noreferrer">Symmetric Polynomials</a>,
<span class="math-container">$$S_n(u,v) = \begin{vmatrix}
u & 1 & 0 & 0 & \dots & 0 \\
2v & u & 1 & 0 & \dots & 0 \\
0 & v & u & 1 & \dots & 0 \\
0 & 0 & v & u & \dots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \dots & v & u \\
\end{vmatrix}.\tag4$$</span></p>
<p>Therefore, the pair of functions
<span class="math-container">\begin{cases}
F_n(x) = e^{x^n}\\
H_n(x,y) = e^{S_n(x,y)}\tag5
\end{cases}</span>
presents the solution of <span class="math-container">$(2)$</span> in the case <span class="math-container">$(3).$</span></p>
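<p>As a sanity check (mine, not part of the hint): expanding the determinant in $(4)$ along its last row gives the recurrence $S_n = u\,S_{n-1} - v\,S_{n-2}$ with $S_1=u$, $S_2=u^2-2v$, and one can verify numerically that this reproduces $x^n+y^n$ for $u=x+y$, $v=xy$:</p>

```python
from fractions import Fraction

def S(n, u, v):
    """Power sum S_n(u, v) via the recurrence S_n = u*S_{n-1} - v*S_{n-2}."""
    a, b = u, u * u - 2 * v      # S_1 and S_2
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, u * b - v * a
    return b

x, y = Fraction(3), Fraction(-2)
u, v = x + y, x * y
for n in range(1, 12):
    assert S(n, u, v) == x**n + y**n   # S_n(x+y, xy) = x^n + y^n
```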
<p><span class="math-container">$\color{brown}{\textbf{Common case.}}$</span></p>
<p>From <span class="math-container">$(2),(3),(5)$</span> it follows that the pair of functions
<span class="math-container">\begin{cases}
F(\vec c,x) = A\prod\limits_{n=1}^\infty (F_n(x))^{c_n}(F_n(\,^1/_x))^{c_{-n}}\\
H(\vec c,x,y) = A^2\prod\limits_{n=1}^\infty (H_n(x,y))^{c_n}(H_n(\,^x/_y,\,^1/_y))^{c_{-n}},
\end{cases}</span>
or
<span class="math-container">\begin{cases}
F(\vec c,x) = Ae^{^{\Large\sum\limits_{n=1}^\infty (c_nx^n+c_{-n}x^{-n})}}\\
H(\vec c,x,y) = A^2e^{^{\Large\sum\limits_{n=1}^\infty (c_nS_n(x,y)+c_{-n}S_n(\,^x/_y,\,^1/_y))}},\tag6
\end{cases}</span>
presents the solutions of <span class="math-container">$(2)$</span> in the common case.</p>
<p>Finally, solutions of the functional equation
<span class="math-container">$$f(x,x^2)f(y,y^2) = G(x+y,x^2+y^2)$$</span>
can be expressed in the form of
<span class="math-container">\begin{cases}
f(x,y) = F(\vec c, \varphi(x,y))\\
G(x,y) = H\left(\vec c,\dfrac{1+i}2x+\dfrac{1-i}2y,\dfrac{1-i}2x +\dfrac{1+i}2y\Large\mathstrut\right),\tag7
\end{cases}</span>
where <span class="math-container">$\varphi(x,y)$</span> is an arbitrary function such that
<span class="math-container">$$\varphi(x,x^2) = x\tag8$$</span>
(examples: <span class="math-container">$\varphi(x,y) = x,\quad \varphi(x,y) = \sqrt[3]{xy},\quad\varphi(x,y) = x^2+x-y,\quad$</span> etc.)</p>
|
3,153,821 | <p>I'm trying to analyse a game of Mastermind and am having trouble quantifying the amount of possible game states. I know that a code has <span class="math-container">$\text{# of colors}^{\text{# of pegs per guess}}$</span> combinations (in my case that would be <span class="math-container">$6^4=1296$</span>). However, an entire board state also consists of 10 guesses. Each guess has the same amount of combinations, thus my intuition would be that the amount of total states in a game of Mastermind would be <span class="math-container">$\text{# of rows}^{\text{# of combinations per row}}$</span>. This approach yields <span class="math-container">$11^ {1296}$</span> board states which is astronomically large and I'm having a hard time believing this is true.</p>
<p>To clarify what I mean by a board state, I mean any legal state the game board can be in using the standard game rules. Having 3 empty rows, then one guess row and another 6 empty rows is not a legal board state.</p>
<p>How do I go about estimating this number?</p>
| Peter Taylor | 5,676 | <p>Other way round : <span class="math-container">$\text{# of combinations per row}^{\text{# of rows}}$</span></p>
<p>If you sum over 0 to 10 rows you get a geometric series, giving a total of <span class="math-container">$$\frac{\text{# of combinations per row}^{1+\text{# of rows}}-1} {\text{# of combinations per row}-1}$$</span></p>
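<p>A quick check (mine) of the closed form with the numbers from the question ($6^4 = 1296$ combinations per row, $10$ rows):</p>

```python
c, rows = 6**4, 10
total = sum(c**r for r in range(rows + 1))   # boards with 0..10 filled rows
assert total == (c**(rows + 1) - 1) // (c - 1)
# about 1.34e31: astronomically large, but nothing like 11^1296
assert 1.3e31 < total < 1.4e31
```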
|
4,253,160 | <p>I was recently taught that a subset W is a subspace of V if and only if:</p>
<ol>
<li>W is non-empty.</li>
<li>W is closed under vector addition.</li>
<li>W is closed under scalar multiplication.</li>
</ol>
<p>So we only need to prove 3 out of the 10 vector space axioms; why is this? Is it because it's redundant to prove the other axioms once those 3 specific axioms are proven?</p>
| Pirate Prentice | 965,950 | <p>As <span class="math-container">$x\rightarrow 0$</span>, the outputs of <span class="math-container">$\cos(1/x)$</span> will oscillate wildly between <span class="math-container">$-1$</span> and <span class="math-container">$1$</span>. But when multiplied by <span class="math-container">$\lvert \sin(x)\rvert$</span> that oscillation will happen at smaller and smaller amplitudes since <span class="math-container">$\sin(x)$</span> will tend to zero.</p>
|
4,253,160 | <p>I was recently taught that a subset W is a subspace of V if and only if:</p>
<ol>
<li>W is non-empty.</li>
<li>W is closed under vector addition.</li>
<li>W is closed under scalar multiplication.</li>
</ol>
<p>So we only need to prove 3 out of the 10 vector space axioms; why is this? Is it because it's redundant to prove the other axioms once those 3 specific axioms are proven?</p>
| CiaPan | 152,299 | <p>Be careful! <em>'Anything multiplied by zero is zero'</em> is certainly true, but the rule does not hold in limits. One term <em>convergent</em> to zero is not the same as one term <em>equal</em> zero. As a counterexample consider <span class="math-container">$\lim\limits_{x\to 0} \left(x\cdot\frac 1x\right)$</span> which equals <span class="math-container">$1$</span> despite the first term <span class="math-container">$x\to 0$</span>.</p>
|
2,120,194 | <blockquote>
<p>Let $K_1$ and $K_2$ be two disjoint compact sets in a metric space $(X,d).$ Show that $$a = \inf_{x_1 \in K_1, x_2 \in K_2} d(x_1, x_2) > 0.$$
Moreover, show that there are $x \in K_1$ and $y \in K_2$ such that $a = d(x,y)$.</p>
</blockquote>
<p>For the first part, suppose to the contrary that $\inf d(x_1, x_2) = 0$. Then $\epsilon$ is not a lower bound, so $d(x_1, x_2) < \epsilon$ for all $\epsilon > 0$. Since $K_1$ and $K_2$ are compact subsets of a metric space, they are closed and bounded. So, then $B(x_1, \epsilon) \cap K_2 \neq \emptyset$. Thus, $x_1$ is an adherent point to $K_2$. Since $K_2$ is closed, this means $x_1 \in K_2$, a contradiction.</p>
<p>I'm stuck on the moreover part. I tried supposing to the contrary that $d(x,y) > a$, but I did not get far. </p>
| Ennar | 122,131 | <p>Your attempt is indeed fruitful. If you define $R=\Bbb Q[x]/I$, where $I=(x^2-2)$, then $$(x+I)^2 = x^2+I = (x^2+I) - (x^2-2+I) =2+I$$ so you have solution to $x^2=2$. Note that $\Bbb Q$ is embedded in $R$ via $q\mapsto q + I$ and that this is "sort of cheating". We literally defined $R$ so it will have this root. </p>
<p>To show that there are no solutions to $x^2=3$, first note that every element of $R$ can be uniquely written as $ax + b+I$, $a,b\in\Bbb Q$. Existence follows from Euclidean division: if you have $f(x)+I\in R$, then $f(x) = q(x)(x^2-2) + r(x)$, with $\deg r \leq 1$ and $f(x)+I = r(x)+I$. Uniqueness follows from degree argument:</p>
<p>$$ax + b + I = a'x + b' + I \iff (a-a')x+(b-b')\in I$$</p>
<p>but $I$ contains no nonzero polynomial of degree $0$ or $1$, thus $(a-a')x+(b-b')$ is the zero polynomial.</p>
<p>Now,</p>
<p>$$(ax+b+I)^2 = a^2x^2+2abx+b^2 + I = 2abx+b^2+2a^2 + I$$</p>
<p>and you can easily check that system</p>
<p>\begin{align}
2ab &= 0\\
2a^2+b^2 &= 3
\end{align}</p>
<p>has no rational solutions.</p>
<p>Alternatively, you can prove that $R\cong \Bbb Q[\sqrt 2] = \{a+b\sqrt 2\mid a,b\in\Bbb Q\}$, which contains $\sqrt 2$, but does not contain $\sqrt 3$.</p>
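<p>A tiny computational model of this (my own sketch, representing each class of $R$ by its unique representative $ax+b$ with rational $a,b$):</p>

```python
from fractions import Fraction

class R:
    """Q[x]/(x^2 - 2): the class a*x + b, with multiplication reducing x^2 to 2."""
    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)
    def __mul__(self, other):
        # (a x + b)(a' x + b') = aa' x^2 + (ab' + a'b) x + bb', and x^2 = 2
        return R(self.a * other.b + other.a * self.b,
                 self.b * other.b + 2 * self.a * other.a)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

x = R(1, 0)
assert x * x == R(0, 2)      # so x + I squares to 2 + I
# while (a x + b)^2 = 2ab*x + (b^2 + 2a^2) forces 2ab = 0 and 2a^2 + b^2 = 3,
# which has no rational solutions, so there is no square root of 3 in R.
```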
|
4,386,952 | <p>Informally, mathematicians treat Integers like a subset of rational numbers.</p>
<p>But according to the standard, formal construction of <span class="math-container">$\mathbb{Q}$</span>, <span class="math-container">$\mathbb{Q}$</span> is an equivalence class over <span class="math-container">$\mathbb{Z} \times \mathbb{Z}^∗$</span>. So <span class="math-container">$0_Z \neq 0_Q$</span>.</p>
<p>When mathematicians freely convert between <span class="math-container">$\mathbb{Z}$</span> and <span class="math-container">$\mathbb{Q}$</span>, they are really making use of some canonical embedding <span class="math-container">$f : \mathbb{Z} \rightarrow \mathbb{Q}$</span> which maps <span class="math-container">$x$</span> to the equivalence class containing <span class="math-container">$(x, 1)$</span>.</p>
<p>Mathematicians implicitly use these sorts of embeddings all of the time, and do not spend their time fiddling with the minutia. People do not care if their "integer" <span class="math-container">$x$</span> is in <span class="math-container">$\mathbb{Z}$</span> or in <span class="math-container">$f[\mathbb{Z}]$</span>, and interchange between the two as-needed. For all intents and purposes these two sets are "equivalent".</p>
<p>Do any theorem provers handle these sorts of relationships gracefully? Are there systems/languages which support these intuitive equivalences and don't require humans to manually fiddle with and keep track of embeddings?</p>
| Átila Correia | 953,679 | <p>What does it mean that <span class="math-container">$fg$</span> is differentiable?</p>
<p>Well, this means the following limit must exist:
<span class="math-container">\begin{align*}
\lim_{x\to a}\frac{f(x)g(x) - f(a)g(a)}{x - a}
\end{align*}</span></p>
<p>Based on the given assumptions, we arrive at the relation:
<span class="math-container">\begin{align*}
\lim_{x\to a}\frac{[f(x)g(x) - f(a)g(x) + f(a)g(x) - f(a)g(a)]}{x - a} & =
\lim_{x\to a}\frac{[f(x) - f(a)]g(x) + f(a)[g(x) - g(a)]}{x - a}\\\\
& = \lim_{x\to a}\left[\frac{f(x) - f(a)}{x - a}\right]g(x)
\end{align*}</span></p>
<p>Since <span class="math-container">$f$</span> is differentiable at <span class="math-container">$a$</span> and <span class="math-container">$g$</span> is continuous at <span class="math-container">$a$</span> (and the second summand vanishes, since by hypothesis <span class="math-container">$f(a)=0$</span>), the proposed limit exists and equals <span class="math-container">$f'(a)g(a)$</span>.</p>
<p>And we are done.</p>
<p>Hopefully this helps !</p>
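<p>A small numeric illustration (my example, using the hypothesis $f(a)=0$): take $a=1$, $f(x)=x-1$ and $g(x)=|x-1|+2$; here $g$ is continuous but not differentiable at $a$, yet the difference quotient of $fg$ converges to $f'(a)g(a)=2$:</p>

```python
a = 1.0
f = lambda x: x - 1            # differentiable at a, with f(a) = 0, f'(a) = 1
g = lambda x: abs(x - 1) + 2   # continuous at a, not differentiable there

for h in [1e-3, -1e-3, 1e-6, -1e-6]:
    q = (f(a + h) * g(a + h) - f(a) * g(a)) / h   # difference quotient of fg
    assert abs(q - 2.0) < 1e-2                    # tends to f'(a)*g(a) = 2
```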
|
39,466 | <p>I could not solve this problem:</p>
<blockquote>
<p>Prove that for a non-Archimedian field $K$ with completion $L$, $$\left\{|x|\in\mathbb R \mid x\in K\right\} =\left\{|x|\in\mathbb R \mid x\in L\right\}$$</p>
</blockquote>
<p>I considered a Cauchy sequence in $K$ with norms having limit $l$, but I could not construct an element of $K$ with norm $l$ from the sequence.</p>
<p>Will anyone please show how to prove it?</p>
| lhf | 589 | <p>Use the comparison test on $\sum |a_kx^k|$.</p>
|
529,886 | <p>In the context of learning about comparison theorem, using integrals to determine convergence and learning about exponential series (That's what $n^p$ is called right?).</p>
| Jack D'Aurizio | 44,121 | <p>Set $AB=x$ and $EB=y$. You have $BC=AF\sin\left(\pi/3+\hat{EAB}\right)$, so
$$ BC^2 = \frac{3x^2+y^2+2\sqrt{3}xy}{4}.$$
Now consider $N$ as the projection of $M$ on $AB$. We have:
$$ BM^2 = MN^2+NB^2 = AM^2 \sin^2\left(\pi/3+\hat{EAB}\right)+\left(x-AM \cos\left(\pi/3+\hat{EAB}\right)\right)^2.$$
Since $AM=\frac{AF}{2}=\frac{AE}{2}=\frac{1}{2}\sqrt{x^2+y^2}$, this leads to $BC^2=BM^2$. Since $M$ is the midpoint of $AF$, the parallel to $AB$ through $M$ intersects $BC$ in its midpoint: this gives $BM=MC$, too.</p>
<p>I strongly suspect that some tricky application of Ptolemy's and cosine theorem can overcome the trigonometrical part of this proof. </p>
|
2,791,087 | <p>I have the following density function:
$$f_{x, y}(x, y) = \begin{cases}2 & 0\leq x\leq y \leq 1\\ 0 & \text{otherwise}\end{cases}$$</p>
<p>We know that $\operatorname{cov}(X,Y) = E[(Y - EY)(X - EX)]$, therefore we need to calculate E[X] and E[Y]. </p>
<p>$$f_x(x)=\int_x^1 2\,\mathrm dy = \big[2y\big]_x^1 = 2-x, \forall x\in[0, 1]$$</p>
<p>$$E[X] = \int_0^1 x (2-x)\,\mathrm dx = \int_0^1 2x - x^2\,\mathrm dx= \left[\frac{2x^2}{2}-\frac{x^3}{3}\right]_0^1 = 1 - \frac{1}{3} = \frac23 $$</p>
<p>$$f_y(y) = \int_0^y 2\,\mathrm dx = \big[2x\big]_0^y = 2y, \forall y\in [0, 1]$$</p>
<p>$$E[Y] = \int_0^1 y\cdot2y\,\mathrm dy= \int_0^1 2y^2\,\mathrm dy= \left[\frac{2y^3}{3}\right]_0^1 = \frac23$$</p>
<p>However, the <strong>provided solution</strong> states that $E[X]=\dfrac13$. Have I done a mistake or is the solution wrong?</p>
<p>The continuation of the solution is: </p>
<p>$$\mathrm{cov}(X,Y) = \int_0^1\int_x^1(x-\frac 13)(y- \frac 23) \times 2\,\mathrm dy\,\mathrm dx$$</p>
<p>Where does the $\underline{2\,\mathrm dy\,\mathrm dx}$ come from?</p>
| Graham Kemp | 135,106 | <blockquote>
<p>However, the provided solution states that $E[X]=1/3$. Have I done a mistake or is the solution wrong?</p>
</blockquote>
<p>Yes. The correct marginal is $f_X(x)={[2y]}_{y=x}^{y=1}\mathbf 1_{x\in(0;1)}=(2-\color{crimson}2x)\mathbf 1_{x\in(0;1)}$, so $$\mathsf E(X)=\int_0^1 x(2-\color{crimson}2x)\mathsf d x = \tfrac 13$$</p>
<hr>
<blockquote>
<p>Where does the $\underline{2\,\mathrm dy\,\mathrm dx}$ come from?</p>
</blockquote>
<p>It is from the joint probability density function. $$\mathsf {Cov}(X,Y)~{=~\iint_{\Bbb R^2} (x-\mathsf E(X))~(y-\mathsf E(Y))~f_{X,Y}(x,y)~\mathsf d(x,y)\\=\int_0^1\int_x^1 (x-\tfrac 13)~(y-\tfrac 23)~2~\mathsf dy~\mathsf d x\\=\tfrac 1{36}}$$</p>
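<p>A brute-force check (mine) of $\mathsf E(X)=\tfrac13$, $\mathsf E(Y)=\tfrac23$ and $\mathsf{Cov}(X,Y)=\tfrac1{36}$, using a midpoint rule over the triangle $0\le x\le y\le 1$:</p>

```python
n = 500
h = 1.0 / n
EX = EY = EXY = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(i, n):
        y = (j + 0.5) * h
        # density is 2; diagonal cells lie only half inside the region x <= y
        w = 2 * h * h if j > i else h * h
        EX += w * x
        EY += w * y
        EXY += w * x * y
cov = EXY - EX * EY
assert abs(EX - 1/3) < 1e-3
assert abs(EY - 2/3) < 1e-3
assert abs(cov - 1/36) < 1e-4
```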
|
1,261,977 | <p>I tried doing this the same way you would find the Fourier transform for $1/(1+x^2)$ but I guess I'm having some trouble dealing with the 2x on top and I could really use some help here.</p>
| Mark Viola | 218,419 | <p>Hint: Taking the derivative with respect to $k$ of $$F(k)=\int_{-\infty}^{\infty}\frac{1}{1+x^2}e^{ikx}dx$$</p>
<p>yields</p>
<p>$$F'(k)=i\int_{-\infty}^{\infty}\frac{x}{1+x^2}e^{ikx}dx$$</p>
<p>Thus, the Fourier Transform of $\frac{2x}{1+x^2}$ is $-2i$ times the derivative with respect to $k$ of the Fourier Transform of $\frac{1}{1+x^2}$.</p>
<p>$$\mathscr{F}\left(\frac{2x}{1+x^2}\right)(k)=-2i\,\frac{\mathrm d}{\mathrm dk}\,\mathscr{F}\left(\frac{1}{1+x^2}\right)(k)$$</p>
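Since $\mathscr F\!\left(\frac1{1+x^2}\right)(k)=\pi e^{-|k|}$ (with the convention $F(k)=\int f\,e^{ikx}dx$), this predicts $\mathscr F\!\left(\frac{2x}{1+x^2}\right)(k)=2\pi i\,e^{-k}$ for $k>0$. A numerical sketch (not part of the original answer) checks the underlying sine integral $\int_0^\infty \frac{x\sin kx}{1+x^2}\,dx=\frac{\pi}{2}e^{-k}$ using SciPy's Fourier-weighted quadrature:

```python
import numpy as np
from scipy.integrate import quad

k = 1.0
# QAWF Fourier integration of the slowly decaying oscillatory integrand:
# int_0^inf  x*sin(k*x)/(1+x^2) dx
val, err = quad(lambda t: t / (1 + t**2), 0, np.inf, weight='sin', wvar=k)

expected = (np.pi / 2) * np.exp(-k)  # known closed form for k > 0
print(val, expected)  # both ≈ 0.5779
```

The full transform at $k>0$ is then $4i\cdot\texttt{val}=2\pi i\,e^{-k}$, which matches $-2i\,\frac{\mathrm d}{\mathrm dk}\big(\pi e^{-|k|}\big)$.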
|
1,261,977 | <p>I tried doing this the same way you would find the Fourier transform for $1/(1+x^2)$ but I guess I'm having some trouble dealing with the 2x on top and I could really use some help here.</p>
| Disintegrating By Parts | 112,478 | <p>Your function is square integrable. So the Fourier transform will be square integrable, and expressed as the Cauchy principal value
<span class="math-container">$$
\lim_{R\rightarrow\infty}\frac{1}{\sqrt{2\pi}}\int_{-R}^{R}e^{-isx}\frac{2x}{1+x^{2}}dx \\
= \lim_{R\rightarrow\infty}\frac{1}{\sqrt{2\pi}}\int_{-R}^{R}e^{-isx}\left(\frac{1}{x+i}+\frac{1}{x-i}\right)dx.
$$</span>
Integration by parts gives evaluation terms that vanish as <span class="math-container">$R\rightarrow\infty$</span> so that the above becomes
<span class="math-container">$$
= \frac{i}{s}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-isx}\left(\frac{1}{(x+i)^{2}}+\frac{1}{(x-i)^{2}}\right)dx.
$$</span>
The Cauchy principal value limit was removed because this last integral expression is absolutely convergent.</p>
|
<p>We know that the cross product gives a vector that is orthogonal to the other two vectors. Let this vector be denoted by $$\vec{v} \times \vec{u} = \vec{n}$$
Then $$\vec{n}\cdot \vec{u} = 0 $$ Everything is okay up to here. Then how do we choose a vector from the two possible orthogonal vectors, $$\vec{n}$$ or $$-\vec{n}$$? Why follow the right-hand rule? </p>
| Michael Burr | 86,421 | <p>Consider
$$
\int\frac{x}{mg+kx}dx.
$$
Add and subtract $mg/k$ to the numerator to get:
$$
\int\frac{x+mg/k-mg/k}{mg+kx}dx=\int\frac{x+mg/k}{mg+kx}dx-\int\frac{mg/k}{mg+kx}dx.
$$
Factoring out $\frac{1}{k}$ in the first integral and using a $u$-substitution of $u=mg+kx$ (so $du=kdx$) on the second integral, we get
$$
=\frac{1}{k}\int\frac{kx+mg}{mg+kx}dx-\frac{1}{k^2}\int\frac{mg}{u}du=\frac{1}{k}x-\frac{mg}{k^2}\ln|u|=\frac{x}{k}-\frac{mg}{k^2}\ln|mg+xk|+C.
$$</p>
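A quick symbolic check (a SymPy sketch, not part of the original answer) confirms the antiderivative by differentiating it back to the integrand:

```python
import sympy as sp

x, m, g, k = sp.symbols('x m g k', positive=True)

# The antiderivative derived above (constant of integration omitted):
claimed = x / k - (m * g / k**2) * sp.log(m * g + k * x)

# Differentiating should recover the original integrand x/(mg + kx):
check = sp.simplify(sp.diff(claimed, x) - x / (m * g + k * x))
print(check)  # 0
```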
|
2,344,758 | <p>Quoting from Wikipedia article on Euler's totient function theorem :---</p>
<blockquote>
<p>In general, when reducing a power of <span class="math-container">$a$</span> modulo <span class="math-container">$n$</span> (where <span class="math-container">$a$</span> and <span class="math-container">$n$</span> are coprime), one needs to work modulo <span class="math-container">$φ(n)$</span> in the exponent of <span class="math-container">$a$</span>:</p>
<p>if <span class="math-container">$x ≡ y \pmod{φ(n)}$</span>, then <span class="math-container">$a^x ≡ a^y \pmod{n}$</span>.</p>
</blockquote>
<p>Is this really true generally? And how to prove that the original statement in Euler's theorem is equivalent to that?</p>
| lhf | 589 | <p>This is false when $a$ and $n$ are not coprime.</p>
<p>Take for instance $a=2$ and $n=4$. Then:</p>
<p>$3 \equiv 1 \bmod \phi(4)$ but $2^3 \equiv 0 \not\equiv 2 = 2^1 \bmod 4$.</p>
<p>On the other hand, if $(x \equiv y \bmod \phi(n) \implies a^x \equiv a^y \bmod n)$, then take $x=\phi(n)$ and $y=0$ to get Euler's theorem if $a\ne0$.</p>
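Both the counterexample and the coprime case are easy to check directly (a small Python sketch, not part of the original answer):

```python
from math import gcd

n, phi_n = 4, 2   # phi(4) = 2
x, y = 3, 1       # x ≡ y (mod phi(n))
assert (x - y) % phi_n == 0

a = 2             # gcd(2, 4) != 1: the implication fails
print(pow(a, x, n), pow(a, y, n))  # 0 2

a = 3             # gcd(3, 4) == 1: the implication holds
print(gcd(a, n), pow(a, x, n), pow(a, y, n))  # 1 3 3
```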
|
367,497 | <p>Let $\{f_n\}$ be a sequence of $L^1(\mathbb R)$ functions converging a.e. to zero.
Does
$$
\lim_{n\to \infty} \int_{\mathbb R} \sin(f_n(x)) dx = 0?
$$</p>
<p>I think the answer is no, but I can't find a counterexample.</p>
| 23rd | 46,120 | <p>You are correct. For example, consider $f_n=\chi_{[n, n+1]}$.</p>
|
2,917,848 | <p>Given a real square matrix <span class="math-container">$A$</span> and a vector <span class="math-container">$v$</span>, Krylov subspaces are given by:
<span class="math-container">$$\mathcal K_n(A,v) := \text{span}(v, Av, \cdots A^{n-1} v)$$</span>
These spaces are known to help solve numerical linear algebra problems like approximating the eigenvalues <span class="math-container">$\lambda$</span> given by <span class="math-container">$Ax = \lambda x$</span>. Can someone explain the core idea behind this?</p>
| Ross Millikan | 1,827 | <p>The limit does not exist. If you approach along a path with $h=0$ the quantity is always zero. If you approach along $h=k$ with $t=0$ the quantity is $\sqrt {\frac z2}$</p>
|
1,701,260 | <p>My textbook states the following:<br>
i) If $ f : [a,b] \rightarrow \mathbb{R} $ is bounded and is continuous at all but finitely many points of $[a,b]$, then it is integrable on $[a,b]$.<br>
ii) Any increasing or decreasing function on $[a,b]$ is integrable on $[a,b]$.</p>
<p>The proof for (i) is clear to me. I followed the entirety of it. My issue is with (ii). Are boundedness and continuity not necessary for (ii), or are they somehow implied by being strictly increasing/decreasing? </p>
| Sangchul Lee | 9,340 | <p>Let $P = \{ a = t_0 < \cdots < t_n = b \}$ be <em>any</em> partition on $[a, b]$. If $f : [a, b] \to \Bbb{R}$ is monotone increasing, then the upper Riemann sum is</p>
<p>$$ U(P, f) = \sum_{i=1}^{n} \sup_{t \in [t_{i-1}, t_i]} f(t) (t_i - t_{i-1}) = \sum_{i=1}^{n} f(t_i) (t_i - t_{i-1}) $$</p>
<p>and likewise the lower Riemann sum is</p>
<p>$$ L(P, f) = \sum_{i=1}^{n} \inf_{t \in [t_{i-1}, t_i]} f(t) (t_i - t_{i-1}) = \sum_{i=1}^{n} f(t_{i-1}) (t_i - t_{i-1}). $$</p>
<p>Let us denote $\| P \| = \max_{1 \leq i \leq n} (t_i - t_{i-1})$ the mesh size of $P$. Taking the difference, we get</p>
<p>\begin{align*}
U(P, f) - L(P, f)
&= \sum_{i=1}^{n} (f(t_i) - f(t_{i-1})) (t_i - t_{i-1})\\
&\leq \sum_{i=1}^{n} (f(t_i) - f(t_{i-1})) \| P \| \\
&= (f(b) - f(a)) \| P \|.
\end{align*}</p>
<p>Since we can make $\| P \|$ arbitrarily small, this proves that $f$ is Riemann integrable on $[a, b]$. The argument is analogous for monotone-decreasing case.</p>
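The telescoping bound above can be illustrated numerically (a Python sketch, not part of the original answer; the test function is increasing but has jump discontinuities, so neither continuity nor any smoothness is used):

```python
import math

def upper_lower_gap(f, a, b, n):
    """U(P,f) - L(P,f) on the uniform n-cell partition, for f monotone increasing."""
    ts = [a + (b - a) * i / n for i in range(n + 1)]
    # For increasing f, the sup/inf on each cell are the right/left endpoint
    # values, so the gap telescopes exactly as in the proof above.
    return sum((f(ts[i]) - f(ts[i - 1])) * (ts[i] - ts[i - 1])
               for i in range(1, n + 1))

f = lambda t: t + math.floor(3 * t)   # increasing, with jump discontinuities
a, b = 0.0, 1.0
for n in (10, 100, 1000):
    gap = upper_lower_gap(f, a, b, n)
    bound = (f(b) - f(a)) * (b - a) / n   # (f(b) - f(a)) * ||P||
    print(n, gap, bound)                  # gap <= bound, both -> 0
```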
|
1,701,260 | <p>My textbook states the following:<br>
i) If $ f : [a,b] \rightarrow \mathbb{R} $ is bounded and is continuous at all but finitely many points of $[a,b]$, then it is integrable on $[a,b]$.<br>
ii) Any increasing or decreasing function on $[a,b]$ is integrable on $[a,b]$.</p>
<p>The proof for (i) is clear to me. I followed the entirety of it. My issue is with (ii). Are boundedness and continuity not necessary for (ii), or are they somehow implied by being strictly increasing/decreasing? </p>
| Eugene Zhang | 215,082 | <p><strong>Hint:</strong> Use the fact that monotone function can have at most countable number of discontinuity points. So the Lebesgue measure of discontinuity points is zero and thus Riemann integrable. </p>
|
3,913,032 | <blockquote>
<p><strong>Problem.</strong> Let <span class="math-container">$A$</span> be a non-singular <span class="math-container">$n\times n$</span> matrix and let <span class="math-container">$\Gamma=[\Gamma_1\quad\Gamma_2]$</span> be an <span class="math-container">$n\times n$</span> orthogonal matrix where <span class="math-container">$\Gamma_1$</span> is <span class="math-container">$n\times n_1$</span>, <span class="math-container">$\Gamma_2$</span> is <span class="math-container">$n\times n_2$</span> and <span class="math-container">$n=n_1+n_2$</span>. Show that <span class="math-container">$$\det(\Gamma_1^TA\Gamma_1)=\det(A)\det(\Gamma_2^TA^{-1}\Gamma_2).$$</span></p>
</blockquote>
<p><strong>My Attempts.</strong> Here we make use of the property of orthogonal matrix:
<span class="math-container">\begin{align}
\det(A)=\det(\Gamma^TA\Gamma)=\det\left(\begin{bmatrix}
\Gamma_1^T \\ \Gamma_2^T
\end{bmatrix}A\begin{bmatrix}
\Gamma_1 & \Gamma_2
\end{bmatrix}\right)=\det\left(\begin{bmatrix}
\Gamma_1^TA\Gamma_1 & \Gamma_1^TA\Gamma_2 \\ \Gamma_2^TA\Gamma_1 & \Gamma_2^TA\Gamma_2
\end{bmatrix}\right).
\end{align}</span>
Since <span class="math-container">$A$</span> is non-singular, <span class="math-container">$\Gamma_1^TA\Gamma_1$</span> is also non-singular. Thus,
<span class="math-container">\begin{align}
\det(A)=\det(\Gamma_1^TA\Gamma_1)\det\left(\Gamma_2^TA\Gamma_2-\Gamma_2^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2\right).
\end{align}</span>
If the formula we want to prove is true, we would have
<span class="math-container">\begin{align}
1&=\det(\Gamma_2^TA^{-1}\Gamma_2)\cdot\det\left(\Gamma_2^TA\Gamma_2-\Gamma_2^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2\right) \\
&=\det\left(\Gamma_2^TA^{-1}\Gamma_2\Gamma_2^TA\Gamma_2-\Gamma_2^TA^{-1}\Gamma_2\Gamma_2^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2\right).
\end{align}</span>
Nonetheless, I have no idea how to simplify the terms in the parenthesis because I only have <span class="math-container">$\Gamma_1\Gamma_1^T+\Gamma_2\Gamma_2^T=I$</span>. Hope anyone has good suggestions.</p>
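A quick numerical check (a NumPy sketch with a random nonsingular $A$ and a random orthogonal $\Gamma$ obtained from a QR factorization; the seed and shift are arbitrary choices) does support the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n1 = 6, 2

A = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably non-singular
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal matrix
G1, G2 = Q[:, :n1], Q[:, n1:]

lhs = np.linalg.det(G1.T @ A @ G1)
rhs = np.linalg.det(A) * np.linalg.det(G2.T @ np.linalg.inv(A) @ G2)
print(lhs, rhs)  # agree to floating-point accuracy
```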
| Sort of Damocles | 478,044 | <p>As noted in the comments, the radius of the cylinder is just <span class="math-container">$r$</span>. Otherwise your work appears to be correct, but you've made things harder on yourself than necessary. Note that <span class="math-container">$r^3$</span> appears in both formulas, so once you have solved for <span class="math-container">$r^3 = \frac{3S}{4\pi}$</span>, you can immediately use this expression in place of <span class="math-container">$r^3$</span> in the formula for <span class="math-container">$C$</span>, going straight to the last equality.</p>
<p>In other words:</p>
<p><span class="math-container">$$S = \frac{4}{3}\pi r^3 \Rightarrow r^3 = \frac{3S}{4\pi}$$</span>
<span class="math-container">$$C = \pi r^2 \cdot (2r) = 2\pi r^3 = 2\pi\cdot\frac{3S}{4\pi}=\frac{3S}{2} $$</span></p>
|
105,868 | <p>Let $f(x)$ be a continuous probability distribution in the plane. It is obvious that if $X$ and $X'$ are two independent random samples from $f$, then $\mathbf{E}(\|X - X'\|) \leq 2 \mathbf{E}(\|X\|)$ by the triangle inequality. Can this upper bound be made tighter if we assume that $f$ is rotationally symmetric about the origin., i.e. $f(x) = g(\|x\|)$ for some function $g$?</p>
| Pablo Shmerkin | 11,009 | <p>Under the assumption that $f$ is radial, the angle between $X$ and $X'$ is uniformly distributed in $[0,\pi]$ (since the signed angle that each of $X$ and $X'$ makes with the $x$-axis is uniformly distributed by rotational invariance, hence so is their difference). Thus the distribution of $\|X-X'\|$ is the same as the distribution of $\| (r,0)-s e^{i\alpha}\|$ where $\alpha$ is an angle chosen uniformly at random, and $r, s$ are two independent samples of the distribution $g$ (on $[0,\infty)$). Its expectation thus equals (after a little calculation)
$$
\pi^{-1}\int_0^\pi \int \int \sqrt{r^2+s^2-2rs\cos(\alpha)} g(r) g(s) dr ds d\alpha,
$$
or
$$
\pi^{-1}\int_0^\pi \int \int \sqrt{(r+s)^2-2rs(1+\cos(\alpha))} g(r) g(s) dr ds d\alpha.
$$
I suppose you could get some bound in terms of $\mathbb{E}(\|X\|)$ from here which is better than $2\mathbb{E}(\|X\|)$, but there is a much more pleasant expression for the expectation of $\mathbb{E}\|X-X'\|^2$: as before, this is
$$
\pi^{-1}\int_0^\pi \int \int r^2+s^2-2rs\cos(\alpha) g(r) g(s) dr ds d\alpha,
$$
which can be readily evaluated to $2\mathbb{E}(\|X\|^2)-4\mathbb{E}(\|X\|^2)/\pi$.</p>
|
105,868 | <p>Let $f(x)$ be a continuous probability distribution in the plane. It is obvious that if $X$ and $X'$ are two independent random samples from $f$, then $\mathbf{E}(\|X - X'\|) \leq 2 \mathbf{E}(\|X\|)$ by the triangle inequality. Can this upper bound be made tighter if we assume that $f$ is rotationally symmetric about the origin., i.e. $f(x) = g(\|x\|)$ for some function $g$?</p>
| Arthur B | 8,737 | <p>Using the pareto distribution $f(x) = \frac{\alpha}{x^{\alpha+1}}$ ($x > 1$) , the ratio $\frac{E(||X-X'||)}{E(||X||)}$ approaches a $2$ as $\alpha$ tends to 1. </p>
<p>To find such a distribution, consider that all else equal, you want to maximize the difference $||X||-||X'||$ since the angle between the two is independent. This means that you want a distribution that extends to infinity as flatly as possible.</p>
|
1,737,055 | <p>What happens if some blind person want to study math? Is there some "braille alphabet" for mathematical symbols? Are there math books, at least for undergraduate students, written for blind people?</p>
| JM97 | 301,287 | <p>Reading and writing mathematics is fundamentally different than reading and writing text. While Braille is adequate for the representation of text, it is not up to the task of representing mathematics. The two basic reasons for this are:</p>
<p><strong>Linearity</strong></p>
<p>Text is linear in nature while mathematical equations are two dimensional. What you have been reading in this text is a good example of this problem. In contrast, examine the following relatively simple equation</p>
<p>$$a = \sqrt{\frac{x^2 - y}{z}} $$</p>
<p>One will immediately notice that the equation contains a superscript and a fraction - both being two dimensional in nature. The equation could have been written in a linear form, for example:</p>
<p><strong>a = sqrt(((x super 2) - y) / z)</strong></p>
<p>For this relatively simple equation, a linear representation is adequate for reading to a blind user. But, with any increase in complexity it becomes apparent that linear representations are no longer useful. </p>
<p><strong>Note:</strong>
Making mathematics accessible to the blind is a challenging and difficult process. The computer and its range of output devices has become the foundation of numerous projects that have brought this goal closer to a reality. With I/O devices such as high-quality speech, musical tones, refreshable Braille, haptic feedback and high reliability speech input, new and effective tools will soon be on the market. Other research into direct neural connectivity, will in the future, make the picture even brighter.</p>
|
1,737,055 | <p>What happens if some blind person want to study math? Is there some "braille alphabet" for mathematical symbols? Are there math books, at least for undergraduate students, written for blind people?</p>
| MathematicianByMistake | 237,785 | <p>There exists a certain variation-or rather "enrichment"-of the Braille Alphabet, named Nemeth Braille, after its' creator, Abraham Nemeth, which is also using the standard six-dot Braille cells to create mathematical symbols.</p>
<p>I am not sure whether it is exhaustive-that is, whether <em>all</em> mathematical expressions can be written by making use of its symbols-but I am pretty certain it is sufficient for a full undergraduate course. </p>
<p>This <a href="https://www.prcvi.org/files/braille/Nemeth-Code-Mathematics-Science-Notation-1972-Revision.pdf" rel="nofollow noreferrer">pdf file</a> contains a full version-I believe it is the latest version-of the Nemeth Code.</p>
<p>As an example of typical mathematical statements written in Nemeth Braille, check this (taken from the aforementioned file):
<br><br></p>
<p><a href="https://i.stack.imgur.com/CUOD3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CUOD3.png" alt="enter image description here"></a>
In general, learning by audio lectures can also help, while the abundance of online material, such as various YouTube sources, can be a great educational asset even though it is not specifically made for blind or visually impaired people. </p>
<p>The Soviet mathematician Lev Pontryagin, mentioned in the comments, is a fine example of someone who studied and contributed greatly to mathematics while being blind since he was 14. Also, one can mention, at the risk of sounding sacrilegious, the example of Euler, whose productivity was not in the least affected after losing his eyesight-actually one could argue that it was increased. </p>
|
55,435 | <p>I've recently become interested in the elementary theory of groups due to Sela and Myasnikov-Kharlampovich's work with free groups. I'd like a good introduction to the field of the elementary theory of groups, and in particular I'd like a reference to contain examples of group properties that cannot be read from a group's elementary theory. For example, it seems that the statements "G is vritually abelian" or "G is hopfian" could not be expressed with first-order sentences, but I don't have enough knowledge yet to determine if such things are true. Does anyone know such a reference? Or, specifically, does someone know a proof of the fact that being hopfian (or some other group property) can't be read from a group's elementary theory? Thanks!</p>
| Jeremy Macdonald | 10,673 | <p>You might try Champetier and Guirardel's <a href="http://arxiv.org/abs/math/0401042" rel="nofollow">Limit groups as limits of free groups</a>.
It has a short section (section 5) on elementary and universal theory, though perhaps none of the "non-examples" you're looking for. It is, however, a pleasure to read, and if you're interested in limit groups, Makanin-Razborov diagrams, etc. I highly recommend it. </p>
|
230,154 | <p><strong>Question.</strong> Is it true that to check that a model category is right proper, it suffices to check the property for weak equivalences with fibrant codomain ? (if the domain is also fibrant, the pullback is always a weak equivalence). Or is there a close statement that I can't remember (browsing nLab did not help me) ?</p>
<p><strong>Comments.</strong> Consider the diagram $\mathbf{D}=X\rightarrow Y \leftarrow Z$ where the left-hand map is a fibration and the right-hand map a weak equivalence. Let $T=\projlim \mathbf{D}$. Choose a fibrant replacement $Y^{fib}$ for $Y$ and a trivial cofibration $Y\rightarrow Y^{fib}$. Factor the composite map $X \rightarrow Y \rightarrow Y^{fib}$ as a composite trivial cofibration-fibration. We obtain a diagram $\mathbf{E}=X^{fib}\rightarrow Y^{fib}\leftarrow Z$ such that the left-hand map is a fibration between fibrant objects and the right-hand map is a weak equivalence. Let $U=\projlim \mathbf{E}$. By hypothesis, the map $U\rightarrow X^{fib}$ is a weak equivalence. By construction, the map of diagrams $\mathbf{D} \rightarrow \mathbf{E}$ is a weak equivalence of diagrams. The map $T\rightarrow X$ is a weak equivalence iff the map $T\rightarrow U$ is a weak equivalence. What next ?</p>
<p><strong>Why.</strong> I found this cryptic remark in my notebook, and I can't remember where it comes from. The reason why I want to simplify the proof of right properness is that I have to deal with model categories where a set of generating trivial cofibrations is not known. I only know what I call a set of generating anodyne cofibrations. And the trivial fibrations which are the anodyne fibrations (i.e. having the RLP with respect to the set of generating anodyne cofibrations) which are a dual strong deformation retract. And the reason why I am interested in right properness is that I want to study right Bousfield localizations.</p>
| David White | 11,540 | <p>You asked if you can check it for less than the collection of all weak equivalences. In A.5 of Motivic Symmetric Spectra, Jardine proves a general right properness result from Corollary A.4, which is the statement that weak equivalences with fibrant codomain are preserved by pullback along a fibration. So that appears to answer your question. His proof of Corollary A.4 uses facts specific to his setting, but it's still quite general.</p>
<p>Since you mentioned maps dual to strong deformation retracts, I feel like I should also advertize Rezk's work. Rezk calls a map $f : X \to Y$ "sharp" if for each base-change of f along any map into the base Y the resulting pullback square is homotopy cartesian. These sharp maps are dual to the flat maps Hopkins invented, which appear in the appendix of the Kervaire paper and which appear in other works (e.g. Batanin-Berger) being called h-cofibrations. I think of these flat maps like strong deformation retracts, and that's why I thought you might want to hear about sharp maps. </p>
<p>In Proposition 2.2 of "Fibrations and homotopy colimits of simplicial sheaves", Rezk proves that a model category is right proper if and only if every fibration is sharp. This same result appeared in Morel Voevodsky when they were trying to prove the motivic category was right proper (it was a long proof, but this fact was crucial). Perhaps this will also help you carry out your proof.</p>
|
23,181 | <p>I have n sectors, enumerated 0 to n-1 counterclockwise. The boundaries between these sectors are infinite branches (n of them).
The sectors live in the complex plane, and for n even,
sector 0 and n/2 are bisected by the real axis, and the sectors are evenly spaced.</p>
<p>These branches meet at certain points, called junctions. Each junction is adjacent to a subset of the sectors (at least 3 of them). </p>
<p>Specifying the junctions, (in pre-fix order, let's say, starting from the junction adjacent to sectors 0 and 1), and the distances between the junctions, uniquely describes the tree.</p>
<p>Now, given such a representation, how can I see if it is symmetric wrt the real axis?</p>
<p>For example, for n=6, the tree (0,1,5)(1,2,4,5)(2,3,4) has three junctions on the real line,
so it is symmetric wrt the real axis.
If the distance between (015) and (1245) is equal to the distance from (1245) to (234),
it is also symmetric wrt the imaginary axis.</p>
<p>The tree (0,1,5)(1,2,5)(2,4,5)(2,3,4) has 4 junctions, and is never symmetric wrt either the imaginary or the real axis, but it has 180-degree rotation symmetry if the distances between the first two and the last two junctions in the representation are equal.</p>
<p>Edit: Here are all trees with 6 branches, distances 1.
<a href="http://www2.math.su.se/~per/files/allTrees.pdf" rel="nofollow">http://www2.math.su.se/~per/files/allTrees.pdf</a></p>
<p>So, given the description/representation, I want to find some algorithm to decide if it is symmetric wrt real, imaginary, and rotation 180 degrees. The last example have 180 degree symmetry.</p>
<p>Edit 2: If all length of the distances between the junctions were all the same, it is quite easy to find the reflection/rotation of a tree. The problem arises when the distances are of unequal length.</p>
<p>Notice that if I have a regular n-gon, with some non-intersecting chords, is sort of the dual to my trees. I use this in the drawing algorithm, for those that wonder.</p>
<p>That is, I create the n roots of unity (possible with some rotation), then the angle between junction (123) and (345) would be the same as for the mean of vertices 1,2,3 to the mean of vertices 3,4,5 in this n-gon.</p>
<p>The angles in the drawing is not really important, you may change the angles, but the order of the long branches should be the same, and you cannot rotate the tree.</p>
<p>EDIT 3:</p>
<p>Observe that there are many ways of drawing the trees. What I have is
an equivalence relation, T1 ~ T2 if the two trees have the same junction representation.
If S is an axis symmetry, or rotation by 180 degrees,
then S(T1) ~ S(T2), so the notion of being the same tree is well-defined.
The question is therefore, how to determine if S(T1) ~ T1, or even better, compute S(T1).
By above, this is independent on how I draw the tree.</p>
| Justin | 6,014 | <blockquote>
<p>but the order of the long branches
should be the same, and you cannot
rotate the tree.</p>
</blockquote>
<p>So 'order' has nothing to do with the geometrical length, right? It is the depth of the tree that you are talking about?</p>
<p>It seems that the identity of a junction is its angular order (0 <= j < n), its position in the tree (using some traversal), and the quadrant of the complex plane it inhabits. It seems like the quadrant is totally determined by the angular order (j in my diagram): </p>
<p>{ { (n/4 <= j < n/2) (-/+), (j < n/4) (+/+) },
{ (n/2 <= j < 3n/4) (-/-) , (3n/4 < j < n) (+/-)} }</p>
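If the reflection across the real axis acts on sector labels as j &#8614; (n &#8722; j) mod n (an assumption based on the sector layout described in the question: sector 0 is bisected by the real axis), then a minimal sketch can test symmetry by permuting the junction sets and comparing representations together with their distances:

```python
def reflect_real(n, tree):
    """Reflect across the real axis, assuming it sends sector j to (n - j) % n.

    `tree` = (junctions, edges): junctions is a list of frozensets of sector
    labels; edges maps a frozenset-pair of adjacent junctions to their distance."""
    junctions, edges = tree
    sigma = lambda J: frozenset((n - s) % n for s in J)
    return ([sigma(J) for J in junctions],
            {frozenset(sigma(J) for J in pair): d for pair, d in edges.items()})

def same_tree(t1, t2):
    return set(t1[0]) == set(t2[0]) and t1[1] == t2[1]

# n = 6, tree (0,1,5)(1,2,4,5)(2,3,4) with unit distances between junctions:
J = [frozenset({0, 1, 5}), frozenset({1, 2, 4, 5}), frozenset({2, 3, 4})]
E = {frozenset({J[0], J[1]}): 1, frozenset({J[1], J[2]}): 1}
tree = (J, E)

print(same_tree(tree, reflect_real(6, tree)))  # True
```

A 180-degree rotation would use j &#8614; (j + n/2) mod n instead, and unequal edge lengths are caught automatically because the distances travel with the permuted junction pairs.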
|
3,174,339 | <p>Let <span class="math-container">$M$</span> be a <span class="math-container">$C^{\infty}$</span> manifold. Let <span class="math-container">$U$</span> be an open subset of <span class="math-container">$M$</span>. Now take a closed subset (with respect to the subspace topology on <span class="math-container">$U$</span>) <span class="math-container">$V \subseteq U$</span>.
Does it then follow that <span class="math-container">$V$</span> is closed in <span class="math-container">$M$</span>? Thank you. </p>
<p>PS I would further like to add more conditions as it appears that I may have simplified the problem too much as shown by Kavi Rama Murthy below. We further let <span class="math-container">$(U, \phi)$</span> be a local chart with <span class="math-container">$p \in U$</span>, and <span class="math-container">$V = \phi^{-1}(\overline{B(\phi(p), \varepsilon)})$</span> for <span class="math-container">$\varepsilon > 0$</span> small so that <span class="math-container">$\overline{B(\phi(p), \varepsilon)} \subseteq \phi(U)$</span>. </p>
| SOFe | 314,846 | <p>It is not true over <em>multiple</em> Euclidean spaces.</p>
<p>Just consider the natural projection <span class="math-container">$f: \mathbb R \to \mathbb R^2$</span> where <span class="math-container">$f(x) = \left[\begin{matrix}x \\ 0\end{matrix}\right]$</span>. <span class="math-container">$f(S)$</span> is never open for <span class="math-container">$\emptyset \ne S \subset \mathbb R$</span>, because for any <span class="math-container">$x \in S$</span>, <span class="math-container">$f(x) + \left[\begin{matrix}0 \\ r\end{matrix}\right]$</span> is not in the open ball <span class="math-container">$B(f(x), 2r)$</span>. Illustratively, <span class="math-container">$f((0, 1))=(0,1) \times \{0\}$</span> which is obviously not open (nor closed).</p>
|
3,032,258 | <p>Assume 5 out of 100 units are defective. We pick 3 out of the 100 units at random. </p>
<p>What is the probability that exactly one unit is defective?</p>
<hr>
<p>My answer would be </p>
<p><span class="math-container">$P(\text{Defect}=1) = P(\text{Defect})\times P(\text{Not defect})\times P(\text{Not defect}) = 5/100 \times 95/99 \times 94/98$</span> </p>
<p>However, I am not sure whether or not this is correct. Can someone verify?</p>
| BelowAverageIntelligence | 441,199 | <p>Your answer should be <span class="math-container">$$\frac{\binom{5}{1}\binom{95}{2}}{\binom{100}{3}}$$</span> Since we want the total number of ways to choose 3 meeting the criteria over the total number of ways to choose 3 out of the 100.</p>
|
163,672 | <p>Is there a characterization of boolean functions $f:\{-1,1\}^n \longrightarrow \{-1,1\}$,
so that $\mathbf{Inf}_i[f]=\frac{1}{2}$ for all $1\leq i\leq n$? Is it known how many such functions there are? </p>
| Bjørn Kjos-Hanssen | 4,600 | <p>We are looking at functions where <em>each variable has a 50% chance of mattering</em>.
Let <span class="math-container">$c_n$</span> be the number of such functions.
I'll just prove that <span class="math-container">$4^n\le c_n$</span> (of course <span class="math-container">$c_n\le 2^{2^n}$</span>) for <span class="math-container">$n\ge 3$</span>. These will be <strong>non-bent</strong>, in contrast to KEW's answer.</p>
<p>We start by working out the smallest values of <span class="math-container">$n$</span> to gain a grip on the question.</p>
<p>For <span class="math-container">$n=0$</span> there are <span class="math-container">$2$</span> (out of 2) such functions (no variable influences <span class="math-container">$\top$</span> and <span class="math-container">$\bot$</span>, but there are no variables).</p>
<p>For <span class="math-container">$n=1$</span> there are <span class="math-container">$0$</span> (out of <span class="math-container">$4$</span>) such functions (the variable <span class="math-container">$a$</span> has 0 influence on <span class="math-container">$\top$</span> and <span class="math-container">$\bot$</span> and 1 influence on <span class="math-container">$a$</span> and <span class="math-container">$\neg a$</span>).</p>
<p>For <span class="math-container">$n=2$</span> there are <span class="math-container">$8$</span> (out of <span class="math-container">$16$</span>) such functions.
(Of the 16 there are 6 that only depend on 0 or 1 many variables.
There are then eight "cognates of <span class="math-container">$a\vee b$</span>", namely
<span class="math-container">$$a\vee b,\quad\neg a\vee b,\quad a\vee\neg b,\quad\neg a\vee\neg b.$$</span>
and their negations, that all have influence <span class="math-container">$1/2$</span> in each variable.
And <span class="math-container">$a\leftrightarrow b$</span> and its negation, which have influence 1 in each variable.)</p>
<p>For <span class="math-container">$n=3$</span> there are somewhere between <span class="math-container">$16$</span> and <span class="math-container">$224$</span> out of <span class="math-container">$256$</span>. For instance there are 16 that include some of the terms of <span class="math-container">$a+b+c+1$</span>, and none of these qualify. Then there are <span class="math-container">$a\vee b\vee c$</span> and its 16 cognates, which don't qualify. But there is
<span class="math-container">$$
\boxed{a\vee (b\leftrightarrow c)}
$$</span>
which does qualify.</p>
<p>For general <span class="math-container">$n$</span>, at least you can take <span class="math-container">$a_1 \vee (a_2 + a_3 + \dots + a_n)$</span> and its <span class="math-container">$2^{n+1}$</span> cognates. Moreover you can take
<span class="math-container">$$
\sum_{i\in I} a_i \vee \sum_{j\in J} a_j
$$</span>
for an arbitrary partition <span class="math-container">$\{I,J\}$</span> of <span class="math-container">$\{1,\ldots,n\}$</span> where both <span class="math-container">$I\ne\varnothing\ne J$</span>.
To count the number of such partitions, note that for a fixed element, there are <span class="math-container">$2^{n-1}-1$</span> choices of which other elements are in the same partition as it (any subset of <span class="math-container">$[n-1]$</span> except the whole set).</p>
<p>For each such partition we may take cognates, giving a total of <span class="math-container">$(2^{n-1}-1)(2^{n+1})$</span>.</p>
|
163,672 | <p>Is there a characterization of boolean functions $f:\{-1,1\}^n \longrightarrow \{-1,1\}$,
so that $\mathbf{Inf}_i[f]=\frac{1}{2}$ for all $1\leq i\leq n$? Is it known how many such functions there are? </p>
| KEW | 30,582 | <p>For a random function $f : \{-1,1\} \rightarrow \{-1,1\}$, we have that $\mathbf{Pr}[\mathbf{Inf}_i(f) = \frac12]$ is roughly $2^{-n/2}$; it's the probability that a binomial random variable with $2^{n-1}$ trials with success probability $\frac12$ has exactly $2^{n-2}$ successes. These events for $i = 1,2,\ldots,n$ don't seem too negatively correlated over a choice of random $f$, so heuristically you might expect that a $2^{-n^2/2}$ fraction or so of the $2^{2^n}$ Boolean functions satisfy your condition.</p>
<p>For something rigorous, I can tell you that there are at least $2^{2^{n/2}}$ such functions satisfying your condition. This is because the class of <a href="http://www.contrib.andrew.cmu.edu/~ryanod/?p=1039">bent functions</a> satisfy this condition. Let's think of Boolean functions as objects of the form $f : \mathbb{F}_2^n \rightarrow \mathbb{F}_2$, and take $n$ to be even. A Boolean function is a bent function if and only if $\mathbf{Pr}[f(x) \neq f(x+h)] = \frac12$ for all nonzero $h \in \mathbb{F}_2^n$. Your condition only requires this when $h$ is a standard basis vector, a vector in $\mathbb{F}_2^n$ with a single $1$.</p>
<p>The number of bent functions is quite a challenging open problem. It is known that there are at least $2^{2^{n/2}}$ functions in this class; this is the size of the class of <a href="http://www.contrib.andrew.cmu.edu/~ryanod/?p=1039#proprothaus-bent">Maiorana-McFarland bent functions</a> (this links to the same page as above; the name Maiorana-McFarland is not given there). If I recall correctly, the number of bent functions is only known to be between $2^{2^{n/2+o(1)}}$ and $2^{2^{n-o(1)}}$. </p>
<p>Disclaimer: The links above are to Ryan O'Donnell's excellent book concerning analysis of Boolean functions (which seems like what you're interested in), although it is not the best resource on bent functions.</p>
|
162,293 | <p>Consider a "curve" defined by a list of points in finite dimension (here, four):</p>
<pre><code> pts = Table[{Cos[t], 0, Sin[2 t], Sin[t]}, {t, Subdivide[0, 1, 99]}]
</code></pre>
<p>I used known functions to generate <code>pts</code> but of course I am not supposed to know the parametric equation of the curve they belong to.</p>
<p>What would be a good approach to compute the local curvature? Several possibilities I thought of:</p>
<ul>
<li>interpolating <code>pts</code> and using <code>ArcCurvature</code> (introduced in <em>Mathematica 10</em>)</li>
<li>using $n+1$ consecutive points (where $n$ is the dimension), fit the circle that passes through them: that's the osculating circle, whose radius is the reciprocal of the curvature.</li>
</ul>
<p>Ideally, the solution should not be too sensitive to noise...</p>
| David G. Stork | 9,735 | <p>For ${\bf x}(t) = \{ x(t), y(t), z(t) \}$ in $\mathbb{R}^3$ the curvature is:</p>
<p>$$ \kappa = {\sqrt{ (z^{\prime\prime}y^\prime - y^{\prime\prime} z^\prime)^2 + (x^{\prime\prime} z^\prime - z^{\prime\prime} x^\prime)^2 + (y^{\prime\prime} x^\prime - x^{\prime\prime} y^\prime)^2} \over ((x^\prime)^2 + (y^\prime)^2 + (z^\prime)^2)^{3/2}} $$</p>
<p>So just substitute your trigonometric functions and evaluate at the $t$ in question. The radius of the osculating circle is of course $R = 1/\kappa$. The generalization to higher dimension is $\kappa = \sqrt{|{\bf x}^\prime|^2|{\bf x}^{\prime\prime}|^2 - ({\bf x}^\prime \cdot {\bf x}^{\prime\prime})^2}\,/\,|{\bf x}^\prime|^3$; by the Lagrange identity the numerator (under the square root) expands to the sum of $(x_i^{\prime\prime} x_j^\prime - x_j^{\prime\prime} x_i^\prime)^2$ over all pairs $i<j$, while the denominator has $n$ squares, still raised to the power $3/2$.</p>
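<p>Independently of any closed form, the curvature of a discretely sampled curve can be estimated directly from the points. This Python sketch (the function names are mine) uses central differences for ${\bf r}'$ and ${\bf r}''$ together with the dimension-free expression $\kappa = \sqrt{|{\bf r}'|^2|{\bf r}''|^2 - ({\bf r}'\cdot{\bf r}'')^2}/|{\bf r}'|^3$; note that second differences amplify noise, so noisy data would need smoothing or local circle fitting first:</p>

```python
import math

def curvature(pts, h):
    """Estimate curvature at interior samples of a discretely sampled curve.

    pts: list of points (tuples, any dimension) at uniform parameter spacing h.
    Uses central differences and kappa = sqrt(|r'|^2|r''|^2 - (r'.r'')^2)/|r'|^3.
    """
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    out = []
    for i in range(1, len(pts) - 1):
        d1 = tuple(x / (2 * h) for x in sub(pts[i + 1], pts[i - 1]))
        d2 = tuple((pts[i + 1][j] - 2 * pts[i][j] + pts[i - 1][j]) / h**2
                   for j in range(len(pts[i])))
        num = math.sqrt(max(dot(d1, d1) * dot(d2, d2) - dot(d1, d2)**2, 0.0))
        out.append(num / dot(d1, d1)**1.5)
    return out

# sanity check: a circle of radius 2 has constant curvature 1/2
h = 0.001
circle = [(2 * math.cos(k * h), 2 * math.sin(k * h)) for k in range(1000)]
ks = curvature(circle, h)
print(ks[500])  # ~0.5
```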
|
244,333 | <p>Consider this equation : </p>
<p><span class="math-container">$$\sqrt{\left( \frac{dy\cdot u\,dt}{L}\right)^2+(dy)^2}=v\,dt,$$</span></p>
<p>where <span class="math-container">$t$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$T$</span> , and <span class="math-container">$y$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$L$</span>.
Now how to proceed ? </p>
<p>This equation arises out of following problem : </p>
<p>A cat sitting in a field suddenly sees a standing dog. To save its life, the cat runs away in a straight line with speed <span class="math-container">$u$</span>. Without any delay, the dog starts with running with constant speed <span class="math-container">$v>u$</span> to catch the cat. Initially, <span class="math-container">$v$</span> is perpendicular to <span class="math-container">$u$</span> and <span class="math-container">$L$</span> is the initial separation between the two. If the dog always changes its direction so that it is always heading directly at the cat, find the time the dog takes to catch the cat in terms of <span class="math-container">$v, u$</span> and <span class="math-container">$L$</span>.
<hr>
See my solution below : </p>
<p>Let initially dog be at <span class="math-container">$D$</span> and cat at <span class="math-container">$C$</span> and after time <span class="math-container">$dt$</span> they are at <span class="math-container">$D'$</span> and <span class="math-container">$C'$</span> respectively. Dog velocity is always pointing towards cat.</p>
<p>Let <span class="math-container">$DA = dy, \;AD' = dx$</span></p>
<p>Let <span class="math-container">$CC'=u\,dt,\;DD' = v\,dt$</span>; as the interval is very small, <span class="math-container">$DD'$</span> can be taken as a straight line.</p>
<p>Also we have <span class="math-container">$\frac{DA}{DC}= \frac{AD'}{ CC'}$</span> by similar triangles.</p>
<p><span class="math-container">$\frac{dy}{L}= \frac{dx}{u\,dt}\\ dx = \frac{dy\cdot u\,dt}{L}$</span></p>
<p><span class="math-container">$\sqrt{(dx)^2 + (dy)^2} = DD' = v\,dt \\ \sqrt{\left(\frac{dy\cdot u\,dt}{L}\right)^2 + (dy)^2} = v\,dt $</span></p>
<p>Here <span class="math-container">$t$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$T$</span>, and <span class="math-container">$y$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$L$</span>. Now how to proceed?<img src="https://i.stack.imgur.com/Ji3Fc.jpg" alt="enter image description here"></p>
| Américo Tavares | 752 | <p><em>Comment to your attempt</em>: instead of reasoning based on the initial positions and conditions of the dog and cat, I advise to sketch a <em>generic intermediate position</em> of both. I tried this approach in the sketch below. </p>
<p>I have not checked <a href="https://math.stackexchange.com/a/245537/752">your second</a> computation whose result differs from mine. I think you should explain it with some text and a sketch. </p>
<p>I arrived at the same result of Egor Skriptunoff's answer but have not tried to compute and analyze the invariant expression stated in his answer.</p>
<hr>
<p>I assume that: (a) the dog starts at the point $S=(L,0)$ and the cat at the origin $O=(0,0)$; (b) the cat moves in the positive direction along the $y$-axis, and the dog describes a <em>curve of pursuit</em> <a href="http://mathworld.wolfram.com/PursuitCurve.html" rel="noreferrer">(WolframMathWorld link)</a> $C$ in the $xy$-plane. I call $y=f(x)$ the equation of $C$. </p>
<p><img src="https://i.stack.imgur.com/xT6v9.jpg" alt="enter image description here"></p>
<ol>
<li><p>At time $t$ the tangent line to $C$ at the point $P(x,y)$ passes through the point $Q=(0,ut)$, which means that the derivative $y^{\prime }=f^{\prime}(x)=dy/dx$ is
$$
y^{\prime }=\frac{y-ut}{x}.\tag{A}
$$
Solving for $t$ we get<br>
$$
t=\frac{y-xy^{\prime }}{u}.\tag{A'}
$$</p></li>
<li><p>Let $s$ be the distance traveled by the dog from $S$ to $P$, i.e. the
length of the arc $SP$ measured along $C$. Since the <a href="http://en.wikipedia.org/wiki/Arc_length#Finding_arc_lengths_by_integrating" rel="noreferrer">arc length formula</a> is
the integral
$$
s=\int_{x}^{L}\sqrt{1+\left( f^{\prime }(\xi )\right) ^{2}}d\xi
=-\int_{L}^{x}\sqrt{1+\left( f^{\prime }(\xi )\right) ^{2}}d\xi ,
\tag{B}$$
and $s=vt$, we have
$$
t=\frac{s}{v}=-\frac{1}{v}\int_{L}^{x}\sqrt{1+\left( f^{\prime }(\xi
)\right) ^{2}}d\xi =\frac{y-xy^{\prime }}{u}.\tag{B'}
$$
Hence, equating $(A')$ to $(B')$, we get
$$
-\frac{u}{v}\int_{L}^{x}\sqrt{1+\left( f^{\prime }(\xi )\right) ^{2}}d\xi
=y-xy^{\prime }\tag{C}
$$</p></li>
<li>Differentiate both sides and simplify
$$\begin{eqnarray*}
-\frac{u}{v}\sqrt{1+\left( y^{\prime }\right) ^{2}} &=&\frac{d}{dx}\left(
y-xy^{\prime }\right) \\
-\frac{u}{v}\sqrt{1+\left( y^{\prime }\right) ^{2}} &=&y^{\prime }-\left(
y^{\prime }+xy^{\prime \prime }\right) =-xy^{\prime \prime }.
\end{eqnarray*}$$
to get the following differential equation
$$
\sqrt{1+\left( y^{\prime }\right) ^{2}}=kxy^{\prime \prime },\qquad k=\frac{v}{u}>1.\tag{D}
$$</li>
<li><p>Set $w=y^{\prime }$ and solve $(D)$ for $w$ applying the <a href="http://en.wikipedia.org/wiki/Separation_of_variables" rel="noreferrer">method of separation of variables</a>. Then
$$
\sqrt{1+w^{2}}=kxw^{\prime }=kx\frac{dw}{dx}\Leftrightarrow \frac{dw}{\sqrt{
1+w^{2}}}=\frac{dx}{kx}.\tag{E}
$$
So
$$\begin{eqnarray*}
\int \frac{dw}{\sqrt{1+w^{2}}} &=&\int \frac{dx}{kx}+C \\
\text{arcsinh }w &=&\frac{1}{k}\ln x+\ln C_{1}.\tag{F}
\end{eqnarray*}$$
The initial condition $x=L,w=y^{\prime }(L)=0$ determines the constant $C_{1}$
$$
0=\frac{1}{k}\ln L+\ln C_{1}\Rightarrow C_{1}=e^{-\frac{1}{k}\ln L}.
$$
Consequently,
$$
\text{arcsinh }w=\frac{1}{k}\ln x-\frac{1}{k}\ln L=\frac{1}{k}\ln \frac{x}{L}.\tag{G}
$$
Solve $(G)$ for $w$ and rewrite in terms of exponentials using the definition of $\sinh z=\frac{1}{2}\left( e^{z}-e^{-z}\right) $
$$
\frac{dy}{dx}=w=\sinh \left( \frac{1}{k}\ln \frac{x}{L}\right) =\frac{1}{2}\left( \left( \frac{x}{L}\right) ^{1/k}-\left( \frac{x}{L}\right)
^{-1/k}\right)\tag{H}
$$
This last differential equation is easily integrable
$$\begin{eqnarray*}
y &=&\frac{1}{2}\int \left( \frac{x}{L}\right) ^{1/k}-\left( \frac{x}{L}
\right) ^{-1/k}dx \\
&=&\frac{1}{2}\left( \frac{L}{1/k+1}\left( \frac{x}{L}\right) ^{1/k
+1}-\frac{L}{1-1/k}\left( \frac{x}{L}\right) ^{1-1/k}\right) +C
\end{eqnarray*}\tag{I}$$
Find $C$ making use of the initial condition $x=L,y=0$
$$\begin{eqnarray*}
0 &=&\frac{1}{2}\left( \frac{L}{1/k+1}\left( \frac{L}{L}\right) ^{1/k+1}-\frac{L}{1-1/k}\left( \frac{L}{L}\right) ^{1-1/k}\right) +C \\
&\Rightarrow &C=\frac{Lk}{k^{2}-1}.
\end{eqnarray*}$$
The equation of the trajectory is thus
$$
y=\dfrac{L}{2}\left( \dfrac{1}{\dfrac{1}{k}+1}\left( \dfrac{x}{L}\right) ^{\dfrac{1}{k}+1}-\dfrac{1}{1-\dfrac{1}{k}}\left( \dfrac{x}{L}\right) ^{1-\dfrac{1}{k}}\right) +\dfrac{Lk}{k^{2}-1}.\tag{J}
$$</p></li>
<li><p>To obtain the time $T$ the dog takes to catch the cat, make $x=0$ in the last equation and observe that the cat travels the distance $y=f(0)=uT$ (point $(R)$):<br>
$$
y=f(0)=\frac{Lk}{k^{2}-1}=\frac{Lv/u}{\left( v/u\right) ^{2}-1}=\frac{uv}{v^{2}-u^{2}}L=uT.\tag{K}
$$
Therefore
$$
T=L\frac{v}{v^{2}-u^{2}}.\tag{L}
$$</p></li>
</ol>
<p>--</p>
<p><em>References</em>:</p>
<p><a href="http://www.hsu.edu/uploadedFiles/Faculty/Academic_Forum/2006-7/2006-7AFPursuit.pdf" rel="noreferrer"><em>Pursuit Curves</em></a> by Michael Lloyd</p>
<p>Wikipedia Entry <a href="http://en.wikipedia.org/wiki/Pursuit_curve" rel="noreferrer"><em>Pursuit curve</em></a> </p>
<p>German Wikipedia Entry <a href="http://de.wikipedia.org/wiki/Radiodrome" rel="noreferrer"><em>Radiodrome</em></a></p>
<p><a href="http://www.math.utep.edu/classes/3226/lab2b/lab2b.html" rel="noreferrer"><em>The Curve of Pursuit</em></a> by Helmut Knaust Math 3226 Laboratory 2B </p>
<hr>
<p>ADDED. Let $M$ be the point $(L/2,0)$. We can easily verify that the total length of $C$ is equal to $\overline{SM}+\overline{MR}$.</p>
<p><img src="https://i.stack.imgur.com/a3SIh.jpg" alt="enter image description here"> </p>
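<p>The closed form $(L)$ can also be sanity-checked by integrating the pursuit dynamics directly. The rough Euler sketch below (step size, tolerance, and the sample values $u=1$, $v=2$, $L=1$ are my own choices) reproduces $T = Lv/(v^2-u^2) = 2/3$ numerically:</p>

```python
import math

def capture_time(u, v, L, dt=1e-5, tol=1e-3):
    """Euler-simulate the pursuit: dog starts at (L, 0), cat moves up the y-axis."""
    dx, dy = L, 0.0            # dog position
    t = 0.0
    while True:
        cx, cy = 0.0, u * t    # cat position at time t
        ex, ey = cx - dx, cy - dy
        dist = math.hypot(ex, ey)
        if dist < tol:
            return t
        # the dog always heads straight at the cat with speed v
        dx += v * dt * ex / dist
        dy += v * dt * ey / dist
        t += dt

u, v, L = 1.0, 2.0, 1.0
T_sim = capture_time(u, v, L)
T_formula = L * v / (v**2 - u**2)   # = 2/3 for these values
print(T_sim, T_formula)
```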
|
127,322 | <p>Being a new member, I am not yet sure whether my question will be taken as a research level question (and thus, appropriate for MO). However, I have seen similar questions on MO, couple of which led me asking mine, and I seem to not be able to find many resources except discussion on FOM and MO. So, any references to resolve the question and fix my possible confusion would be appreciated.</p>
<p>As the title suggests, I want to understand the relation between $ZFC \vdash \varphi$ and $ZFC \vdash\ 'ZFC \vdash \varphi'$. Let me give my motivation (and some partial answers) for asking this question so that what I'm trying to arrive at is understood.</p>
<p>We know that if $ZFC \vdash \varphi$, then $ZFC \vdash\ 'ZFC \vdash \varphi'$ for we could write down the Gödel number of the proof we have for $\varphi$ and then check that the formalized $\vdash$ relation holds. I believe even more can be checked to be true for this provability predicate (<a href="http://en.wikipedia.org/wiki/Hilbert-Bernays_provability_conditions" rel="noreferrer">Hilbert-Bernays provability conditions</a>).</p>
<p>Is the converse true in general? Not necessarily. (Just to make sure that it will be pointed out sooner if I am doing any mistakes, I will try to write down everything unnecessarily detailed using less English and more symbols!)</p>
<p>Let us assume only that $ZFC$ is consistent (However, I am not assuming the formal statement $Con(ZFC)$, that is $\ 'ZFC \nvdash \lceil 0=1 \rceil'$). Then, it is conceivable that $ZFC \vdash\ \ 'ZFC \vdash \lceil 0=1 \rceil'$ but $ZFC \nvdash 0=1$. It might be that in reality ZFC is consistent but $\omega$-inconsistent.</p>
<p>Indeed, if I am not missing a point, it is consistent to have this situation:</p>
<p>$ZFC \vdash Con(ZFC) \rightarrow Con(ZFC+\neg Con(ZFC))$ (Gödel)
$ZFC \vdash Con(ZFC) \rightarrow\ \exists M\ M \models ZFC+\neg Con(ZFC)$ (Gödel)
$ZFC \vdash Con(ZFC) \rightarrow\ \exists M\ M \models\ ZFC+\ 'ZFC \vdash \lceil 0=1 \rceil'$
$ZFC \vdash Con(ZFC) \rightarrow\ \exists M\ M \models\ ZFC+\ 'ZFC \vdash\ 'ZFC \vdash \lceil 0=1 \rceil\ '\ '$ (Soundness and the second provability condition <a href="http://en.wikipedia.org/wiki/Hilbert-Bernays_provability_conditions" rel="noreferrer">here</a>)
$ZFC \vdash Con(ZFC) \rightarrow Con(ZFC+\ 'ZFC \vdash \neg Con(ZFC)\ ')$</p>
<p>So we cannot hope to have $ZFC \vdash\ 'ZFC \vdash \varphi'$ implying $ZFC \vdash \varphi$ for an arbitrary formula without requiring an additional assumption. At least, we know this for $\varphi: 0=1$ (this is not because of the consistency argument above, but because consistency and $\omega$-inconsistency of ZFC is a possibility).</p>
<p>If you believe that ZFC's characterization of natural numbers coincides with what we have in mind and agree that ZFC should not be $\omega$-inconsistent, then you might want to throw in the assumption $Con(ZFC)$.</p>
<p>Now imagine a universe where $Con(ZFC)$ holds but all the models of ZFC are $\omega$-nonstandard and believe $\neg Con(ZFC)$. I do not know whether this scenario is even possible (which is another question I am wondering about), but if it is possible, then it would be the case that $'ZFC \vdash \neg Con(ZFC)\ '$, by completeness, since $\neg Con(ZFC)$ is true in all models. Then if the implication in the title (or should I say, an informal version of it: $V \models ZFC \vdash \varphi$ implies $V \models \varphi$) held, then $\neg Con(ZFC)$ would hold, which contradicts our assumption that there are models at all. The point is that arbitrary models of ZFC may not be sufficient for the existence of ZFC-proofs to imply the existence of actual proofs.</p>
<p>However, if we add a stronger assumption $\psi$ that there is an $\omega$-model, then whenever we have an arithmetic sentence $\varphi$, if</p>
<p>$ZFC \vdash\ 'ZFC \vdash \lceil \varphi \rceil'$</p>
<p>then</p>
<p>$ZFC+\psi \vdash \exists M\ \omega^M=\omega \wedge M \models ZFC+\ \varphi$</p>
<p>and because $\omega$ in the model is the real one, by taking care of quantifiers one by one we can deduce $ZFC+\psi \vdash \varphi$. Thus, existence of an $\omega$-model solves our problem for arithmetical sentences. I cannot see any reason to make this work for arbitrary sentences without strengthening the assumption. Here is a thought:</p>
<p>We know, by the reflection principle, that we can find some limit ordinal $\alpha$ such that $\varphi \leftrightarrow \varphi^{V_{\alpha}} \leftrightarrow V_{\alpha} \models \varphi$. Thus, if we could make sure somehow that $V_{\alpha}$ is a model of ZFC while we reflect $\varphi$, then we would be done. But I could not modify the proofs of reflection in such a way that this can be done and am not even sure that this could be done.</p>
<p>My question to MO is to what extent (and under which assumptions) can we get the implication in the title?</p>
<p><strong>Edit</strong>: After reading Emil Jerabek's answer, I realized I should clarify some details.</p>
<p>Firstly, I want to treat ZFC only as a formal system (meaning that if you are claiming some assumption $\psi$ does what I want, I want to have a description of how that proof would formally look. This is why I kept writing all the leftmost $ZFC \vdash$'s all the time). Then, it is clear by the above discussions that even if we could prove $ZFC \vdash \varphi$ within our system, we may not prove $\varphi$ without additional assumptions on our system.</p>
<p>One solution could be that our system satisfies the "magical" property that whenever we have $ZFC \vdash \exists x \in \omega\ \varphi(x)$, say for some arithmetic sentence, then we have $ZFC \vdash \varphi(SSS...0)$ for some numeral. This of course is not available by default setting for we know that the theory $ZFC$+ $c \in \omega$ + $c \neq 0$ + $c \neq S0$ +... is consistent if $ZFC$ is consistent. Thus, that magical property seems like an unreasonably strong assumption.</p>
<p>To make my question very precise, what I want is some assumption $\psi$ so that for some class of formulas, whenever I have $ZFC \vdash ZFC \vdash \varphi$, then $ZFC + \psi \vdash \varphi$. For arithmetic sentences, existence of an $\omega$-model is sufficient.</p>
<p>I agree that $\Sigma^0_1$ soundness should be sufficient for arithmetic sentences if what you mean by $\Sigma^0_1$ soundness is having $ZFC \vdash ZFC \vdash \varphi$ requiring (maybe even as a derivation rule, attached to our system!) $ZFC \vdash \omega \models \phi$, where $\phi$ is the translated version of $\varphi$ into the appropriate language, since I can again go through quantifier by quantifier and prove the sentence itself, that is $ZFC \vdash \varphi$.</p>
<p>However, I see no reason why $\Sigma^0_1$ soundness should be enough for arbitrary sentence $\varphi$. It seems to me that what we need is some structure for which we have the reflection property that formal truth in the structure is provably equivalent to $\varphi$ and that structure being model of all the ZFC-sentences used in the ZFC-proof of $\varphi$.</p>
<p>I believe existence of <a href="http://cantorsattic.info/Reflecting#Inaccessible_reflecting_cardinal" rel="noreferrer">$\Sigma_n$-reflecting cardinals</a> which are inaccessible is more than sufficient for sentences up to $n$ in the Levy hierarchy. By definition of those, we have the equivalence $\varphi \leftrightarrow V_{\kappa} \models \varphi$ and then provability of $\varphi$ in ZFC implies $V_{\kappa} \models \varphi$. However, I am not sure whether we need to go that far.</p>
| Joel David Hamkins | 1,946 | <p>With regard to your sub-question,</p>
<blockquote>
<blockquote>
<p>Now imagine a universe where $\text{Con}(\text{ZFC})$ holds but all the models of $\text{ZFC}$ are $\omega$-nonstandard and believe $\neg \text{Con}(\text{ZFC})$. I do not know whether this scenario is even possible...</p>
</blockquote>
</blockquote>
<p>Indeed this is possible, assuming that
$\text{ZFC}+\text{Con}(\text{ZFC})$ is consistent. The reason is
that by the incompleteness theorem, if this theory is consistent,
then there is a model of
$\text{ZFC}+\text{Con}(\text{ZFC})+\neg\text{Con}(\text{ZFC}+\text{Con}(\text{ZFC}))$.
In this model, we have that $\text{Con}(\text{ZFC})$ holds, but
there is no model of $\text{ZFC}+\text{Con}(\text{ZFC})$, and so
all models of $\text{ZFC}$ in this model satisfy
$\neg\text{Con}(\text{ZFC})$. In particular, all models of
$\text{ZFC}$ in this model are also $\omega$-nonstandard from the
perspective of this model.</p>
|
947,191 | <p>Show that $\sum _{n=1 } ^{\infty } (n \pi + \pi/2)^{-1 } $ diverges.</p>
<p>Both the root test and the ratio test are inconclusive. Can you suggest a series for the comparison test?</p>
<p>Thanks in advance!</p>
| Mathronaut | 53,265 | <p>$\displaystyle\frac{1}{n\cdot \pi+\frac{\pi}{2}}\ge\frac{1}{n\cdot\pi+\pi} =\frac{1}{\pi}\cdot\frac{1}{n+1}$, and since $\displaystyle\sum\frac{1}{n+1}$ diverges thus $$\sum\displaystyle\frac{1}{n\cdot \pi+\frac{\pi}{2}}$$ diverges!</p>
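<p>The comparison also shows up numerically: the partial sums grow like $\frac{1}{\pi}\ln n$, so each decade of $n$ adds roughly $\ln(10)/\pi \approx 0.733$. A small sketch (helper name is mine):</p>

```python
import math

def partial_sum(N):
    # partial sum of 1/(n*pi + pi/2) up to N
    return sum(1.0 / (n * math.pi + math.pi / 2) for n in range(1, N + 1))

s3, s4, s5 = partial_sum(10**3), partial_sum(10**4), partial_sum(10**5)
print(s4 - s3, s5 - s4, math.log(10) / math.pi)
# each decade adds about log(10)/pi, as for a harmonic-type series
```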
|
3,224,455 | <p>I derived the volume of a cone using two approaches and compared the results.</p>
<p>First I integrated a circle of radius <span class="math-container">$r$</span> over the height <span class="math-container">$h$</span> to get the expression: <span class="math-container">$$V_1=\frac{1}{3}\pi r^2 h$$</span></p>
<p>Then I considered a polygonal pyramid of infinite sides.</p>
<p>An n-sided polygon with apothem <span class="math-container">$r$</span> has an area of: <span class="math-container">$$A=nr^2\tan{\frac{180°}{n}}$$</span></p>
<p>Integrating this over the height <span class="math-container">$h$</span> gives the expression for the volume of the n-sided polygonal pyramid as: <span class="math-container">$$V_2=\frac{1}{3}n\tan{\frac{180°}{n}}r^2 h$$</span></p>
<p>Equating <span class="math-container">$V_1$</span> and <span class="math-container">$V_2$</span> implies that: <span class="math-container">$$ \lim_{n \to \infty} \left(n\tan{\frac{180°}{n}}\right) = \pi $$</span></p>
<p>So is it true to say that: <span class="math-container">$$\infty\tan{\frac{180°}{\infty}} = \pi$$</span> </p>
<p>But: <span class="math-container">$$\tan{\frac{180°}{\infty}}=0$$</span></p>
<p>So: <span class="math-container">$$\infty (0)=\pi$$</span></p>
<p>Can anyone shed some light on this surprising result?</p>
| Ethan Bolker | 72,858 | <p>This is not a "result". <span class="math-container">$\infty$</span> is not a number. If your argument were correct you could use it this way:</p>
<p>For all positive integers <span class="math-container">$n$</span>
<span class="math-container">$$
n \times \frac{1}{n} = 1.
$$</span>
Then taking the limit as <span class="math-container">$n \to \infty$</span>
<span class="math-container">$$
\infty \times 0 = 1.
$$</span></p>
|
3,224,455 | <p>I derived the volume of a cone using two approaches and compared the results.</p>
<p>First I integrated a circle of radius <span class="math-container">$r$</span> over the height <span class="math-container">$h$</span> to get the expression: <span class="math-container">$$V_1=\frac{1}{3}\pi r^2 h$$</span></p>
<p>Then I considered a polygonal pyramid of infinite sides.</p>
<p>An n-sided polygon with apothem <span class="math-container">$r$</span> has an area of: <span class="math-container">$$A=nr^2\tan{\frac{180°}{n}}$$</span></p>
<p>Integrating this over the height <span class="math-container">$h$</span> gives the expression for the volume of the n-sided polygonal pyramid as: <span class="math-container">$$V_2=\frac{1}{3}n\tan{\frac{180°}{n}}r^2 h$$</span></p>
<p>Equating <span class="math-container">$V_1$</span> and <span class="math-container">$V_2$</span> implies that: <span class="math-container">$$ \lim_{n \to \infty} \left(n\tan{\frac{180°}{n}}\right) = \pi $$</span></p>
<p>So is it true to say that: <span class="math-container">$$\infty\tan{\frac{180°}{\infty}} = \pi$$</span> </p>
<p>But: <span class="math-container">$$\tan{\frac{180°}{\infty}}=0$$</span></p>
<p>So: <span class="math-container">$$\infty (0)=\pi$$</span></p>
<p>Can anyone shed some light on this surprising result?</p>
| Community | -1 | <p>Generally speaking, <span class="math-container">$\infty\cdot0$</span> has no precise value (not counting that it is a mathematically "illegal" expression).</p>
<p>For more rigor, you can work with limits and write</p>
<p><span class="math-container">$$\infty\cdot0=\lim_{n\to \infty}f(n)g(n)$$</span> where <span class="math-container">$\lim_{n\to \infty}f(n)=\infty$</span> and <span class="math-container">$\lim_{n\to \infty}g(n)=0$</span>.</p>
<p>Then you have the following examples:</p>
<ul>
<li><p><span class="math-container">$f(n)=n$</span> and <span class="math-container">$g(n)=\tan\dfrac1n\implies \infty\cdot0=1,$</span></p></li>
<li><p><span class="math-container">$f(n)=n^2$</span> and <span class="math-container">$g(n)=\tan\dfrac1n\implies \infty\cdot0=\infty,$</span></p></li>
<li><p><span class="math-container">$f(n)=n$</span> and <span class="math-container">$g(n)=\tan\dfrac1{n^2}\implies \infty\cdot0=0,$</span></p></li>
<li><p><span class="math-container">$f(n)=n^3$</span> and <span class="math-container">$g(n)=\dfrac1n-\tan\dfrac1n\implies \infty\cdot0=-\dfrac13$</span></p></li>
<li><p><span class="math-container">$\cdots$</span></p></li>
</ul>
<hr>
<p>In your case, </p>
<p><span class="math-container">$$\tan\frac{180°}{n}=\tan\frac\pi n$$</span> where the second tangent function has its argument expressed in radians. Then for small arguments,</p>
<p><span class="math-container">$$\tan\frac\pi n\approx \frac\pi n$$</span></p>
<p>so that</p>
<p><span class="math-container">$$n\tan\frac\pi n\approx \pi$$</span> and this is exact in the limit.</p>
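<p>The rigorous statement behind the informal "$\infty\cdot 0$" here is $\lim_{n\to\infty} n\tan\frac{\pi}{n} = \pi$, which is easy to check numerically (a quick sketch; the error shrinks like $\pi^3/(3n^2)$):</p>

```python
import math

for n in (10, 100, 10000):
    print(n, n * math.tan(math.pi / n))
# the products approach pi from above; the error shrinks like pi^3 / (3 n^2)
val = 10000 * math.tan(math.pi / 10000)
```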
|
760,032 | <p>Consider the integral
\begin{equation}
I(x)= \frac{1}{\pi} \int^{\pi}_{0} \sin(x\sin t) \,dt
\end{equation}
show that
\begin{equation}
I(x)= \frac{2x}{\pi} +O(x^{3})
\end{equation}
as $x\rightarrow0$.</p>
<p>I have tried the Maclaurin expansion of $I(x)$, but it did not work.
Please help me.</p>
| Jason | 130,776 | <p>Just appeal to the Taylor expansion of $I(x)$ directly; clearly $I(0)=0$. Now,
$$ I'(x) = \frac{1}{\pi}\int_0^\pi \cos(x\sin{t})\sin{t}\,dt. $$
So,
$$ I'(0) = \frac{1}{\pi} \int_0^\pi \sin{t}\,dt = \frac{2}{\pi}. $$
Also,
$$ I''(x) = -\frac{1}{\pi}\int_0^\pi \sin(x\sin{t})\sin^2{t}\,dt. $$
So, $I''(0)=0$.</p>
<p>Therefore,
$$ I(x) = I(0) + I'(0)x + \frac{I''(0)}{2}x^2 + O(x^3) = \frac{2x}{\pi} + O(x^3). $$</p>
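<p>A numerical check of this expansion is easy with a hand-rolled Simpson rule (the function name and grid size below are my choices). The error $I(x)-\frac{2x}{\pi}$ is indeed cubic; in fact $I(x)$ is the Struve function $\mathbf{H}_0(x)$, whose next Taylor term is $-\frac{2x^3}{9\pi}$:</p>

```python
import math

def I(x, n=2000):
    # composite Simpson's rule for (1/pi) * integral of sin(x sin t) on [0, pi]
    h = math.pi / n
    s = 0.0
    for k in range(n + 1):
        w = 1 if k in (0, n) else (4 if k % 2 else 2)
        s += w * math.sin(x * math.sin(k * h))
    return (h / 3) * s / math.pi

x = 0.1
err = I(x) - 2 * x / math.pi
print(err, err / x**3)   # ratio ~ -2/(9*pi) ~ -0.0707, the x^3 coefficient
```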
|
2,512,137 | <blockquote>
<p>A social worker has 77 days to make his visits. He wants to make at least one visit a day, and has 133 visits to make. Is there a period of consecutive days in which he makes a.) 21 b.) 23 visits? Why?</p>
</blockquote>
<p>a.) Set $a_i$ to be the number of visits up to and including day $i$, for $i = 1,\dots, 77$. Then if we combine the set of all $a_i$ with the set $\{ a_1+21,a_2+21,\dots,a_{77}+21 \}$, then we have $77\cdot2=154$ numbers, less than or equal to $133+21 = 154$. Thus we see by the pigeonhole principle that we might have that not any of these 154 numbers to be identical. So there isn't necessarily a period of 21 days.</p>
<p>b.) No, since $23>21$, it follows from a.) that this wouldn't work.</p>
<p>Is this correct thinking? Being somewhat superstitious, I feel that at least one should be correct.. Thanks</p>
| Joffan | 206,402 | <p>Consider the cumulative number of visits $a_i$ after day $i$, taken $\bmod 21$. We can also include $a_0=0$ here. Then by the generalized pigeonhole principle there is some residue class with at least $\lceil 78/21\rceil =4$ values in it. Furthermore, if there are no classes with $5$ or more values then there are at least $78-3\times 21 = 15$ such residue classes which have $4$ values. </p>
<p>Now since $133<7\times 21$, any residue class with $5$ or more values must have $2$ of those values separated by only $21$. For the values $126$ to $133$, $8$ values in all, these can support $4$ residue values without having values separated by only $21$, but we would then have another $7$ cases of $4$ values in a residue class with only $6$ distinct values in range, giving that these must have values separated by only $21$.</p>
<p>For $23$ the argument is similar but simpler. Taking the values $a_i \bmod 23$, we see that there must be some residue classes with $4$ or more values in, and since $133< 6\times 23$ such residue class sets must include values that are only $23$ apart.</p>
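<p>The conclusion (both a run of exactly $21$ visits and one of exactly $23$ always exist) can be sanity-checked by brute force over random schedules. This randomized sketch (helper names and the number of trials are mine) is evidence, not a proof:</p>

```python
import random

def has_window(visits, target):
    # cumulative sums a_0..a_77; some window sums to target iff a_j = a_i + target
    cums = {0}
    total = 0
    for v in visits:
        total += v
        cums.add(total)
    return any(c + target in cums for c in cums)

random.seed(0)
for _ in range(500):
    visits = [1] * 77                 # at least one visit per day
    for _ in range(133 - 77):         # distribute the remaining 56 visits
        visits[random.randrange(77)] += 1
    assert has_window(visits, 21) and has_window(visits, 23)
print("all random schedules contain 21- and 23-visit windows")
```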
|
3,506,316 | <p>I am trying to evaluate this limit:</p>
<p><span class="math-container">$$\lim_{x\to0^{+}}(x-\sin x)^{\frac{1}{\log x}}$$</span></p>
<p>It's a <span class="math-container">$0^0$</span> indeterminate form, and I am unsure how to deal with it. I have a feeling that if I could turn it into a form where L'Hopital's rule is applicable, then I'd have a chance at solving the problem.</p>
<p>Is there any consistent way of turning a <span class="math-container">$0^0$</span> form into a <span class="math-container">$\frac{\infty}{\infty}$</span> or <span class="math-container">$\frac{0}{0}$</span> form?</p>
<p>If not, how do you deal with this kind of limits?</p>
| Ryan Shesler | 585,375 | <p>Try converting into an indeterminate product using exponentiation by <span class="math-container">$e$</span>:</p>
<p><span class="math-container">$$\lim_{x \to 0^+} (x - \sin x)^{\frac{1}{\log x}} = \lim_{x \to 0^+} e^{\ln(x - \sin x)(\frac{1}{\log x})}$$</span> and you can go from here using L'Hopital</p>
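<p>Since <span class="math-container">$x-\sin x \sim x^3/6$</span>, the exponent <span class="math-container">$\ln(x-\sin x)/\ln x$</span> tends to <span class="math-container">$3$</span>, so (taking <span class="math-container">$\log$</span> to mean the natural logarithm) the limit is <span class="math-container">$e^3$</span>. A numeric sketch (my own sample points) showing the slow convergence of the exponent, which behaves like <span class="math-container">$3 - \ln 6/\ln x$</span>:</p>

```python
import math

def exponent(x):
    # log of the expression: ln(x - sin x) / ln x  ->  3 as x -> 0+,
    # since x - sin x ~ x^3 / 6
    return math.log(x - math.sin(x)) / math.log(x)

r1, r2 = exponent(1e-3), exponent(1e-6)
print(r1, r2)       # slowly approaching 3
print(math.exp(3))  # the limit, assuming log means the natural logarithm
```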
|
1,532,275 | <p>The kernel of a monoid homomorphism $f : M \to M'$ is the submonoid $\{m \in M : f(m)=1\}$. (This should not be confused with the kernel pair, which is often also named the kernel.)</p>
<p><em>Question.</em> Which submonoids $N$ of a given monoid $M$ arise as the kernel of a monoid homomorphism? (If necessary, let us assume that $M$ is commutative.)</p>
<p>Here is a necessary condition: If $xy \in N$, then $x \in N \Leftrightarrow y \in N$.</p>
| slader.com | 290,016 | <p>$$\lim_{n\to \infty}\sum_{k=1}^n \frac{1}{ k (k + 1) (k + 2) \cdots (k + m)
}=\lim_{n\to \infty}\dfrac{1}{m}\sum_{k=1}^n \frac{(k+m)-(k)}{ k (k + 1) (k + 2) \cdots (k + m)
}$$</p>
<p>$$=\lim_{n\to \infty}\dfrac{1}{m}\sum_{k=1}^n \left[\frac{1}{ k (k + 1) (k + 2) \cdots (k + m-1)}-\frac{1}{ (k + 1) (k + 2) \cdots (k + m)}\right]$$</p>
<p>Lots of terms telescope and cancel, and you will be left with </p>
<p>$$=\lim_{n\to \infty}\dfrac{1}{m} \left[\frac{1}{ 1 (1 + 1) (1 + 2) \cdots (1 + m-1)
}-\frac{1}{ (n + 1) (n + 2) \cdots (n + m)
}\right] = \dfrac{1}{m\cdot m!}$$</p>
|
1,532,275 | <p>The kernel of a monoid homomorphism $f : M \to M'$ is the submonoid $\{m \in M : f(m)=1\}$. (This should not be confused with the kernel pair, which is often also named the kernel.)</p>
<p><em>Question.</em> Which submonoids $N$ of a given monoid $M$ arise as the kernel of a monoid homomorphism? (If necessary, let us assume that $M$ is commutative.)</p>
<p>Here is a necessary condition: If $xy \in N$, then $x \in N \Leftrightarrow y \in N$.</p>
| Jan Eerland | 226,665 | <p>HINT:</p>
<p>$$\lim_{n\to \infty}\sum_{k=1}^n\frac{1}{k(k+1)(k+2)\cdots(k+m)}=$$
$$\lim_{n\to \infty}\sum_{k=1}^n\frac{1}{\prod_{n=0}^{m} (k+n)}=$$
$$\lim_{n\to \infty}\sum_{k=1}^n\frac{1}{\frac{\Gamma(k+m+1)}{\Gamma(k)}}=$$
$$\lim_{n\to \infty}\sum_{k=1}^n\frac{\Gamma(k)}{\Gamma(k+m+1)}=$$
$$\lim_{n\to \infty}\frac{\frac{1}{\Gamma(m)}-\frac{m\Gamma(1+n)}{\Gamma(1+m+n)}}{m^2}=\frac{\frac{1}{\Gamma(m)}}{m^2}=\frac{1}{m^2\Gamma(m)}=\frac{1}{mm!}$$</p>
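<p>The closed form $\sum_{k\ge1}\frac{1}{k(k+1)\cdots(k+m)}=\frac{1}{m\cdot m!}$ is easy to confirm numerically (a small sketch; the helper name and truncation point are mine, and the telescoping remainder after $N$ terms is $\frac{1}{m(N+1)\cdots(N+m)}$):</p>

```python
import math

def partial(m, N):
    # partial sum of 1/(k(k+1)...(k+m)) for k = 1..N
    total = 0.0
    for k in range(1, N + 1):
        term = 1.0
        for j in range(m + 1):
            term /= (k + j)
        total += term
    return total

for m in (1, 2, 3):
    print(m, partial(m, 2000), 1 / (m * math.factorial(m)))
```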
|
3,244,193 | <p>Here's what I did: <span class="math-container">$$\lim_{n\rightarrow +\infty}(2+3^n)^{\frac{1}{2n}}=\lim_{n\rightarrow +\infty}e^{\frac{1}{2n}\ln(2+3^n)}$$</span>
What should I do next in order to solve it?</p>
| Parcly Taxel | 357,390 | <p>In the natural logarithm, <span class="math-container">$3^n$</span> dominates <span class="math-container">$2$</span>, so we can take it out:
<span class="math-container">$$=\lim_{n\to\infty}e^{\ln(3^n)/(2n)}=\lim_{n\to\infty}e^{(n\ln3)/(2n)}=\lim_{n\to\infty}e^{(\ln3)/2}=\sqrt3$$</span></p>
|
3,244,193 | <p>Here's what I did: <span class="math-container">$$\lim_{n\rightarrow +\infty}(2+3^n)^{\frac{1}{2n}}=\lim_{n\rightarrow +\infty}e^{\frac{1}{2n}\ln(2+3^n)}$$</span>
What should I do next in order to solve it?</p>
| Peter Szilas | 408,605 | <p>Write <span class="math-container">$(2+3^n)^{\frac{1}{2n}}=3^{1/2}\left(\frac{2}{3^n}+1\right)^{\frac{1}{2n}}$</span>.</p>
<p>For <span class="math-container">$n$</span> large enough, <span class="math-container">$2/3^n \lt 1$</span>, so</p>
<p><span class="math-container">$\sqrt3 \lt \sqrt3\left(\frac{2}{3^n}+1\right)^{\frac{1}{2n}} \lt \sqrt3\,(\sqrt2)^{1/n}$</span>.</p>
<p>Squeeze.</p>
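<p>A quick numeric confirmation of the squeeze (a sketch with my own sample values of <span class="math-container">$n$</span>):</p>

```python
import math

for n in (5, 10, 50):
    val = (2 + 3**n) ** (1 / (2 * n))
    print(n, val)
print(math.sqrt(3))   # the limit; val at n = 50 already agrees to ~15 digits
```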
|
4,246,726 | <p>For the system of linear equations <span class="math-container">$Ax = b$</span> with <span class="math-container">$b =\begin{bmatrix}
4\\
6\\
10\\
14
\end{bmatrix}\\
$</span>. The set of solutions is given by- <span class="math-container">$\left\{ x : x = \begin{bmatrix}
0\\
0\\
-2
\end{bmatrix} + c \begin{bmatrix}
0\\
1\\
0\end{bmatrix} + d \begin{bmatrix}
1\\
0\\
1\end{bmatrix} \right\}$</span>.</p>
<p>The question requires to find the matrix <span class="math-container">$A$</span> and dimensions of all four fundamental subspaces of <span class="math-container">$A$</span>. Is there any intuitive way of finding the matrix <span class="math-container">$A$</span>, since the remaining problem becomes straightforward thereafter. Thanks in advance for any help.</p>
| Ben Grossmann | 81,360 | <p>What you have computed is really <span class="math-container">$\frac{\delta \operatorname{vec}(Y^TY)}{\delta \operatorname{vec}(Y)}$</span>; I will assume this is what you're really after. I will also assume that you are using the <a href="https://en.wikipedia.org/wiki/Row-_and_column-major_order" rel="nofollow noreferrer">column-major</a> vectorization operator. To correct your mistake, we have the following:
<span class="math-container">$$
\begin{align}
\delta\operatorname{vec}(Y^TY)
&=\operatorname{vec}(\delta Y^TY) + \operatorname{vec}(Y^T\delta Y)
\\
&= (Y^T \otimes I)\operatorname{vec}(\delta Y^T) + (I \otimes Y^T)\operatorname{vec}(\delta Y)
\\ &=
(Y^T \otimes I)K\operatorname{vec}(\delta Y) + (I \otimes Y^T )\operatorname{vec}(\delta Y)
\\ &=
[(Y^T \otimes I)K + (I \otimes Y^T)]\operatorname{vec}(\delta Y),
\end{align}
$$</span>
where <span class="math-container">$K$</span> is the <a href="https://en.wikipedia.org/wiki/Commutation_matrix" rel="nofollow noreferrer">commutation matrix</a> of the correct size. With that, we find that
<span class="math-container">$$
\frac{\delta \operatorname{vec}(Y^TY)}{\delta \operatorname{vec}(Y)}= (Y^T \otimes I)K + (I \otimes Y^T).
$$</span></p>
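<p>A numerical sanity check of the final identity (my own sketch, not part of the answer; the shapes and the seed are arbitrary, and column-major vectorization is assumed as above):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
Y = rng.standard_normal((m, n))
dY = rng.standard_normal((m, n))  # an arbitrary perturbation

def vec(A):
    # column-major vectorization, as assumed in the answer
    return A.reshape(-1, order="F")

# commutation matrix K with K @ vec(A) = vec(A.T) for A of shape (m, n):
# vec(A)[i + j*m] = A[i, j] and vec(A.T)[j + i*n] = A[i, j]
K = np.zeros((m * n, m * n))
for i in range(m):
    for j in range(n):
        K[j + i * n, i + j * m] = 1.0

lhs = vec(dY.T @ Y + Y.T @ dY)  # vec(dY' Y) + vec(Y' dY)
J = np.kron(Y.T, np.eye(n)) @ K + np.kron(np.eye(n), Y.T)
rhs = J @ vec(dY)
assert np.allclose(lhs, rhs)
```

Since the differential is linear in <span class="math-container">$\delta Y$</span>, the identity holds exactly for any perturbation, not just infinitesimal ones, which is why no finite-difference step is needed.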
|
72,613 | <p>Given a list or string, how do I get a list of all (contiguous) sublists/substrings? The order is not important.</p>
<p>Example for lists:</p>
<pre><code>list = {1, 2, 3};
sublists[list]
(* {{}, {}, {}, {}, {1}, {2}, {3}, {1, 2}, {2, 3}, {1, 2, 3}} *)
</code></pre>
<p>Example for strings:</p>
<pre><code>string = "abc";
substrings[string]
(* {"", "", "", "", "a", "b", "c", "ab", "bc", "abc"} *)
</code></pre>
| Martin Ender | 2,305 | <p><a href="https://en.wikipedia.org/wiki/There%27s_more_than_one_way_to_do_it" rel="nofollow noreferrer">TMTOWTDI</a> applies to both of these problems. Below I present an overview of various approaches I've come across, followed by timing data obtained in 10.4 on Windows 10 (the timing code is available as well, so you can easily rerun the tests on your own machine if you have a different setup). Which solution is best for you depends both on the problem size as well as which ordering of the sublists or substrings you're looking for.</p>
<p>One note up front: several implementations use the 10.0 function <code>Catenate</code>. If you're using an older version, this can be replaced with <code>Join @@</code> which is a bit slower but probably won't change the overall performance ordering of the solutions.</p>
<h2>Sublists</h2>
<ol>
<li><p>These can be generated very concisely with <code>ReplaceList</code> and an appropriate pattern.</p>
<pre><code>sublists1[list_List] := ReplaceList[list, {___, sub___, ___}:>{sub}]
</code></pre>
<p>To omit empty sublists, simply replace <code>sub___</code> by <code>sub__</code>.</p></li>
<li><p>Alternatively, just generate them explicitly from start and end indices, using <code>Table</code>. You'll need to <code>Catenate</code> the result though, or you'll get the sublists grouped by starting index:</p>
<pre><code>sublists2[list_List] := Catenate @ Table[
list[[i ;; j]],
{i, Length@list + 1},
{j, i-1, Length@list}
]
</code></pre>
<p>To omit empty sublists, let <code>j</code> start from <code>i</code> instead of <code>i-1</code>.</p></li>
<li><p>As of 10.1 there is <code>SequenceCases</code> whose <code>Overlaps</code> option can be used to get all the sublists:</p>
<pre><code>sublists3[list_List] := SequenceCases[list, {___}, Overlaps -> All]
</code></pre>
<p>To omit empty sublists, use <code>{__}</code> instead of <code>{___}</code>. <a href="https://mathematica.stackexchange.com/a/111771/2305">Credits for this approach go to RunnyKine</a>.</p></li>
<li><p>Instead of using <code>Table</code> we can also construct the index pairs of each sublist from <code>Subsets</code>. However, this only gives one instance of the empty list, so we'll need to add the others manually:</p>
<pre><code>sublists4[list_List] := With[{len = Length@list},
Join[
ConstantArray[{}, len],
Take[list, #] & /@ Subsets[Range@len, 2]
]
]
</code></pre>
<p>To omit empty sublists, ditch <code>Join</code> and <code>ConstantArray</code> and replace <code>2</code> with <code>{1,2}</code>. <a href="https://mathematica.stackexchange.com/a/72618/2305">Credits for this approach go to Kuba.</a></p></li>
<li><p>As of 10.4 there's even a built-in for this task, but for some reason it also returns only one copy of the empty list:</p>
<pre><code>sublists5[list_List] := ConstantArray[{}, Length@list]~Join~Subsequences[list]
</code></pre>
<p>To omit empty sublists, ditch <code>Join</code> and <code>ConstantArray</code> and give <code>Subsequences</code> an <em>nspec</em> of <code>{1,Infinity}</code>.</p></li>
<li><p>You can also collect the results of using <code>Partition</code> with overlaps and all possible sublist lengths:</p>
<pre><code>sublists6[list_List] := Catenate @ Table[
Partition[list, n, 1],
{n, 0, Length@list}
]
</code></pre>
<p>To omit empty sublists, simply remove the <code>0,</code>.</p></li>
</ol>
<p>When performance is not a concern, the most important distinguishing feature of these is the order of the returned sublists:</p>
<pre><code>list = {1, 2, 3};
sublists1@list (* ReplaceList *)
sublists2@list (* Table *)
sublists3@list (* SequenceCases *)
sublists4@list (* Subsets *)
sublists5@list (* Subsequences *)
sublists6@list (* Partition *)
(* {{}, {1}, {1, 2}, {1, 2, 3}, {}, {2}, {2, 3}, {}, {3}, {}} *)
(* {{}, {1}, {1, 2}, {1, 2, 3}, {}, {2}, {2, 3}, {}, {3}, {}} *)
(* {{1, 2, 3}, {1, 2}, {1}, {}, {2, 3}, {2}, {}, {3}, {}, {}} *)
(* {{}, {}, {}, {}, {1}, {2}, {3}, {1, 2}, {1, 2, 3}, {2, 3}} *)
(* {{}, {}, {}, {}, {1}, {2}, {3}, {1, 2}, {2, 3}, {1, 2, 3}} *)
(* {{}, {}, {}, {}, {1}, {2}, {3}, {1, 2}, {2, 3}, {1, 2, 3}} *)
</code></pre>
<p>Things to note: <code>ReplaceList</code> orders them by starting index, followed by length. <code>SequenceCases</code> does the same but takes length from longest to shortest. <code>Subsequences</code> and <code>Partition</code> have them in order of increasing length first. Somewhat weirdly, <code>Subsets</code> first has the length-0 and length-1 lists, and then lists the remaining ones in the same order as <code>ReplaceList</code>. Note that while the <code>Table</code> approach here uses the same ordering as <code>ReplaceList</code>, it is most easily adapted to a different ordering by changing the limits and order of the iterators.</p>
<p>Finally, timing results. Most notably, <code>SequenceCases</code> is unusably slow for larger lists (so I've omitted it from the longer timing results), but <code>Subsequences</code> also gets much slower than rolling your own implementation:</p>
<pre><code>list = RandomInteger[5, 100];
RepeatedTiming[sublists1[list];] (*ReplaceList*)
RepeatedTiming[sublists2[list];] (*Table*)
RepeatedTiming[sublists3[list];] (*SequenceCases*)
RepeatedTiming[sublists4[list];] (*Subsets*)
RepeatedTiming[sublists5[list];] (*Subsequences*)
RepeatedTiming[sublists6[list];] (*Partition*)
(*
{0.00513, Null}
{0.00452, Null}
{0.018, Null} <-- nope
{0.00378, Null}
{0.00208, Null}
{0.00187, Null}
*)
</code></pre>
<pre><code>list = RandomInteger[5, 1000];
RepeatedTiming[sublists1[list];] (*ReplaceList*)
RepeatedTiming[sublists2[list];] (*Table*)
RepeatedTiming[sublists4[list];] (*Subsets*)
RepeatedTiming[sublists5[list];] (*Subsequences*)
RepeatedTiming[sublists6[list];] (*Partition*)
(*
{2.81, Null}
{1.2, Null}
{1.2, Null}
{2.3, Null}
{2.0, Null}
*)
</code></pre>
<pre><code>list = RandomInteger[5, 1200];
RepeatedTiming[sublists1[list];] (*ReplaceList*)
RepeatedTiming[sublists2[list];] (*Table*)
RepeatedTiming[sublists4[list];] (*Subsets*)
RepeatedTiming[sublists5[list];] (*Subsequences*)
RepeatedTiming[sublists6[list];] (*Partition*)
(*
{4.6, Null}
{2.0, Null}
{2.0, Null}
{5.1, Null}
{6.0, Null}
*)
</code></pre>
<p>In summary, the <code>Table</code> solution is the overall winner in terms of flexibility and performance, but it's good to have some other options for a terse quick-and-dirty solution when you can afford to prioritise readability.</p>
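<p>For readers coming from outside Mathematica, the index logic of the <code>Table</code> solution translates directly into other languages; here is a Python sketch of mine (not part of the original answer), using half-open slice indices in place of the 1-based <code>i ;; j</code> spans:</p>

```python
def sublists(lst):
    """All contiguous sublists, including one empty list per split point,
    ordered by starting index then length (mirroring the Table solution)."""
    n = len(lst)
    return [lst[i:j] for i in range(n + 1) for j in range(i, n + 1)]

assert sublists([1, 2, 3]) == [
    [], [1], [1, 2], [1, 2, 3],
    [], [2], [2, 3],
    [], [3],
    [],
]
```

To omit the empty sublists, let <code>j</code> start from <code>i + 1</code> instead of <code>i</code>.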
<h2>Substrings</h2>
<p>There are several more possibilities for strings (which usually map to an approach above), because you can usually either use specific string functions or convert the string to a list of characters to reduce it to one of the above implementations. (And unfortunately, it's not always the case that the string-based solution is faster.)</p>
<ol>
<li><p>Adapt the above <code>ReplaceList</code> approach by applying <code>Characters</code> to the string first, and then replace <code>{sub}</code> by <code>StringJoin[sub]</code>:</p>
<pre><code>substrings1[string_String] := ReplaceList[
Characters @ string,
{___, sub___, ___} :> StringJoin[sub]
]
</code></pre>
<p>To omit empty strings, use <code>sub__</code>.</p></li>
<li><p>Write a similar version with <code>StringReplaceList</code> and <code>StringExpression</code> for pattern. The catch is to anchor the <code>StringExpression</code>, otherwise you'll get horrible amounts of duplicates.</p>
<pre><code>substrings2[string_String] := StringReplaceList[
string,
StartOfString ~~ ___ ~~ sub___ ~~ ___ ~~ EndOfString :> sub
]
</code></pre>
<p>To omit empty strings, use <code>sub__</code>.</p></li>
<li><p>The <code>StringReplaceList</code> version can be written more compactly with a <code>RegularExpression</code>:</p>
<pre><code>substrings3[string_String] := StringReplaceList[
string,
RegularExpression["^.*(.*).*$"] :> "$1"
]
</code></pre>
<p>To omit empty strings, use the regex <code>"^.*(.+).*$"</code>.</p></li>
<li><p>Adapt the above <code>Table</code> approach, similar to 1, by obtaining the characters first:</p>
<pre><code>substrings4[string_String] := Module[{chars = Characters@string},
Join @@ Table[
StringJoin @@ chars[[i ;; j]],
{i, Length@chars + 1},
{j, i - 1, Length@chars}
]
]
</code></pre>
<p>To omit empty strings, let <code>j</code> start from <code>i</code>.</p></li>
<li><p>Write a similar version which uses string manipulation functions:</p>
<pre><code>substrings5[string_String] :=
Join @@ Table[
StringTake[string, {i, j}],
{i, StringLength@string + 1},
{j, i - 1, StringLength@string}
]
</code></pre>
<p>To omit empty strings, let <code>j</code> start from <code>i</code>.</p></li>
<li><p>Adapt the above <code>SequenceCases</code> approach using <code>StringCases</code>.</p>
<pre><code>substrings6[string_String] := StringCases[string, ___, Overlaps -> All]
</code></pre>
<p>To omit empty strings, use <code>__</code>. <a href="https://mathematica.stackexchange.com/a/72630/2305">Credits for this approach go to SquareOne.</a></p></li>
<li><p>Adapt Kuba's <code>Subsets</code> solution:</p>
<pre><code>substrings7[string_String] := With[{len = StringLength@string},
Join[
ConstantArray["", len],
StringTake[string, #] & /@ Subsets[Range@len, 2]
]
]
</code></pre>
<p>To omit empty strings, ditch <code>Join</code> and <code>ConstantArray</code> and replace <code>2</code> with <code>{1,2}</code>.</p></li>
<li><p>Adapt the <code>Partition</code> solution using <code>StringPartition</code>, which was added in 10.1 and updated in 10.4:</p>
<pre><code>substrings8[string_String] := Catenate@Table[
StringPartition[string, n, 1],
{n, 0, StringLength@string}
]
</code></pre>
<p>To omit empty strings, remove the <code>0,</code>.</p></li>
</ol>
<p>There is no string counterpart to <code>Subsequences</code> at this point.</p>
<p>Let's look at the order of the results:</p>
<pre><code>string = "abc";
substrings1[string](*ReplaceList*)
substrings2[string](*StringReplaceList + StringPattern*)
substrings3[string](*StringReplaceList + RegularExpression*)
substrings4[string](*Table + Characters*)
substrings5[string](*Table + StringTake*)
substrings6[string](*StringCases*)
substrings7[string](*Subsets*)
substrings8[string](*StringPartition*)
(*
{"", "a", "ab", "abc", "", "b", "bc", "", "c", ""}
{"", "c", "", "bc", "b", "", "abc", "ab", "a", ""}
{"", "c", "", "bc", "b", "", "abc", "ab", "a", ""}
{"", "a", "ab", "abc", "", "b", "bc", "", "c", ""}
{"", "a", "ab", "abc", "", "b", "bc", "", "c", ""}
{"abc", "ab", "a", "", "bc", "b", "", "c", "", ""}
{"", "", "", "", "a", "b", "c", "ab", "abc", "bc"}
{"", "", "", "", "a", "b", "c", "ab", "bc", "abc"}
*)
</code></pre>
<p>Interestingly, <code>StringReplaceList</code> yields the opposite order from <code>ReplaceList</code> (which I believe is due to greedy matching of the prefix), which itself orders them by starting index first, length second. <code>StringCases</code>, like <code>SequenceCases</code>, does the same but with decreasing length. <code>Subsets</code> still has its funny order of index pairs, and the <code>StringPartition</code> solution sorts them by length. Again, remember that the <code>Table</code> approaches can easily be adapted to yield almost any order you want.</p>
<p>As for performance, it turns out that the <code>StringReplaceList</code> versions are <em>much</em> slower than all the others. Timing them for a 100-character string:</p>
<pre><code>string = StringJoin @ ConstantArray["a", 100];
Timing[substrings1[string];](*ReplaceList*)
Timing[substrings2[string];](*StringReplaceList + StringPattern*)
Timing[substrings3[string];](*StringReplaceList + RegularExpression*)
Timing[substrings4[string];](*Table + Characters*)
Timing[substrings5[string];](*Table + StringTake*)
Timing[substrings6[string];](*StringCases*)
Timing[substrings7[string];](*Subsets*)
Timing[substrings8[string];](*StringPartition*)
(*
{0., Null}
{3.84375, Null}
{5.28125, Null}
{0.015625, Null}
{0., Null}
{0.015625, Null}
{0., Null}
{0., Null}
*)
</code></pre>
<p>Comparing the others, we find that the string-based <code>Table</code> and <code>Subsets</code> approaches easily outperform the others on large inputs:</p>
<pre><code>string = StringJoin @ ConstantArray["a", 100];
RepeatedTiming[substrings1[string];](*ReplaceList*)
RepeatedTiming[substrings4[string];](*Table + Characters*)
RepeatedTiming[substrings5[string];](*Table + StringTake*)
RepeatedTiming[substrings6[string];](*StringCases*)
RepeatedTiming[substrings7[string];](*Subsets*)
RepeatedTiming[substrings8[string];](*StringPartition*)
(*
{0.0096, Null}
{0.0121, Null}
{0.00421, Null}
{0.00573, Null}
{0.00450, Null}
{0.00612, Null}
*)
</code></pre>
<pre><code>string = StringJoin @ ConstantArray["a", 1000];
RepeatedTiming[substrings1[string];](*ReplaceList*)
RepeatedTiming[substrings4[string];](*Table + Characters*)
RepeatedTiming[substrings5[string];](*Table + StringTake*)
RepeatedTiming[substrings6[string];](*StringCases*)
RepeatedTiming[substrings7[string];](*Subsets*)
RepeatedTiming[substrings8[string];](*StringPartition*)
(*
{5.74, Null}
{4.818, Null}
{1.92, Null}
{4.59, Null}
{2.000, Null}
{2.36, Null}
*)
</code></pre>
|
72,613 | <p>Given a list or string, how do I get a list of all (contiguous) sublists/substrings? The order is not important.</p>
<p>Example for lists:</p>
<pre><code>list = {1, 2, 3};
sublists[list]
(* {{}, {}, {}, {}, {1}, {2}, {3}, {1, 2}, {2, 3}, {1, 2, 3}} *)
</code></pre>
<p>Example for strings:</p>
<pre><code>string = "abc";
substrings[string]
(* {"", "", "", "", "a", "b", "c", "ab", "bc", "abc"} *)
</code></pre>
| Kuba | 5,478 | <pre><code>f[l_List] := Take[l, #] & /@ Subsets[Range[Length[l]], {1, 2}];
f[s_String] := StringJoin @@@ f[Characters[s]]
</code></pre>
<hr>
<pre><code>string = "abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcab" <>
"cabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabca";
a = f[string]; // Timing
</code></pre>
<blockquote>
<pre><code>{ 0.015600, Null}
</code></pre>
</blockquote>
<pre><code>Timing[b = substrings1[string];]
</code></pre>
<blockquote>
<pre><code>{0.015600, Null}
</code></pre>
</blockquote>
<pre><code>Complement[b, a]
</code></pre>
<blockquote>
<pre><code>{}
</code></pre>
</blockquote>
<pre><code>f @ Range @ 10
</code></pre>
<blockquote>
<pre><code>{{1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {9}, {10}, {1, 2}, {1, 2,
3}, {1, 2, 3, 4}, {1, 2, 3, 4, 5}, {1, 2, 3, 4, 5, 6}, {1, 2, 3, 4,
5, 6, 7}, {1, 2, 3, 4, 5, 6, 7, 8}, {1, 2, 3, 4, 5, 6, 7, 8, 9}, {1,
2, 3, 4, 5, 6, 7, 8, 9, 10}, {2, 3}, {2, 3, 4}, {2, 3, 4, 5}, {2,
3, 4, 5, 6}, {2, 3, 4, 5, 6, 7}, {2, 3, 4, 5, 6, 7, 8}, {2, 3, 4, 5,
6, 7, 8, 9}, {2, 3, 4, 5, 6, 7, 8, 9, 10}, {3, 4}, {3, 4, 5}, {3,
4, 5, 6}, {3, 4, 5, 6, 7}, {3, 4, 5, 6, 7, 8}, {3, 4, 5, 6, 7, 8,
9}, {3, 4, 5, 6, 7, 8, 9, 10}, {4, 5}, {4, 5, 6}, {4, 5, 6, 7}, {4,
5, 6, 7, 8}, {4, 5, 6, 7, 8, 9}, {4, 5, 6, 7, 8, 9, 10}, {5, 6}, {5,
6, 7}, {5, 6, 7, 8}, {5, 6, 7, 8, 9}, {5, 6, 7, 8, 9, 10}, {6,
7}, {6, 7, 8}, {6, 7, 8, 9}, {6, 7, 8, 9, 10}, {7, 8}, {7, 8,
9}, {7, 8, 9, 10}, {8, 9}, {8, 9, 10}, {9, 10}}
</code></pre>
</blockquote>
|
432,811 | <p>I'm trying to solve $$\operatorname{Arg}(z-2) - \operatorname{Arg}(z+2) = \frac{\pi}{6}$$ for $z \in \mathbb{C}$.</p>
<p>I know that
$$\operatorname{Arg} z_1 - \operatorname{Arg} z_2 = \operatorname{Arg} \frac{z_1}{z_2},$$
but that's only valid when $\operatorname{Arg} z_1 - \operatorname{Arg} z_2 \in (-\pi,\pi]$, so I'm not sure how to even begin solving this.</p>
<p>I'm not familiar with modular arithmetic so if it is possible to solve this without using it then that would be great! (not that I know whether it is required to solve this in the first place)</p>
<p>Thank you in advance.</p>
| lab bhattacharjee | 33,337 | <p>Using <a href="http://en.wikipedia.org/wiki/Atan2#Definition_and_computation" rel="nofollow">this</a> and <a href="http://en.wikipedia.org/wiki/Argument_%28complex_analysis%29#Notation" rel="nofollow">this</a>,</p>
<p>if $z=x+iy,$</p>
<p>Case $1:$ If $x>2,\text{Arg}(z-2)=\arctan \frac y{x-2}$ and $\text{Arg}(z+2)=\arctan \frac y{x+2}$</p>
<p>Case $2:$ If $x=2,\text{Arg}(z-2)=\text{sign}(y)\cdot\frac\pi2($ if $y\ne0)$ and $\text{Arg}(z+2)=\arctan \frac y{x+2}$</p>
<p>Case $3:$ If $ -2<x<2,$</p>
<p>$\text{Arg}(z-2)= \begin{cases} \arctan \frac y{x-2}+\pi &\mbox{if } y\ge0 \\
\arctan \frac y{x-2}-\pi & \mbox{if } y<0\end{cases}$ and $\text{Arg}(z+2)=\arctan \frac y{x+2}$</p>
<p>Case $4:$ If $x=-2,$</p>
<p>$\text{Arg}(z-2)= \begin{cases} \arctan \frac y{x-2}+\pi &\mbox{if } y\ge0 \\
\arctan \frac y{x-2}-\pi & \mbox{if } y<0\end{cases}$ and $\text{Arg}(z+2)=\text{sign}(y)\cdot\frac\pi2($ if $y\ne0)$</p>
<p>Case $5:$ If $x<-2,$ </p>
<p>$\text{Arg}(z-2)= \begin{cases} \arctan \frac y{x-2}+\pi &\mbox{if } y\ge0 \\
\arctan \frac y{x-2}-\pi & \mbox{if } y<0\end{cases}$ and $\text{Arg}(z+2)= \begin{cases} \arctan \frac y{x+2}+\pi &\mbox{if } y\ge0 \\
\arctan \frac y{x+2}-\pi & \mbox{if } y<0\end{cases}$</p>
<p>Now can you deal the problem case by case?</p>
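<p>These case formulas can be spot-checked against a library implementation of the principal argument; the following Python sketch (mine, not from the answer; the sample point is arbitrary) verifies Case 3 with <span class="math-container">$y\ge 0$</span>:</p>

```python
import cmath
import math

# Case 3 of the answer (-2 < x < 2, y >= 0): Arg(z-2) = arctan(y/(x-2)) + pi.
# The function name and the sample point are mine, for illustration only.
def arg_z_minus_2_case3(x, y):
    return math.atan(y / (x - 2)) + math.pi

x, y = 1.0, 2.0
z = complex(x, y)
case3 = arg_z_minus_2_case3(x, y)
assert math.isclose(case3, cmath.phase(z - 2))
# Arg(z+2) needs no branch correction here, since x + 2 > 0:
assert math.isclose(math.atan(y / (x + 2)), cmath.phase(z + 2))
```

The <span class="math-container">$\pm\pi$</span> corrections are exactly what <code>atan2</code> does internally when the real part is negative.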
|
29,766 | <p>I'm looking for a news site for Mathematics which particularly covers recently solved mathematical problems together with the unsolved ones. Is there a good site MO users can suggest me or is my only bet just to google for them?</p>
| Willie Wong | 3,948 | <p>As a counter-point to my somewhat flippant previous answer (which only really applies if one is a specialist in the field), if you are looking at a field in which you are not as much a specialist in, I suggest reading the articles from the <a href="http://www.ams.org/publications/journals/journalsframework/aboutbull" rel="nofollow noreferrer">Bulletin of the AMS</a>. The articles are designed to be fairly up-to-date and expository in nature, and often gives the state of the art in their reviews. </p>
<p>Of course, a similar caveat as that to Helge's answer applies: the "news" may be several months out of date. But considering the glacial pace at which a lot of mathematical refereeing takes place, I think it is quite okay. </p>
<p>In the spirit of this answer, you may also find <a href="https://mathoverflow.net/questions/15366/which-journals-publish-expository-work">Which journals publish expository work?</a> to be useful.</p>
|
174,165 | <p>I have Maths test tomorrow and was just doing my revision when I came across these two questions. Would anyone please give me a nudge in the right direction?</p>
<p>$1)$ If $x$ is real and $$y=\frac{x^2+4x-17}{2(x-3)},$$ show that $|y-5|\geq2$ </p>
<p>$2)$ If $a>0$, $b>0$, prove that $$\left(a+\frac1b\right)\left(2b+\frac1{2a}\right)\ge\frac92$$</p>
| vanna | 30,573 | <p>Using identities $\sin(x)^2 = \frac{1-\cos(2x)}{2}$ and $\cos(x)^2 = \frac{1+\cos(2x)}{2}$ we get</p>
<p>$$ \left(\sin\left(\frac{n\pi}{2}\right)\right)^2 = \frac{1 - \cos(n\pi)}{2} = \frac{1 - (-1)^n}{2} $$
$$ \left(\cos\left(\frac{n\pi}{2}\right)\right)^2 = \frac{1 + \cos(n\pi)}{2} = \frac{1 + (-1)^n}{2}$$</p>
<p>Thus :</p>
<ul>
<li>$\sin\left(\frac{n\pi}{2}\right) = 0$ if $n$ is even and $\pm 1$ if odd</li>
<li>$\cos\left(\frac{n\pi}{2}\right) = 0$ if $n$ is odd and $\pm 1$ if even</li>
</ul>
<p>To conclude we have to solve $\pm 1$ cases. </p>
<ul>
<li>$\cos$ case : $n$ is even, $n=2p$. Then
$$\cos\left(\frac{n\pi}{2}\right) = \cos\left(p\pi\right) = (-1)^p$$
So the result is $1$ if $p$ is even, $-1$ if odd</li>
<li>$\sin$ case : $n$ is odd, $n=2p+1$. Then
$$\sin\left(\frac{n\pi}{2}\right) =\sin\left(p\pi+\frac{\pi}{2}\right) = \cos\left(p\pi\right) = (-1)^p$$
So $1$ if $p$ is even, $-1$ if odd</li>
</ul>
<p>Finally we get the following general expressions :</p>
<p>$$ \cos\left( \frac{n\pi}{2}\right) = (-1)^{\lfloor \frac{n}{2} \rfloor} \left( \frac{1+(-1)^n}{2} \right) $$
$$ \sin\left( \frac{n\pi}{2}\right) = (-1)^{\lfloor \frac{n}{2} \rfloor} \left( \frac{1-(-1)^n}{2} \right) $$</p>
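<p>The two closed forms can be verified numerically; this small Python check is my addition (the range of <span class="math-container">$n$</span> tested is arbitrary, and <code>n // 2</code> plays the role of the floor):</p>

```python
import math

for n in range(12):
    closed_cos = (-1) ** (n // 2) * (1 + (-1) ** n) / 2
    closed_sin = (-1) ** (n // 2) * (1 - (-1) ** n) / 2
    # abs_tol is needed because math.cos(n*pi/2) is only ~1e-16 at the zeros
    assert math.isclose(closed_cos, math.cos(n * math.pi / 2), abs_tol=1e-12)
    assert math.isclose(closed_sin, math.sin(n * math.pi / 2), abs_tol=1e-12)
```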
|
93,621 | <p>As we know, most of the spectral sequences are doubly graded. However, this "doubly graded" condition is not a part of the formal definition of spectral sequence. Is there any useful triply (quadruply, quintuply, etc.) graded spectral sequences? If not, is there a hope that some meaningful work can be done with this topic?</p>
| Peter May | 14,447 | <p>Well, there is an eponymous spectral sequence in my thesis (still never published
in full, but there is an announcement, stuff about it in Ravenel's book, and papers by
Tangora and others). Quite generally, take a connected graded algebra $A$ over a field $k$, filter it for example by the powers of its augmentation ideal $IA$ (the elements of positive degree). Then
the associated graded algebra $E^0A$ is bigraded. The filtration leads to a spectral sequence
that converges from $Ext_{E^0A}(k,k)$ to $Ext_A(k,k)$. There is a homological grading and a
bigrading from $E^0A$ that give a trigrading.</p>
<p>Incidentally, in the opposite direction, Bockstein spectral sequences of spaces are monograded.</p>
|
93,621 | <p>As we know, most of the spectral sequences are doubly graded. However, this "doubly graded" condition is not a part of the formal definition of spectral sequence. Is there any useful triply (quadruply, quintuply, etc.) graded spectral sequences? If not, is there a hope that some meaningful work can be done with this topic?</p>
| Neil Strickland | 10,366 | <p>As others have said, there are plenty of examples of spectral sequences that have a third grading such that each $d_r$ preserves the grading up to a shift depending only on $r$. This is all fairly straightforward.</p>
<p>However, there are some more interesting questions along the same lines. When Ravenel was trying to disprove the Telescope Conjecture he had certain spectra with a doubly-indexed filtration, say $X_{ij}$. Given a slope $m>0$ one can define a singly-indexed filtration by $F_kX=\bigcup_{i+mj\geq k}X_{ij}$, and using this we obtain a spectral sequence converging to $\pi_*(X)$ (the "localised parametrised Adams spectral sequence"). Ravenel's approach was to analyse how this changes when we vary $m$. For a long time he was asking whether there was some kind of spectral-sequency gadget that incorporated all values of $m$ at the same time. I don't think anyone ever had a satisfactory answer to that.</p>
|
93,621 | <p>As we know, most of the spectral sequences are doubly graded. However, this "doubly graded" condition is not a part of the formal definition of spectral sequence. Is there any useful triply (quadruply, quintuply, etc.) graded spectral sequences? If not, is there a hope that some meaningful work can be done with this topic?</p>
| Liviu Nicolaescu | 20,302 | <p>J. L. Verdier dissertation (written in the 60s), pre-Ravenel's book, covers multiple-graded complexes. It was reprinted "recently"</p>
<blockquote>
<p>Des catégories dérivées des catégories abéliennes. (French. French summary) [On derived categories of abelian categories]
With a preface by Luc Illusie. Edited and with a note by Georges Maltsiniotis.
Astérisque No. 239 (1996), xii+253 pp. (1997).</p>
</blockquote>
|
873,434 | <p>Let's say I want to find the product of $1,2,3, \dots, 10$. Do I need to do $1 \cdot 2 \cdot 3 \cdot \dots \cdot 10$ manually or is there an easier way to do it?</p>
<p>Something like the summation of $1$ to $n$, which gives $\frac{n(n+1)}{2}$.</p>
<p>I tried to search but couldn't find a way to do it directly. </p>
| lhf | 589 | <p>The fact that there is a special notation for the factorial suggests that there is no simpler formula for it other than the definition.</p>
<p>Note that there is no special notation for $\sum_{i=1}^n i$, since it can be written as $n(n+1)/2$. (I'm not sure $\binom{n+1}{2}$ counts as special notation in this context.)</p>
<p>If you need a formula, you could use $n! = \Gamma(n+1)$, but using $n!$ is probably much clearer.</p>
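<p>For the concrete product in the question, both routes agree; a quick check of mine using Python's standard library:</p>

```python
import math

# 1*2*...*10 is just 10!; there is no simpler closed form, but n! = Gamma(n+1):
assert math.factorial(10) == 3628800
assert math.isclose(math.gamma(11), math.factorial(10))
```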
|
2,356,905 | <p>I was researching about the Newton-Raphson method and came across <a href="http://www.sosmath.com/calculus/diff/der07/der07.html" rel="nofollow noreferrer">http://www.sosmath.com/calculus/diff/der07/der07.html</a>. On the third line of the page near the end of the line it told us to consider $2/x_1$:</p>
<blockquote>
<p>Consequently $3/2>\sqrt{2}$. If we now consider $2/x_1=4/3$, its square $16/9$ is of course smaller than $2$, so $2/x_1<\sqrt{2}$.</p>
</blockquote>
<p>Why is that? Why did they divide $2$ by $x_1$ and not divide $1$ by $x_1$?</p>
| Trevor Gunn | 437,127 | <p>If $x > \sqrt{2}$ then $\dfrac1x < \dfrac{1}{\sqrt{2}}$. See the problem? $\dfrac{1}{\sqrt{2}}$ is not $\sqrt{2}$ but $\dfrac{2}{\sqrt{2}}$ is.</p>
|
2,356,905 | <p>I was researching about the Newton-Raphson method and came across <a href="http://www.sosmath.com/calculus/diff/der07/der07.html" rel="nofollow noreferrer">http://www.sosmath.com/calculus/diff/der07/der07.html</a>. On the third line of the page near the end of the line it told us to consider $2/x_1$:</p>
<blockquote>
<p>Consequently $3/2>\sqrt{2}$. If we now consider $2/x_1=4/3$, its square $16/9$ is of course smaller than $2$, so $2/x_1<\sqrt{2}$.</p>
</blockquote>
<p>Why is that? Why did they divide $2$ by $x_1$ and not divide $1$ by $x_1$?</p>
| browngreen | 321,445 | <p>Since $x_1$ is close to $\sqrt2$, taking $2\over x_1$ will get you another number close to $\sqrt2$ but on the other side from $x_1$. This way you can take the average of $x_1$ and the new number to approximate $\sqrt2$.</p>
|
<p>While looking at another question on this site about constructible numbers, I started wondering: if you can take a countable (possibly infinite) number of steps, can you draw an interval of a length corresponding to a computable number?</p>
<p>More strictly: if I have a unit interval, a straightedge, a compass, a finite list of instructions (which can include instructions to repeat sequences of the instructions until an event occurs, instructions to draw lines using my tools, and instructions on labeling points), and the ability to carry out a countably infinite number of actions in a finite time, can I construct an interval that corresponds to a given computable number?</p>
| Ross Millikan | 1,827 | <p>You can certainly define a (possibly infinite) sequence of segments whose total length has as its limit any computable number. You can compute the binary expansion of the number. The integer part is easy, just add up the proper number of $1$'s. Then add on $\frac 1{2^n}$ if the $n^{\text{th}}$ bit of the expansion is $1$. This will give you the correct limit.</p>
|
387,295 | <p>I need to find $$\underset{n \to \infty}{\lim} \underset{x\in [0,1]}{\sup} \left| \frac{x+x^{2}}{1+n+x} \right|.$$ How to show that supremum will be at the point $x=1$?</p>
| Community | -1 | <p>First let us find the supremum of $f_n(x) = \dfrac{x+x^2}{1+n+x}$. We have
$$f_n'(x) = \dfrac{(1+n+x)(1+2x) - (x+x^2)}{(1+n+x)^2} = \dfrac{x(2n+x+2)+n+1}{(1+n+x)^2} > 0 \,\,\,\, \forall x \in [0,1]$$
Hence, $f_n(x)$ is an increasing function on the interval $[0,1]$, so the supremum is attained at $x=1$. We therefore get that
$$\sup_{x \in [0,1]} f_n(x) = \dfrac2{n+2} \implies \lim_{n \to \infty} \sup_{x \in [0,1]} f_n(x) = \lim_{n \to \infty} \dfrac2{n+2} = 0$$</p>
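<p>A quick numerical check of this (my addition; the choice $n=5$ and the grid resolution are arbitrary):</p>

```python
# f_n is increasing on [0, 1], so the sup over a grid containing x = 1
# should be attained at the right endpoint and equal 2/(n+2).
def f(n, x):
    return (x + x * x) / (1 + n + x)

n = 5
grid = [i / 1000 for i in range(1001)]  # includes x = 1.0 exactly
sup = max(f(n, x) for x in grid)
assert sup == f(n, 1.0) == 2 / (n + 2)
```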
|