488,498
<p>Find $dy/dx$ in terms of $x$ and $y$ when $$x^2y^3=7x−3y$$</p> <p>I'm not sure how to start here; some pointers would be nice. </p>
math110
58,742
<p><strong>solution 1:</strong> $$x^2y^3=7x-3y$$ $$\Longrightarrow 2xy^3+x^2\cdot 3y^2\cdot y'=7-3y'$$ $$\dfrac{dy}{dx}=y'=\dfrac{7-2xy^3}{3x^2y^2+3}$$</p> <p><strong>solution 2:</strong></p> <p>let $$F(x,y)=x^2y^3-7x+3y$$ $$F'_{x}(x,y)=2xy^3-7$$ $$F'_{y}(x,y)=3x^2y^2+3$$ so $$\dfrac{dy}{dx}=-\dfrac{F'_{x}(x,y)}{F'_{y}(x,y)}=\dfrac{7-2xy^3}{3x^2y^2+3}$$</p>
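Both solutions give the same formula, and it is easy to sanity-check numerically: solve the curve for $y$ by bisection near an arbitrary point (here $x = 1.3$, a made-up value) and compare a finite difference with the formula. A Python sketch:

```python
def y_of_x(x, lo=-10.0, hi=10.0):
    """Solve F(y) = x^2*y^3 + 3*y - 7*x = 0 for y by bisection;
    F is strictly increasing in y, so the root is unique."""
    F = lambda y: x**2 * y**3 + 3*y - 7*x
    for _ in range(200):
        mid = (lo + hi) / 2
        if F(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x0, h = 1.3, 1e-6
numeric = (y_of_x(x0 + h) - y_of_x(x0 - h)) / (2 * h)  # dy/dx by finite difference
y0 = y_of_x(x0)
formula = (7 - 2*x0*y0**3) / (3*x0**2*y0**2 + 3)       # solution 1's formula
print(numeric, formula)
```

The two values agree to several digits, as expected.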
3,786,797
<p>I have two variables <span class="math-container">$x$</span> and <span class="math-container">$y$</span> which are measured at two time steps <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. Available to me is the percent change from the first timestep to the second, relative to the first i.e. <span class="math-container">$\frac{x_1 - x_0}{x_0}$</span> and <span class="math-container">$\frac{y_1 - y_0}{y_0}$</span>. I would like to recover the ratio <span class="math-container">$\frac{x_1}{y_1}$</span>. I've tried adding 1 = <span class="math-container">$\frac{x_0}{x_0} = \frac{y_0}{y_0}$</span> to the two percent changes and then dividing them but this only gets me to <span class="math-container">$\frac{x_1}{y_1} \cdot \frac{y_0}{x_0}$</span>. It seems to me that my problem has 2 equations and 4 unknowns, even though 2 of the unknowns don't appear in the desired ratio, making it have infinite solutions. Is that correct? Otherwise, is there another manipulation I can make to get my desired ratio?</p>
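The suspicion at the end of the question is right: the two percent changes leave the ratio undetermined. A tiny script (with made-up starting values) exhibits two scenarios sharing the same percent changes but different final ratios:

```python
def final_ratio(x0, y0, px, py):
    """Apply percent changes px, py (0.10 means +10%) and return x1/y1."""
    return (x0 * (1 + px)) / (y0 * (1 + py))

# Same percent changes, different (unknown) starting values:
r_a = final_ratio(10.0, 5.0, 0.10, 0.25)
r_b = final_ratio(3.0, 8.0, 0.10, 0.25)
print(r_a, r_b)  # 1.76 vs 0.33: the ratio is not determined
```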
Graham Kemp
135,106
<p>  The first task is to decide where to <em>place</em> men and women so that at least one woman is placed between any two men (in doing so, decide in which of these six spaces to place the remaining woman: <span class="math-container">$\rm\_mw\_mw\_mw\_mw\_m\_$</span>). <span class="math-container">$$\rm wmwmwmwmwm\\ mwwmwmwmwm\\\ddots\\ mwmwmwmwmw$$</span></p> <p>  Then count the ways to put the five distinct males and five distinct females into each of these arrangements of places.</p> <p>Thus the answer of 86,400 is obtained.</p> <hr /> <blockquote> <p>My idea was to count the number of ways I can seat them all with 2 men being together and subtract <em>(from)</em> <span class="math-container">$10!$</span>.</p> </blockquote> <p><span class="math-container">${_5\mathsf P_2}\cdot 9!$</span> appears to be attempting to count ways to select and arrange a pair of men, then arrange that pair and eight singletons into the same line. This does not consider that more than two pairs might form, or even triples ...</p>
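The count of 86,400 can be brute-checked in a few lines: enumerate the seat patterns with no two men adjacent, then multiply by the ways to assign the five distinct men and five distinct women:

```python
from itertools import combinations
from math import factorial

n_seats, n_men = 10, 5

# Seat patterns (positions of the men) with no two men adjacent;
# the five women occupy the remaining seats.
patterns = [c for c in combinations(range(n_seats), n_men)
            if all(b - a >= 2 for a, b in zip(c, c[1:]))]

# Each pattern can be filled by the distinct men and women independently.
total = len(patterns) * factorial(n_men) * factorial(n_seats - n_men)
print(len(patterns), total)  # 6 86400
```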
2,592,581
<p>Let's define the following series: $$S_n =\sum_{i=1}^{\infty} \frac 1 {i(i+1)(i+2)\cdots(i+n)}.$$ I know that $S_0$ does not converge, so let's suppose $n \in \mathbb{N}$ and define $S_n$ for some $n$. We have $S_1=1$, $S_2=\frac1 4$, $S_3=\frac 1 {18}$, $S_4=\frac 1{96}$, $S_5=\frac 1{600}$, etc.<br/><br/> The numerator in all results is $1$.<br/>The pattern in the denominators is $[1,4,18,96,600,\ldots]$, which <a href="https://oeis.org/A001563" rel="nofollow noreferrer">can be found here</a> and is equal to $n\cdot n!$. Finally, I want to prove the general equality:<br/><br/> $\sum_{i=1}^{\infty} \frac 1 {i(i+1)(i+2)\cdots(i+n)}=\frac 1 {n\cdot n!}$</p>
Masacroso
173,262
<p>I will add another answer with a different approach (less elementary but also interesting). From <a href="https://www.cs.purdue.edu/homes/dgleich/publications/Gleich%202005%20-%20finite%20calculus.pdf" rel="nofollow noreferrer">finite calculus theory</a> your series can also be stated as <span class="math-container">$$\sum_{k=1}^\infty\frac1{k^\overline n}=\frac1{1^\overline n}+\sum_{k=2}^\infty\frac1{k^\overline n}=\frac1{n!}+\sum\nolimits_2^\infty (k-1)^\underline{-n}\delta k$$</span> where <span class="math-container">$a^\overline c$</span> is a rising factorial and <span class="math-container">$a^\underline c$</span> is a falling factorial. Hence </p> <p><span class="math-container">$$\begin{align}\sum_{k=1}^\infty\frac1{k^\overline n}&amp;=\frac1{n!}+\frac{(k-1)^\underline{1-n}}{1-n}\bigg|_{k=2}^{k\to\infty}\\&amp;=\frac1{n!}+0-\frac{1^\underline{1-n}}{1-n}\\&amp;=\frac1{n!}+\frac1{(n-1)2^\overline{n-1}}\\&amp;=\frac1{n!}+\frac1{(n-1)n!}\\ &amp;=\frac1{n!\, n}\end{align}$$</span></p> <p>because <span class="math-container">$\frac{a^\underline m}{m}$</span> is a primitive (in the finite calculus sense) of <span class="math-container">$a^\underline {m-1}$</span>, in the domain where both expressions are well-defined.</p>
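The identity proved above is also easy to check numerically; the partial sums below (cut off at 100,000 terms, an arbitrary choice) agree with $1/(n\cdot n!)$ to several digits:

```python
from math import factorial

def partial_sum(n, terms=100000):
    """Partial sum of sum_{i>=1} 1/(i*(i+1)*...*(i+n))."""
    total = 0.0
    for i in range(1, terms + 1):
        prod = 1.0
        for j in range(n + 1):
            prod *= i + j
        total += 1.0 / prod
    return total

results = {n: (partial_sum(n), 1 / (n * factorial(n))) for n in range(1, 6)}
for n, (approx, exact) in results.items():
    print(n, approx, exact)
```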
33,125
<p>For some fixed $n$ define the quadratic form $$Q(x,y) = x^2 + n y^2.$$</p> <p>I think that if $Q$ represents $m$ in two different ways then $m$ is composite.</p> <p>I can prove this for $n$ prime. I was hoping someone could give me a hint towards proving this result for general $n$? Also would be interested in generalizations if any are known! Thanks a lot.</p>
Bill Dubuque
242
<p>Below is Lucas' classic proof, from his <em>Theorie des nombres</em>, 1891, as described in section 215 of Mathews: <a href="http://books.google.com/books?id=iQ_vAAAAMAAJ&amp;pg=PP1" rel="noreferrer">Theory of Numbers.</a> <img src="https://i.stack.imgur.com/SzPeR.jpg" alt="enter image description here"> <img src="https://i.stack.imgur.com/daeoD.jpg" alt="enter image description here"></p>
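Since the proof above is reproduced as images, here is a small empirical check of Lucas' claim (for $n = 5$, which is prime, so it also falls under the case the asker says they can prove): every value represented by two distinct nonnegative pairs is composite.

```python
from collections import defaultdict

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

n, B = 5, 40  # Q(x, y) = x^2 + 5 y^2; search box of side B
reps = defaultdict(set)
for x in range(B):
    for y in range(B):
        reps[x * x + n * y * y].add((x, y))

# Values with two essentially different representations (distinct
# nonnegative pairs), bounded so the search above is exhaustive:
multi = [m for m, s in reps.items() if len(s) >= 2 and 0 < m <= (B - 1) ** 2]
print(len(multi), all(not is_prime(m) for m in multi))
```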
1,737,835
<p>You have decided to buy candy for the trick-or-treaters. You estimate there will be 200 children coming to your door and plan to give each child three pieces of candy. You have decided to offer Twix and 3 Musketeers. The cost of buying these two candies is<br> $$C= 5T^2 + 2TM + 3M^2 + 800$$ where $T$ is the number of Twix and $M$ is the number of 3 Musketeers. How many of each candy should you get to minimize the cost? </p>
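No answer to this question appears in this excerpt, but assuming the implied constraint $T + M = 3 \cdot 200 = 600$, a brute-force search (a check, not a calculus solution) locates the minimum:

```python
def cost(T, M):
    return 5 * T**2 + 2 * T * M + 3 * M**2 + 800

# Assumed constraint: 200 children, three pieces each, so T + M = 600.
best = min((cost(T, 600 - T), T, 600 - T) for T in range(601))
print(best)  # (840800, 200, 400)
```

Substituting $M = 600 - T$ gives the quadratic $6T^2 - 2400T + 1080800$, whose vertex is at $T = 200$, matching the search.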
Josse van Dobben de Bruyn
246,783
<p>Let $X \subseteq \mathbb{R}^n$ be any compact convex set that is symmetric about the origin and contains an open neighbourhood of $0$. Then we can define the <em><a href="https://en.wikipedia.org/wiki/Minkowski_functional" rel="nofollow noreferrer">Minkowski functional</a></em> $p_X : \mathbb{R}^n \to \mathbb{R}_{\geq 0}$ by $$ p_X(y) = \inf\big\{\lambda \in \mathbb{R}_{&gt; 0} \: : \: \lambda^{-1}y\in X\big\}. $$ One easily shows that $p_X$ is a well-defined norm on $\mathbb{R}^n$ and that $X$ is precisely the closed unit ball for this norm. (Here you use that any open neighbourhood of $0$ is <a href="https://en.wikipedia.org/wiki/Absorbing_set" rel="nofollow noreferrer">absorbing</a>, so that there always exists some $\lambda &gt; 0$ such that $\lambda^{-1}y \in X$ holds.)</p> <p>Conversely, let $\lVert\:\cdot\:\rVert$ be any norm on $\mathbb{R}^n$, then one can prove that the closed unit ball with respect to this norm is compact, convex and symmetric about the origin and contains an open neighbourhood of $0$. (Here you need that all norms on $\mathbb{R}^n$ are equivalent, i.e. induce the same topology.) This gives us a bijective correspondence between <a href="https://en.wikipedia.org/wiki/Convex_body" rel="nofollow noreferrer">centrally symmetric bodies</a> in $\mathbb{R}^n$ and norms on $\mathbb{R}^n$.</p> <hr> <p>To answer your question: basically, any form is possible, as long as there is no obvious reason why it shouldn't be. 
Specifically, any polytope can be realised as (a translation of) the closed unit ball of some norm if and only if it meets the following criteria:</p> <ul> <li>It is symmetric about its center;</li> <li>It is convex;</li> <li>It is full-dimensional (not contained in some affine hyperplane);</li> <li>It is bounded.</li> </ul> <p>Here I implicitly use the following properties that are intuitively clear but nevertheless require some proof.</p> <blockquote> <p><strong>Proposition 1.</strong> A convex set $X \subseteq \mathbb{R}^n$ has empty interior if and only if it is contained in some affine hyperplane.</p> </blockquote> <p>A proof of this proposition can be found on <a href="https://math.stackexchange.com/q/1590133/246783">Mathematics Stack Exchange</a> and <a href="https://rkganti.wordpress.com/2012/02/01/topology-of-convex-sets/" rel="nofollow noreferrer">elsewhere on the internet</a>.</p> <blockquote> <p><strong>Proposition 2.</strong> Let $X \subseteq \mathbb{R}^n$ be a set that meets all four of the above criteria. Then $X$ contains an open neighbourhood of its center.</p> <p><em>Sketch of proof.</em> For this we use that <a href="https://math.stackexchange.com/q/471592/246783">the interior of a convex set is again convex</a>. Assume without loss of generality that the center of $X$ is the origin (translate if necessary). Since $X$ contains an open neighbourhood of some $x\in X$, by symmetry it also contains an open neighbourhood of $-x$ (we can reflect the entire open neighbourhood in the origin). Thus, $x$ and $-x$ are interior points. Since $\text{Int}(X)$ is convex, it follows that $0\in\text{Int}(X)$ holds as well.$\hspace{2mm}\blacksquare$</p> </blockquote> <p>Finally, I have used the following assumption:</p> <blockquote> <p><strong>Assumption.</strong> Polytopes are already closed to begin with (by most definitions they are).</p> </blockquote>
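A quick numeric illustration of the correspondence, using the square $[-1,1]^2$ as the body (an arbitrary choice): the Minkowski functional recovered by bisection matches the max-norm, whose closed unit ball the square is.

```python
def in_X(pt):
    """Membership in the square X = [-1, 1]^2."""
    return max(abs(pt[0]), abs(pt[1])) <= 1.0

def minkowski(pt, lo=1e-9, hi=1e6):
    """Bisect for the smallest lam > 0 with pt/lam in X (pt nonzero)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if in_X((pt[0] / mid, pt[1] / mid)):
            hi = mid
        else:
            lo = mid
    return hi

p = minkowski((0.3, -2.0))
print(p)  # approximately 2.0 = max(|0.3|, |-2.0|), the max-norm
```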
3,911,610
<p>Given the equation <span class="math-container">$PV=nRT$</span> if we evaluate <span class="math-container">$\frac{\partial{P}}{\partial{V}} * \frac{\partial{V}}{\partial{T}} * \frac{\partial{T}}{\partial{P}}$</span> through taking all three partials and multiplying, we get <span class="math-container">$-1$</span>. Why can we not cancel the fractions and get <span class="math-container">$1$</span> instead?</p>
mechanodroid
144,766
<p>Let <span class="math-container">$\{x_1, \ldots, x_n\}$</span> be a basis in the domain such that <span class="math-container">$\{Ax_1, \ldots, Ax_n\}$</span> is a basis in the codomain.</p> <p>For injectivity, we wish to prove that <span class="math-container">$A$</span> has trivial kernel. Assume <span class="math-container">$Ax = 0$</span> and write <span class="math-container">$x = \sum_{i=1}^n \alpha_ix_i$</span>. We have <span class="math-container">$$0 = Ax = A\left(\sum_{i=1}^n \alpha_ix_i\right) = \sum_{i=1}^n \alpha_i Ax_i$$</span> which implies <span class="math-container">$\alpha_1 = \cdots = \alpha_n = 0$</span> since <span class="math-container">$\{Ax_1, \ldots, Ax_n\}$</span> is linearly independent. Hence <span class="math-container">$x = 0$</span>.</p> <p>For surjectivity, any <span class="math-container">$y$</span> in the codomain can be written as a linear combination <span class="math-container">$y = \sum_{i=1}^n \alpha_i Ax_i$</span> since <span class="math-container">$\{Ax_1, \ldots, Ax_n\}$</span> spans the codomain. Then <span class="math-container">$$A\left(\sum_{i=1}^n \alpha_ix_i\right) = \sum_{i=1}^n \alpha_i Ax_i = y$$</span> so <span class="math-container">$y$</span> is in the image of <span class="math-container">$A$</span>.</p> <p>Hence <span class="math-container">$A$</span> is bijective.</p>
3,911,610
<p>Given the equation <span class="math-container">$PV=nRT$</span> if we evaluate <span class="math-container">$\frac{\partial{P}}{\partial{V}} * \frac{\partial{V}}{\partial{T}} * \frac{\partial{T}}{\partial{P}}$</span> through taking all three partials and multiplying, we get <span class="math-container">$-1$</span>. Why can we not cancel the fractions and get <span class="math-container">$1$</span> instead?</p>
Ben Grossmann
81,360
<p>I assume that we consider finite dimensional spaces to keep the notation approachable. That said, the proof can be extended to the case of arbitrary dimension without too much effort.</p> <p>Suppose that <span class="math-container">$T:V \to W$</span>. Let <span class="math-container">$\{v_1,\dots,v_n\}$</span> be a basis of <span class="math-container">$V$</span>. By the information in the question, <span class="math-container">$\{T(v_1),\dots,T(v_n)\}$</span> is a basis of <span class="math-container">$W$</span>. I claim that there is a unique linear transformation <span class="math-container">$S$</span> for which <span class="math-container">$$ S(Tv_j) = v_j, \quad j = 1,\dots, n. $$</span> In particular: because <span class="math-container">$\{T(v_1),\dots,T(v_n)\}$</span> is a basis, there is indeed such a linear transformation, and <span class="math-container">$S$</span> is uniquely specified by this condition.</p> <p>With that, it suffices to note that <span class="math-container">$S \circ T$</span> and <span class="math-container">$T \circ S$</span> are identity maps. Since <span class="math-container">$T$</span> has an inverse, it must be bijective.</p>
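In coordinates, the construction of $S$ is matrix inversion. A minimal $2\times 2$ sketch (entries made up) builds $S$ from the images $Te_j$ and checks that both composites are the identity:

```python
# T sends the standard basis e1, e2 to its columns; the values are
# arbitrary, chosen so that {T e1, T e2} is a basis (det = 1 != 0).
T = [[2.0, 1.0],
     [1.0, 1.0]]

# S is the unique linear map with S(T e_j) = e_j, i.e. the inverse matrix.
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
S = [[ T[1][1] / det, -T[0][1] / det],
     [-T[1][0] / det,  T[0][0] / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(matmul(S, T))  # [[1.0, 0.0], [0.0, 1.0]]
print(matmul(T, S))  # [[1.0, 0.0], [0.0, 1.0]]
```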
3,701,175
<p>If a function <span class="math-container">$h(x)$</span> satisfies the following:</p> <p>there exists a partition <span class="math-container">$ P=\left\{ a_{0},a_{1},...,a_{n}\right\} $</span></p> <p>of the interval $[a,b]$ such that <span class="math-container">$ h $</span> is constant on the segment <span class="math-container">$(a_{k-1},a_{k}) $</span> for every <span class="math-container">$1\leq k\leq n $</span>, then we call <span class="math-container">$h(x) $</span> a step function.</p> <p>Let <span class="math-container">$ f(x) $</span> be an integrable function on the interval $[a,b]$ and let <span class="math-container">$ \varepsilon&gt;0 $</span>. Prove that there exists a step function <span class="math-container">$ h $</span> that satisfies </p> <p><span class="math-container">$ \intop_{a}^{b}|f\left(x\right)-h\left(x\right)|dx&lt;\varepsilon $</span></p> <p>This is actually part of a bigger proof. I'm trying to prove that for any integrable function <span class="math-container">$ f $</span> on an interval <span class="math-container">$[a,b]$</span> and for any <span class="math-container">$\varepsilon&gt;0 $</span> there exists a continuous function <span class="math-container">$g(x) $</span> such that </p> <p><span class="math-container">$ \intop_{a}^{b}|f\left(x\right)-g\left(x\right)|dx&lt;\varepsilon $</span></p> <p>So part 1 of the proof is to prove the claim I mentioned, and part 2 is to prove that for any step function <span class="math-container">$h(x)$</span> on the interval <span class="math-container">$[a,b]$</span> and for any <span class="math-container">$ \varepsilon&gt;0 $</span> there exists a continuous function <span class="math-container">$g(x)$</span> on the interval <span class="math-container">$[a,b]$</span> that satisfies</p> <p><span class="math-container">$ \intop_{a}^{b}|h\left(x\right)-g\left(x\right)|dx&lt;\varepsilon $</span></p> <p>I have already proved that any step function on any interval <span class="math-container">$[a,b] $</span> is integrable. 
I'm not sure how to prove the parts I mentioned. Thanks in advance.</p>
RRL
148,510
<p>If <span class="math-container">$f$</span> is <strong>Riemann integrable</strong>, then for any <span class="math-container">$\epsilon &gt; 0$</span> there exists <span class="math-container">$P: a = x_0 &lt; x_1&lt; \ldots x_n = b$</span>, a partition of <span class="math-container">$[a,b]$</span>, such that</p> <p><span class="math-container">$$\tag{*}0 \leqslant \int_a^b f(x) \, dx - L(P,f) = \int_a^b f(x) \, dx - \sum_{j=1}^n m_j(x_j - x_{j-1}) &lt; \epsilon,$$</span></p> <p>where <span class="math-container">$m_j = \inf \{f(x): x_{j-1} \leqslant x \leqslant x_j\}$</span>. Here, of course, <span class="math-container">$L(P,f)$</span> denotes the lower Darboux sum.</p> <p>Define the function</p> <p><span class="math-container">$$h(x) = \begin{cases}m_j ,&amp; x_{j-1} \leqslant x &lt; x_j \,\, (j = 1, \ldots, n)\\ m_n, &amp; x= x_n \end{cases}$$</span></p> <p>Clearly, <span class="math-container">$h$</span> is a step function since it assumes constant values on disjoint intervals. Furthermore, we have</p> <p><span class="math-container">$$\int_{x_{j-1}}^{x_j} h(x) \, dx = m_j(x_j - x_{j-1})$$</span></p> <p>since the value of <span class="math-container">$h(x)$</span> at <span class="math-container">$x = x_j$</span> is irrelevant in computing the integral.</p> <p>Thus,</p> <p><span class="math-container">$$\int_a^b f(x) \, dx - \sum_{j=1}^n m_j(x_j - x_{j-1}) = \int_a^b f(x) \, dx -\int_a^b h(x) \, dx = \int_a^b (f(x) - h(x)) \, dx $$</span></p> <p>We also have <span class="math-container">$f(x) \geqslant h(x)$</span> for all <span class="math-container">$x \in [a,b]$</span> and it follows that <span class="math-container">$f(x) - h(x) = |f(x) - h(x)|$</span>. Substituting into (*), we get</p> <p><span class="math-container">$$\int_a^b|f(x) - h(x)| \, dx &lt; \epsilon$$</span></p>
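A numeric illustration of this construction (with $f(x) = x^2$ on $[0,1]$, an arbitrary example): taking $h$ to be the infimum of $f$ on each subinterval, the $L^1$ error shrinks as the partition is refined.

```python
def l1_error(n, samples=2000):
    """Midpoint estimate of the integral of |f - h| over [0, 1],
    where f(x) = x^2 and h is the lower step function on a
    uniform n-part partition (h = inf of f on each subinterval)."""
    f = lambda x: x * x
    total, dx = 0.0, 1.0 / samples
    for s in range(samples):
        x = (s + 0.5) * dx
        j = min(int(x * n), n - 1)   # index of the subinterval containing x
        m_j = f(j / n)               # inf of the increasing f there
        total += (f(x) - m_j) * dx   # f >= h, so this is |f - h|
    return total

errs = [l1_error(n) for n in (4, 16, 64)]
print(errs)  # decreasing toward 0 as the partition is refined
```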
3,809,026
<p>I'm trying to solve this problem in the implicit differentiation section of the book I'm going through:</p> <blockquote> <p>The equation that implicitly defines <span class="math-container">$f$</span> can be written as</p> <p><span class="math-container">$y = \dfrac{2 \sin x + \cos y}{3}$</span></p> <p>In this problem we will compute <span class="math-container">$f(\pi/6)$</span>. The same method could be used to compute <span class="math-container">$f(x)$</span> for any value of <span class="math-container">$x$</span>.</p> <p>Let <span class="math-container">$a_1 = \dfrac{2\sin(\pi/6)}{3} = \dfrac{1}{3}$</span></p> <p>and for every positive integer <span class="math-container">$n$</span> let</p> <p><span class="math-container">$a_{n+1} = \dfrac{2\sin(\pi/6) + \cos a_n}{3} = \dfrac{1 + \cos a_n}{3}$</span></p> <p>(a) Prove that for every positive integer <span class="math-container">$n$</span>, <span class="math-container">$|a_n - f(\pi/6)| \leq 1/3^n$</span> (Hint: Use mathematical induction) (b) Prove that <span class="math-container">$\lim_{n \to \infty} a_n = f(\pi/6)$</span></p> </blockquote> <p>Now, I'm not sure how to handle <span class="math-container">$f(\pi/6)$</span> in the base case of the induction proof.</p> <p>Also, does this pattern of iterating <span class="math-container">$a_{n+1}$</span> to compute <span class="math-container">$f(x)$</span> for any value of <span class="math-container">$x$</span> have a name? I would like to read more about it.</p>
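For what it's worth, the scheme in the exercise is known as fixed-point iteration (a good keyword for further reading is the Banach fixed-point or contraction mapping theorem, which is where the $1/3^n$ bound comes from, since the map's derivative satisfies $|\sin a|/3 \le 1/3$). A short run shows the iterates converging:

```python
import math

a = 2 * math.sin(math.pi / 6) / 3  # a_1 = 1/3
history = [a]
for _ in range(25):
    a = (1 + math.cos(a)) / 3      # a_{n+1} = (1 + cos a_n)/3
    history.append(a)

# The limit f(pi/6) solves 3y = 1 + cos y; the residual is tiny.
residual = abs(3 * history[-1] - 1 - math.cos(history[-1]))
print(history[-1], residual)
```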
aangulog
597,826
<p>If the base case doesn't work for the given range, then the statement to prove is false.<br /> For example, if you are asked &quot;Show that for every <span class="math-container">$x \in \mathbb{N}$</span> the inequality <span class="math-container">$0&lt;x-1$</span> holds&quot;, then since the inequality fails for <span class="math-container">$x=1$</span>, the statement is false.</p> <p>It doesn't matter if it works from <span class="math-container">$2$</span> onwards; you have found a counterexample.</p>
309,841
<p>Let $m$ be a positive integer. Prove that the dihedral group $D_{4m+2}$ of order $8m+4$ contains a subgroup of order 4.</p> <p>I'm fairly stumped as to where to start with this one; any input would be helpful. Thanks</p>
Vectornaut
16,063
<p>This is a really insightful question! You're pointing out that, in quantum mechanics, there are two different notions of "tensor product":</p> <ul> <li>The composition of physical systems (a physical notion).</li> <li>The tensor product of Hilbert spaces (a mathematical notion).</li> </ul> <p>You say you can't see how these two notions are equivalent, and you're right! They aren't.</p> <p>In different kinds of physical theories, the composition of physical systems is represented mathematically in different ways. For example:</p> <ul> <li>In statistical mechanics, each type of physical system is described by a <em>measurable space</em> (a space that can carry probability distributions), and the composition of systems is described by the Cartesian product of measurable spaces.</li> <li>In classical mechanics, each type of physical system is described by a <em>symplectic manifold</em> (a space where Hamilton's equations make sense), and composition is described by the <em>symplectic product</em> of symplectic manifolds.</li> </ul> <p>There's a mathematical idealization that perfectly captures the physical idea of composition of systems: the "monoidal tensor product" in a <em>monoidal category</em>. 
The tensor product of vector spaces, the Cartesian product of measurable spaces, and the symplectic product of symplectic manifolds are all examples of monoidal tensor products.</p> <p>To learn more about monoidal categories, and their deep relationship with physics, I recommend Bob Coecke's <a href="http://arxiv.org/abs/0808.1032" rel="nofollow">"Introducing categories to the practicing physicist."</a> You might also want to check out a classic but more background-intensive article by John Baez, called <a href="http://math.ucr.edu/home/baez/quantum/" rel="nofollow">"Quantum Quandaries: A Category-Theoretic Perspective."</a></p> <hr> <p><em>p.s.</em></p> <p>As I've been saying, the use of the tensor product of Hilbert spaces to represent the composition of physical systems is not required—in fact, even the use of Hilbert spaces to represent types of physical systems is not required! So, why do we always use the mathematical machinery of Hilbert spaces, linear maps, and tensor products to build our quantum theories?</p> <p>If you learn enough category theory, you can find a really cool answer in Chris Heunen's thesis, <a href="http://citeseerx.ist.psu.edu/viewdoc/summary;jsessionid=1EF019DE9DB219193FFA9A4A8A166606?doi=10.1.1.189.2198" rel="nofollow">"Categorical quantum models and logics."</a> Heunen shows that the bundle of mathematical machinery I mentioned above (usually called "the category of Hilbert spaces," or <strong>Hilb</strong> for short) is sophisticated enough to build any theory that satisfies certain conditions (see Definitions 3.7.1 and 3.6.1 and Theorem 3.7.8). The conditions look pretty technical, but many of them—maybe all of them!—can be thought of in an intuitive physical way. Roughly speaking, I'm pretty sure that...</p> <ul> <li>"It has a dagger" means your theory has a notion of time-reversal. 
The adjective "dagger" that shows up in the other conditions means "compatible with time reversal."</li> <li>"It is symmetric dagger monoidal" means your theory has a notion of composition of physical systems.</li> <li>"It has finite dagger biproducts" means that if your theory can describe systems of type <code>proton</code> and systems of type <code>neutron</code>, it can also describe systems of type <code>proton or neutron, but I don't know which</code>, and that you can always do an experiment to determine whether a system of this type is a <code>proton</code> or a <code>neutron</code>.</li> <li>"Every dagger mono is a dagger kernel" means that if the type <code>car</code> has a subtype <code>awesome car</code>, there's also a subtype <code>not-awesome car</code>.</li> <li>"It has finite dagger equalisers" means that for any two experiments that can be done on the same type of system, there's a subtype describing systems for which the outcomes of the experiments are always the same.</li> </ul>
449,617
<p>I would like to know how to solve summation expressions in a way that is easy to understand. I am a computer science student analyzing for loops and finding their time complexity.</p> <p>e.g.</p> <p><strong>Code</strong>:</p> <pre><code> for i=1 to n x++ end for </code></pre> <p><strong>Summation</strong>:</p> <pre><code> n ∑ 1 i=1 </code></pre> <p><strong>Solving</strong>:</p> <pre><code> = ∑ [n-1+1] (topLimit - bottomLimit + 1) = n (summation formula said ∑ 1 = 1+1+1+1+ ... + 1 = n) </code></pre> <p>The time complexity of the for loop is: O(n)</p> <hr> <p><strong>Code</strong></p> <pre><code>for(i=0; i&lt;=n; i++) for(j=i; j&lt;=n; j++) x++; </code></pre> <p><strong>Question</strong>: </p> <p>How do you solve:</p> <pre><code> n n ∑ [∑ 1] i=1 j=i </code></pre> <p><strong>My Solution</strong>:</p> <pre><code> n = ∑ [n-i+1] i=1 = not sure how to progress from here (should i do another topLimit - bottomLimit + [n-i+1]?) </code></pre> <p>The problem I am having is simplifying so I can get to, say, $i$, $1/i$, $i^2$, ... i.e. something I can use a summation formula on. I know the answer is supposed to be $n(n+1)/2$.</p>
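The missing step is that $\sum_{i=1}^{n}(n-i+1)$ is just $n + (n-1) + \cdots + 1$ written in reverse, which is the arithmetic series $n(n+1)/2$. A brute-force check with an arbitrary $n$:

```python
n = 50  # arbitrary

# Direct double count of sum_{i=1}^{n} sum_{j=i}^{n} 1 ...
double_sum = sum(1 for i in range(1, n + 1) for j in range(i, n + 1))

# ... versus the closed form n(n+1)/2:
closed_form = n * (n + 1) // 2
print(double_sum, closed_form)  # 1275 1275
```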
MJD
25,554
<p>Hint for #1: Suppose that $G_0$ is the purported graph, which contains any graph $H$ unless $H$ in turn contains $K_\omega$, the complete countable graph. Make a new graph, $G'_0$, by appending a new vertex $v_0$ to $G_0$ and adding edges from $v_0$ to every vertex of $G_0$. Then $G'_0$ does not contain $K_\omega$ (you prove this), so by hypothesis there must be a copy of $G'_0$ strictly inside of $G_0$, which we can call $G'_1$, and $G'_1$ must have an analogous vertex $v_1\ne v_0$.</p> <p><img src="https://i.stack.imgur.com/URFWV.png" alt="diagram of graph"></p> <p>Now you fill in the rest.</p>
63,601
<p>I know that if a ring has a multiplicative identity, then the multiplicative identity must be unique. Are there simple-to-describe examples of rings with two (or more) multiplicative right-identities?</p>
Salvo Tringali
16,537
<p>I was browsing some "What is...?" threads, and bumped in here: Let $n$ be a positive integer, and consider the subring of the <a href="https://en.wikipedia.org/wiki/Opposite_ring" rel="nofollow">opposite</a> of the ring of $n$-by-$n$ matrices over a commutative unital ring $\mathbb A = (A, +, \cdot)$ consisting of those matrices all of whose rows, except at most for the first, are zero; this has $|A|^{n-1}$ right identities, given by those matrices whose first row is any vector of $A^n$ with first element equal to $1_\mathbb{A}$.</p>
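Here is a concrete $2\times 2$ instance of that example over the integers, with the opposite-ring product written out explicitly; every matrix with first row $(1, t)$ acts as a right identity:

```python
def opp_mul(M, N):
    """Product in the OPPOSITE ring: M * N is the ordinary product N.M."""
    rows = range(2)
    return [[sum(N[i][k] * M[k][j] for k in rows) for j in rows]
            for i in rows]

def R(a, b):
    """Element of the subring: first row (a, b), second row zero."""
    return [[a, b], [0, 0]]

# Every E = R(1, t) is a right identity: M * E equals E.M = M ...
for t in range(-3, 4):
    E = R(1, t)
    for M in (R(2, 5), R(-1, 7)):
        assert opp_mul(M, E) == M

# ... but not a left identity in general:
print(opp_mul(R(1, 3), R(2, 5)))  # [[2, 6], [0, 0]], not R(2, 5)
```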
18,564
<p>I want to get the eight points of intersection from the equations <code>2 Abs[x] + Abs[y] == 1</code> and <code>Abs[x] + 2 Abs[y] == 1</code>. To solve these equations, I tried</p> <pre><code>Solve[{2 Abs[x] + Abs[y] == 1, Abs[x] + 2 Abs[y] == 1}, {x, y}] </code></pre> <p>but could only get the four points. I then tried </p> <pre><code>y /. Quiet@Solve[#, y] /. Abs[x_] -&gt; {x, -x} &amp; /@ {2 Abs[x] + Abs[y] == 1, Abs[x] + 2 Abs[y] == 1} p = ({x, y} /. Solve[y == #] &amp; /@ {{1 - 2 x, (1 - x)/2}, {(1 - x)/2, ( 1 + x)/2}, {(1 + x)/2, 1 + 2 x}, {1 + 2 x, -1 - 2 x}, {-1 - 2 x, 1/2 (-1 - x)}, {1/2 (-1 - x), 1/2 (-1 + x)}, {1/2 (-1 + x), -1 + 2 x}, {-1 + 2 x, 1 - 2 x}})~ Flatten~1 </code></pre> <blockquote> <p>{{1/3, 1/3}, {0, 1/2}, {-(1/3), 1/3}, {-(1/2), 0}, {-(1/3), -(1/3)}, {0, -(1/2)}, {1/3, -(1/3)}, {1/2, 0}}</p> </blockquote> <p>It works, but I don't like it. Could you recommend a better method?</p> <p><img src="https://i.stack.imgur.com/0GPLC.jpg" alt="enter image description here"></p>
rm -rf
5
<p>Your equations only have 4 points of intersections, not 8, so <code>Solve</code> is returning the correct result. To get the remaining 4, as you have indicated in the figure, you'll also have to consider the x and y axes. You can manually solve each curve with the appropriate axes (it seems like you don't care for the farthest points of intersection with the axes). The following code will solve for all of them, and only filter those that fall within a certain distance from the center (which seems to be your intent).</p> <pre><code>eqns = {2 Abs[x] + Abs[y] == 1, Abs[x] + 2 Abs[y] == 1, x == 0, y == 0}; pts = Flatten[Solve[#, {x, y}] &amp; /@ Subsets[eqns, {2}], 1] /. Rule[_, x_] :&gt; x; filtPts = Select[pts, 0 &lt; EuclideanDistance[{0, 0}, #] &lt;= 0.5 &amp;]; ContourPlot[{2 Abs[x] + Abs[y] == 1, Abs[x] + 2 Abs[y] == 1}, {x, -1, 1}, {y, -1, 1}, Epilog -&gt; {Red, PointSize@Large, Point@filtPts}] </code></pre> <p><img src="https://i.stack.imgur.com/mzc71.png" alt=""></p>
1,250,322
<p>This is the recurrence relation I am trying to solve: \begin{align} T(n) &amp; = 2 \cdot T \left( \frac{n}{4} \right) + 16, \\ T(1) &amp; = c. \end{align} I broke this down (i.e., solved this recurrence relation) to $ \sqrt{2} * c * n + 32 * \sqrt{2} * n - 32 $, which runs in tight bounds $ \Theta(n) $. Can you guys confirm this? I’ll show more of my work if this answer is incorrect.</p>
committedandroider
170,019
<p>I actually did most of the steps of solving this recurrence relation correctly. If anyone's having a similar issue, here is my mistake. </p> <p>My issue was that I simplified $2^{\frac{\log_2 n}{2}}$ as $2^{\frac12}\cdot 2^{\log_2 n}$ instead of $(2^{\log_2 n})^{\frac12}$. If I had used the exponent rule correctly, I would have gotten $\Theta(n^{\frac12})$.</p>
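A quick check of the corrected growth rate (with the arbitrary choice $T(1) = c = 1$, and the closed form $\sqrt{n}\,(c+16)-16$ derived here for $n$ a power of $4$; the constants differ from those in the question, but the $\Theta(\sqrt n)$ order is the same):

```python
import math
from functools import lru_cache

c = 1.0  # T(1); arbitrary value

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 2*T(n/4) + 16 with T(1) = c, for n a power of 4."""
    if n == 1:
        return c
    return 2 * T(n // 4) + 16

for k in range(1, 10):
    n = 4 ** k
    # exact solution for powers of 4: sqrt(n)*(c + 16) - 16, Theta(sqrt n)
    assert math.isclose(T(n), math.sqrt(n) * (c + 16) - 16)
print(T(4 ** 9))  # 8688.0
```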
1,160,741
<p>Prove that continuous functions on $\mathbb{R}$ with period $1$ can be uniformly approximated by trigonometric polynomials, i.e. functions in the linear span of $\{e^{−2πinx}\}_{n \in \mathbb{Z}}$. Explain why $n \in \mathbb{N}$ is not enough.</p> <p>I think I need to restrict the domain to a compact set and consider the interval $[0,1]$. Since the functions are periodic with period $1$, the approximation on $[0,1]$ can be translated to any point on $\mathbb{R}$. It is easy to show that the linear span of $\{e^{−2πinx}\}_{n \in \mathbb{Z}}$ is an algebra and that this algebra contains the constant functions. </p> <p>How do I show that this algebra is closed under complex conjugation? </p> <p>Why is $n \in \mathbb{N}$ not enough?</p>
Cameron Williams
22,551
<p>The hypotheses of Stone-Weierstrass are that the algebra of functions $\mathcal{A}$ on $X$ is closed under complex conjugation, separates points, and that for every point $x\in X$ there is an $f\in\mathcal{A}$ such that $f(x)\neq 0$.</p> <p>Your algebra satisfies the first property since $e^{-2\pi inx} = \overline{e^{2\pi inx}}$, so it is closed under conjugation.</p> <p>Your algebra of trigonometric polynomials satisfies the last property since for any rational $q$, which we write in lowest terms as $q = \frac{m}{n}$, the function $f(x) = e^{2\pi inx}$ satisfies $f(q) = 1$. Since every irrational $x$ can be approximated by rationals and the complex exponentials are continuous, we can find $f$ such that $|f(x)|&gt;0$ (by a modest adaptation of the above reasoning). (You could pick $f$ to be a constant function if you were so inclined.)</p> <hr> <p>We need only then to see that it separates points.</p> <p>Suppose $x\neq y\in[0,1)$. We wish to find $f$ such that $f(x)\neq f(y)$. We will deal with the case that $x=0$ and $y=1$ after. If $x$ and $y$ are rational, write $x = \frac{m}{n}$ and $y=\frac{m'}{n'}$ in lowest terms. Without loss of generality, assume $n \leq n'$. If $n$ does not divide $n'$, consider $f(z) = e^{-2\pi in'z}$; then $f(y) = 1$ but $f(x)\neq 1$. If $n$ does divide $n'$, consider $f(z) = e^{-2\pi i(n'+1)z}$. Then $f(y) = e^{-2\pi i\frac{m'}{n'}}$ and $f(x) = e^{-2\pi i\frac{m}{n}}$. These are equal only if $\frac{m'}{n'} = \frac{m}{n}+k$ for some integer $k$. However, since $x,y\in[0,1)$ and $x\neq y$, this can't happen, so $f(x)\neq f(y)$.</p> <p>If they are irrational, find rationals that are close to them and repeat the same kind of analysis as above.</p> <p>Suppose that $x=0$ and $y=1$. At these points, every function in your algebra takes the same value. However this is, in fact, not a problem. The reason is that these points are basically equivalent with respect to your functions. 
Every periodic function (with period $1$) is equal at $0$ and $1$ so you really only needed to focus on $[0,1)$ anyway.</p>
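As a numeric companion to the theorem (not part of the proof), one can watch trigonometric polynomials $\sum_{|n|\le N} c_n e^{2\pi i nx}$ uniformly approximate a continuous period-$1$ function; the triangle wave below is an arbitrary example.

```python
import cmath

f = lambda x: min(x, 1 - x)     # continuous, period-1 triangle wave
M = 1024                        # sample grid on [0, 1)
xs = [k / M for k in range(M)]

def coeff(n):
    """Riemann-sum approximation of the Fourier coefficient c_n."""
    return sum(f(x) * cmath.exp(-2j * cmath.pi * n * x) for x in xs) / M

def sup_err(N):
    """Sup-norm error of the degree-N trigonometric polynomial."""
    cs = {n: coeff(n) for n in range(-N, N + 1)}
    partial = lambda x: sum(c * cmath.exp(2j * cmath.pi * n * x)
                            for n, c in cs.items()).real
    return max(abs(f(x) - partial(x)) for x in xs)

errs = [sup_err(N) for N in (1, 3, 9)]
print(errs)  # uniform error shrinks as N grows
```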
1,368,815
<p>Let $U\subset\mathbb{R}^n$ be an open set. I proved that </p> <p>$$ H^0_{DR}(U):=\frac{\text{closed forms}}{\text{exact forms}}=\{f\in C^{\infty}(U):\,f\,\,\text{is locally constant} \} $$</p> <p>I have to show that $\dim H^0_{DR}=\text{number of connected components of}\,\, U$.</p> <p>Here is my incomplete proof.</p> <p>Let's write $U=\bigcup_i U_i$ where $U_i$ are the connected components of $U$.</p> <p>I have to prove that $f$ is constant on each $U_i$. How?</p> <p>Then it is sufficient to consider the set $$ \mathcal{B}=\{\chi_{U_i}\}_i $$ which is a basis for $H^0_{DR}(U)$.</p>
Michael Albanese
39,599
<p>A function $f : U \to \mathbb{R}$ is <em>locally constant</em> if for each $x \in U$, there is an open neighbourhood $V$ of $x$ such that $f|_V$ is constant. </p> <p>Note that for any $y \in \mathbb{R}$, $f^{-1}(y)$ is open, so for any $A\subseteq \mathbb{R}$, $f^{-1}(A) = \bigcup_{y\in A}f^{-1}(y)$ is open. In particular, $f^{-1}(A)$ is open for every open $A\subseteq \mathbb{R}$ so locally constant functions are continuous. </p> <p>By continuity, $f^{-1}(y)$ is closed. </p> <p>Now suppose that $U$ is connected. If $y \in f(U)$, then $f^{-1}(y)$ is non-empty and by the above it is both open and closed. As $U$ is connected, we have $f^{-1}(y) = U$. That is, $f$ is the constant function with value $y$.</p> <p>If $U$ is not connected, let $U_i$ be a connected component of $U$ and then the above argument shows that $f|_{U_i}$ is constant.</p>
962,301
<p>$$(\sqrt 2-\sqrt 1)+(\sqrt 3-\sqrt 2)+(\sqrt 4-\sqrt 3)+(\sqrt 5-\sqrt 4)…$$</p> <p>I have found partial sums equal to natural numbers. The first 3 addends sum to 1. The first 8 sum to 2. The first 15 sum to 3. When the minuend in an addend is the square root of a perfect square, the partial sum is a natural number. So I believe this series to be divergent.</p> <p>Am I right? Have I used correct terminology? How would this be expressed using sigma notation? Is there a proof that this series diverges?</p>
C.S.
95,894
<p>So your series can be written as $$\sum_{n=1}^{\infty} (\sqrt{n+1}-\sqrt{n}) =\sum_{n=1}^{\infty} \frac{1}{\sqrt{n+1}+\sqrt{n}} \geq \frac{1}{2}\sum_{n=1}^{\infty}\frac{1}{\sqrt{n+1}},$$ and since $\sum 1/\sqrt{n+1}$ diverges, the series diverges by the <strong><a href="http://en.wikipedia.org/wiki/Direct_comparison_test" rel="nofollow">Comparison Test</a></strong>.</p>
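As a numerical sanity check (Python, my own addition, not a proof), the partial sums visibly track $\sqrt{N+1}-1$ and grow without bound:

```python
import math

# Partial sums of sum (sqrt(n+1) - sqrt(n)) telescope to sqrt(N+1) - 1,
# so they grow without bound; check the telescoping numerically.
def partial_sum(N):
    return sum(math.sqrt(n + 1) - math.sqrt(n) for n in range(1, N + 1))

for N in [10, 100, 10000]:
    print(N, partial_sum(N), math.sqrt(N + 1) - 1)
```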
52,194
<p>Assuming you have a set of nodes, how do you determine how many connections are needed to connect every node to every other node in the set?</p> <p>Example input and output:</p> <pre><code>In Out &lt;=1 0 2 1 3 3 4 6 5 10 6 15 </code></pre>
Xiang
13,405
<p>Here is what you want.$$\sum_{k=1}^{n-1}k=\frac{n(n-1)}{2}$$</p>
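A quick sketch (Python, my own choice of language) reproducing the table from the question:

```python
def connections(n):
    # Each of the n nodes pairs with n-1 others; each link is counted twice.
    return n * (n - 1) // 2

for n in range(1, 7):
    print(n, connections(n))
# Matches the table: 1 -> 0, 2 -> 1, 3 -> 3, 4 -> 6, 5 -> 10, 6 -> 15
```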
1,399,008
<p>I'm asked to prove that $$\lim_{n \to \infty}\left(\frac{n}{n^2+1}+\frac{n}{n^2+4}+\frac{n}{n^2+9}+\cdots+\frac{n}{n^2+n^2}\right)=\frac{\pi}{4}$$ This looks like it can be solved with Riemann sums, so I proceed:</p> <p>\begin{align*} \lim_{n \to \infty}\left(\frac{n}{n^2+1}+\frac{n}{n^2+4}+\frac{n}{n^2+9}+\cdots+\frac{n}{n^2+n^2}\right)&amp;=\lim_{n \to \infty} \sum_{k=1}^{n}\frac{n}{n^2+k^2}\\ &amp;=\lim_{n \to \infty} \sum_{k=1}^{n}(\frac{1}{n})(\frac{n^2}{n^2+k^2})\\ &amp;=\lim_{n \to \infty} \sum_{k=1}^{n}(\frac{1}{n})(\frac{1}{1+(k/n)^2})\\ &amp;=\lim_{n \to \infty} \sum_{k=1}^{n}f(\frac{k}{n})(\frac{k-(k-1)}{n})\\ &amp;=\int_{0}^{1}\frac{1}{1+x^2}dx=\frac{\pi}{4} \end{align*}</p> <p>where $f(x)=\frac{1}{1+x^2}$. Is this correct, are there any steps where I am not clear? </p>
davidlowryduda
9,754
<p>Yes, what you've written is correct.</p> <p>I think your write-up would be cleaner if you explicitly mentioned the antiderivative $$ \int \frac{1}{1 + x^2} dx = \arctan x,$$ which makes it completely clear where $\frac{\pi}{4}$ comes from at the end.</p>
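If you want to convince yourself numerically as well (Python, my own check), the finite sums do approach $\pi/4$:

```python
import math

def riemann_sum(n):
    # sum_{k=1}^{n} n / (n^2 + k^2), the expression inside the limit
    return sum(n / (n * n + k * k) for k in range(1, n + 1))

for n in [10, 100, 10000]:
    print(n, riemann_sum(n), abs(riemann_sum(n) - math.pi / 4))
```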
13,478
<p>This may be a silly question - but are there interesting results about the invariant: the minimal size of an open affine cover? For example, can it be expressed in a nice way? Maybe under some additional hypotheses?</p>
David E Speyer
297
<p>I think there are some relevant comments in the work of Roth and Vakil, <em><a href="https://arxiv.org/abs/math/0406384" rel="nofollow noreferrer">The affine stratification number and the moduli space of curves</a></em>.</p>
2,839,823
<p>I don't know where to ask, but I'm trying. I just think we cannot do hocus pocus methods.</p> <hr> <p>Solve $3^x+4^x=5^x$</p> <p>Okay, so my friend gave me this equation, and his solution. But I don't believe it holds. Here it is:</p> <p>"Solution: </p> <p>$3^x+4^x=5^x\Leftrightarrow \frac{3^x}{3^x}+\frac{4^x}{3^x}=\frac{5^x}{3^x}\Leftrightarrow 1+\left(\frac{4}{3}\right )^x=\left(\frac{5}{3}\right )^x\Leftrightarrow 1+\left(\frac{4}{3}\right )^{\frac{x}{2}\cdot 2}=\left(\frac{5}{3}\right )^{\frac{x}{2}\cdot 2}\Leftrightarrow \left(\frac{4}{3}\right )^{\frac{x}{2}\cdot 2}-\left(\frac{5}{3}\right )^{\frac{x}{2}\cdot 2}=-1\Leftrightarrow \left(\left(\frac{4}{3}\right)^{\frac{x}{2}}\right )^2-\left(\left(\frac{5}{3}\right )^{\frac{x}{2}}\right)^2=-1$</p> <p>Let $a=\left(\frac{4}{3}\right )^{\frac{x}{2}}$ and let $b=\left(\frac{5}{3}\right )^{\frac{x}{2}}$ then</p> <p>$a^2-b^2=\frac{-1}{3}\cdot 3 \Leftrightarrow (a-b)(a+b)=\frac{-1}{3}\cdot 3$</p> <p>Now $a-b=\frac{-1}{3}$ and $a+b=3$ solving the system of equations, we get $a=\frac{4}{3}$ and $b=\frac{5}{3}$ hence we put back.</p> <p>$a=\left(\frac{4}{3}\right )^{\frac{x}{2}} \Rightarrow \frac{4}{3}=\left(\frac{4}{3}\right )^{\frac{x}{2}} $ and $b=\left(\frac{5}{3}\right )^{\frac{x}{2}} \Rightarrow \frac{5}{3}=\left(\frac{5}{3}\right )^{\frac{x}{2}} $ we see that $x=2$ in both cases, which satisfies the equation."</p>
Alan
175,602
<p>If you're asking whether this is the only possible solution, the logical breakdown I see is at the step where he insists on that particular factorization of $-1$ into that product. As far as I can see, you could do the same with any other two factors of $-1$. Not sure if that will always end up with $x=2$ or not, but it's an area to investigate.</p>
1,576,235
<p>Take, for example: $3x + y + 2z = 6$.</p> <p>Parameterized as: $ui + (6 -3u - 2v)j + vk$ restricted to the first octant</p>
Ross Millikan
1,827
<p>A plane is closed. A half-space is closed. Finite intersections of closed sets are closed. So as a subset of $\Bbb R^3$ it is closed. If your definition of a closed surface is as in <a href="https://en.wikipedia.org/wiki/Surface#Closed_surfaces" rel="nofollow">Wikipedia</a>, a surface that is compact and without boundary, it is neither: it is not compact because it extends to infinity, and it has a boundary.</p>
342,684
<p>It would be useful to me to have a result of the following kind (which I would need to generalize, but this case is already interesting). Let <span class="math-container">$r&lt;n$</span> be positive integers and let <span class="math-container">$\delta&gt;0$</span> be a fixed constant such as 1/100. Does there exist a subspace <span class="math-container">$V$</span> of <span class="math-container">$\mathbb F_2^n$</span> that is a <span class="math-container">$\delta r$</span>-separated <span class="math-container">$r$</span>-net? That is, I would like every vector to have Hamming distance at most <span class="math-container">$r$</span> from some <span class="math-container">$v\in V$</span> and any two distinct vectors in <span class="math-container">$V$</span> to have distance at least <span class="math-container">$\delta r$</span> from each other. </p> <p>This feels like the kind of thing that coding theorists ought to know about, but what comes up when I do a Google search is more to do with the dimension of <span class="math-container">$V$</span> than with whether it is a good net. But perhaps it comes into the "known to be hard" category.</p> <p>I am interested in more or less the full range of <span class="math-container">$r$</span> for given <span class="math-container">$n$</span>: the existence of such a linear code for some specific <span class="math-container">$r$</span> would be interesting but not enough for my eventual purposes.</p>
Jyrki Lahtonen
15,503
<p>As an example I describe what is known about the BCH-codes for the purposes of this question. Just to give an idea of the type of results that are known. I suspect that they are not very useful for the full range of parameters here.</p> <p>Let us specify a natural number <span class="math-container">$m$</span>, and let <span class="math-container">$n=2^m-1$</span>. Let us further specify the parameter <span class="math-container">$t$</span> (=the number of bit errors the code can correct). The BCH-code (more accurately, a narrow sense, primitive BCH-code) with <em>designed minimum distance</em> <span class="math-container">$d_{des}=2t+1$</span> is a vector space <span class="math-container">$V(m,t)\subset \Bbb{F}_2^n$</span> has minimum Hamming distance <span class="math-container">$d_{min}\ge d_{des}$</span> and dimension <span class="math-container">$$k(m,t)=\dim V(m,t)\ge n-mt.$$</span> The other parameter of interest here is the <em>covering radius</em> <span class="math-container">$\rho(m,t)$</span> of <span class="math-container">$V(m,t)$</span>. That is, the smallest integer <span class="math-container">$\rho$</span> such that every point of <span class="math-container">$\Bbb{F}_2^n$</span> is within Hamming distance <span class="math-container">$\rho$</span> of a vector from the subspace <span class="math-container">$V(m,t)$</span>. Quite a bit is known about <span class="math-container">$\rho(m,t)$</span>. This revolves around the solvability of systems of equations over <span class="math-container">$\Bbb{F}_{2^m}$</span>.</p> <ul> <li>In 1987 Tietäväinen proved that <span class="math-container">$\rho(m,t)\le 2t$</span> whenever <span class="math-container">$(2t-1)^{4t+2}\le 2^m$</span>. 
This places a rather high demand on <span class="math-container">$m$</span>, and that may prove to be the downfall of the use of BCH-codes here.</li> <li>In most cases (and also asymptotically) the true covering radius is <span class="math-container">$\rho=2t-1$</span>, a result of Vladuts &amp; Skorobogatov. The length from which on this holds has been brought down by a number of authors since. </li> <li>Usually <span class="math-container">$\rho\ge2t-1$</span> follows from the so called supercode lemma. That is easy to describe. We always have <span class="math-container">$V(m,t)\subseteq V(m,t-1)$</span>. Whenever the inclusion here is strict, and the code <span class="math-container">$V(m,t-1)$</span> contains a vector <span class="math-container">$x$</span> of Hamming weight <span class="math-container">$2t-1$</span> (both of these are true often enough), then <span class="math-container">$x$</span> obviously cannot be at a distance <span class="math-container">$&lt;2t-1$</span> from a vector of <span class="math-container">$V(m,t)$</span>, proving the claim.</li> <li>A bit of searching pointed me to a paper by F. Levy-dit-Vehel and S. Litsyn that appeared in IEEE Transactions on Information Theory, vol. 42(3) in May '96. They described the known results at that time.</li> <li>So for BCH-codes the covering radius is often relatively close to the minimum distance (if it were higher, we could make the code bigger). Meaning that your parameter <span class="math-container">$\delta$</span> can be made close to <span class="math-container">$1$</span>.</li> <li>The catch is that the BCH-codes are known to be "asymptotically bad". From the inequality of the dimension we see that <span class="math-container">$t$</span> is usually bounded to be at most something like <span class="math-container">$n/\log_2n$</span>. 
Also the bounds on <span class="math-container">$\rho$</span> don't apply to very small BCH-codes.</li> </ul> <p>Another family of codes whose covering radii have been studied extensively consists of the Reed-Muller codes. The first order Reed-Muller codes in particular could be useful for your purpose.</p> <blockquote> <p>The main reason you may not have found a lot of information is that for coding theoretical purposes, whenever <span class="math-container">$\rho&gt;d_{min}$</span> the code becomes immediately non-interesting. I guess that's the content of Ilya Bogdanov's answer in different language.</p> </blockquote>
142,007
<p>Assume: $p$ is a prime that satisfies $p \equiv 3 \pmod 4$</p> <p>Show: $x^{2} \equiv -1 \pmod p$ has no solutions $\forall x \in \mathbb{Z}$.</p> <p>I know this problem has something to do with Fermat's Little Theorem, that $a^{p-1} \equiv 1\pmod p$. I tried to do a proof by contradiction, assuming the conclusion and showing some contradiction but just ran into a wall. Any help would be greatly appreciated.</p>
Bill Dubuque
242
<p><strong>Hint</strong> <span class="math-container">$\bmod P\! = 4K\!+\!3\!:\;$</span> <span class="math-container">$\ \color{#c00}{X^{\large 2} \equiv -1} \;\Rightarrow\; 1\equiv X^{\large P-1} \equiv (\color{#c00}{X^{\large 2}})^{\large 2K+1}\equiv (\color{#c00}{-1})^{\large 2K+1} \equiv -1$</span></p> <p><strong>Alternatively</strong> <span class="math-container">$\ X^{\large 4}\equiv 1\equiv X^{\large 4K+2}\Rightarrow\, 1 \equiv X^{\large \,\!(4,\,4K+2)}\equiv X^{\large 2}\equiv -1\,\Rightarrow\, P\mid2\, \Rightarrow\!\Leftarrow$</span></p> <p><strong>Remark</strong> <span class="math-container">$ $</span> The proof is a special case of <a href="https://en.wikipedia.org/wiki/Euler%27s_criterion" rel="nofollow noreferrer">Euler's Criterion</a>.</p> <p>For a converse, and a group-theoretic viewpoint, <a href="https://math.stackexchange.com/a/4878/242">see here.</a></p>
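A brute-force check of the statement for small primes (Python sketch, my own addition, not part of the hint):

```python
# Exhaustively check small primes: x^2 ≡ -1 (mod p) has no solution
# whenever p ≡ 3 (mod 4), while it does have one for p = 5, 13, ...
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def has_sqrt_minus_one(p):
    return any((x * x + 1) % p == 0 for x in range(p))

for p in [q for q in range(3, 200) if is_prime(q) and q % 4 == 3]:
    assert not has_sqrt_minus_one(p)
print("no solutions for any prime p ≡ 3 (mod 4) below 200")
```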
142,007
<p>Assume: $p$ is a prime that satisfies $p \equiv 3 \pmod 4$</p> <p>Show: $x^{2} \equiv -1 \pmod p$ has no solutions $\forall x \in \mathbb{Z}$.</p> <p>I know this problem has something to do with Fermat's Little Theorem, that $a^{p-1} \equiv 1\pmod p$. I tried to do a proof by contradiction, assuming the conclusion and showing some contradiction but just ran into a wall. Any help would be greatly appreciated.</p>
Oliver Kayende
704,766
<p>A solution to the given congruence would give an element of multiplicative order $4$ in the finite field of size $p$. This is impossible: the multiplicative group has order $p-1$, and since $(p-1)/2$ is odd, $p-1$ is not divisible by $4$.</p>
142,007
<p>Assume: $p$ is a prime that satisfies $p \equiv 3 \pmod 4$</p> <p>Show: $x^{2} \equiv -1 \pmod p$ has no solutions $\forall x \in \mathbb{Z}$.</p> <p>I know this problem has something to do with Fermat's Little Theorem, that $a^{p-1} \equiv 1\pmod p$. I tried to do a proof by contradiction, assuming the conclusion and showing some contradiction but just ran into a wall. Any help would be greatly appreciated.</p>
matqkks
30,379
<p>Isn't this easier than using FLT? Consider a solution $x$; then $x$ is even or odd. Substituting this into $x^2 \equiv -1 \pmod p$ leads to a contradiction in both cases.</p>
185,900
<p>What is the <strong>fastest way</strong> to find the smallest positive root of the following transcendental equation:</p> <p><span class="math-container">$$a + b\cdot e^{-0.045 t} = n \sin(t) - m \cos(t)$$</span></p> <pre><code>eq = a + b E^(-0.045 t) == n Sin[t] - m Cos[t]; </code></pre> <p>where <span class="math-container">$a,b,n,m$</span> are some real constants.</p> <p>for instance I tried :</p> <pre><code>eq = 5 E^(-0.045 t) + 0.1 == -0.3 Cos[t] + 0.009 Sin[t]; sol = FindRoot[eq, {t, 1}] {t -&gt; 117.349} </code></pre> <p>There is an answer but it doesn't mean that this is the smallest positive root ))</p> <p>I don't like <code>FindRoot[]</code> because you need starting point, for different initial parameters <span class="math-container">$(a,b,n,m)$</span></p> <p>Is there a way to find the <strong>smallest positive root</strong> of equation for any <span class="math-container">$(a,b,n,m)$</span> (if there exist the solution), without <em>starting points</em>?</p> <p>If No. how to determine automatically starting point for a given parameters?</p> <p>there is a numerical and graphical answers in <em>Wolfram Alpha</em></p> <p><a href="https://i.stack.imgur.com/Y90wg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y90wg.jpg" alt="enter image description here"></a></p>
Michael E2
4,999
<p>You can solve by hand to isolate the root. The following identity is helpful:</p> <pre><code>n Sin[t] - m Cos[t] == Sqrt[m^2 + n^2] Cos[t - ArcTan[-m, n]] </code></pre> <p>Then you can figure out that the earliest to start your search is here:</p> <pre><code>Solve[a + b E^(-45/1000 t) == Sqrt[m^2 + n^2], t, Reals] </code></pre> <p>Then you just have to deal with the different cases (intersects one to three times in the first period where there are intersections):</p> <pre><code>Block[{b = 5, a = 0.1, m = 0.3, n = 0.009},
 (* Assumes b &gt; 0 and a &lt; Sqrt[m^2+n^2] *)
 Module[{tmax, t0, t1, tcrit, t2, y0, y2},
  tmax = Mod[ArcTan[-m, n], 2 Pi];
  t0 = 200/9 Log[b/(-a + Sqrt[m^2 + n^2])];
  t1 = Floor[t0 - tmax, 2 Pi] + tmax;
  tcrit = Mod[ArcSin[Min[(9 b E^(-9 t1/200))/(200 Sqrt[m^2 + n^2]), 1]] +
     0 ArcTan[-m, n], 2 Pi];
  t2 = t1 + tcrit;
  With[{y = Subtract @@ eq},
   y0 = y /. t -&gt; t0;
   y2 = y /. t -&gt; t2];
  Which[
   y0 == 0, {t -&gt; t0},
   y2 == 0, {t -&gt; t2},
   y0*y2 &lt; 0, FindRoot[eq, {t, (t0 + t2)/2, t0, t2}],
   True, FindRoot[eq, {t, (t1 + 3 Pi/2), t1 + Pi, t1 + 2 Pi}]]]]

(* {t -&gt; 72.0486} *)
</code></pre> <p>For <code>{b = 5, a = 0.0775, m = -0.3, n = 0.009}</code>, we get</p> <pre><code>{t -&gt; 69.1481} </code></pre> <p>For <code>{b = 1, a = 0, m = -1, n = 0}</code>, we get</p> <pre><code>{t -&gt; 0} </code></pre> <p>The other measure-zero case is harder to achieve.</p> <p>Hopefully I didn't make any fence-post type errors. :)</p>
1,195,166
<p>For $a&gt;0$, can someone give me some clues to prove this inequality?</p> <p><img src="https://i.stack.imgur.com/WFQA2.png" alt="enter image description here"></p> <p>I found its derivative but I can't express $f(0)$ to use the Mean Value Theorem. Can you at least give me a point to start from?</p>
Mark Bennet
2,906
<p>You should look at it as follows:</p> <p>$$\sqrt {x+\delta x}-\sqrt x =\left(\sqrt {x+\delta x}-\sqrt x\right) \cdot \frac {\sqrt {x+\delta x}+\sqrt x }{\sqrt {x+\delta x}+\sqrt x}=\frac {\delta x}{\sqrt {x+\delta x}+\sqrt x }$$</p> <p>Now the denominator is "large" compared with $\delta x$ and you don't need to get rid of the square root.</p> <p>It looks as though you have done the equivalent of $(a-b)^2=a^2-b^2$, which is not true, and what you need instead is $(a-b)(a+b)=a^2-b^2$, as I have used above.</p>
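As a numerical aside (Python, my own illustration), the conjugate form is also the numerically stable way to evaluate the difference:

```python
import math

def diff_direct(x, dx):
    # direct subtraction sqrt(x+dx) - sqrt(x)
    return math.sqrt(x + dx) - math.sqrt(x)

def diff_conjugate(x, dx):
    # multiply and divide by the conjugate sqrt(x+dx) + sqrt(x)
    return dx / (math.sqrt(x + dx) + math.sqrt(x))

x, dx = 100.0, 1e-6
print(diff_direct(x, dx), diff_conjugate(x, dx))
# both are close to dx / (2*sqrt(x)); the conjugate form avoids the
# cancellation of the direct subtraction when dx is tiny
```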
643,658
<p>Let <strong><em>K</em></strong> be a finite field, $F=K(\alpha)$ a finite simple extension of degree $n$, and $ f \in K[x]$ the minimal polynomial of $\alpha$ over $K$. Let $\frac{f\left( x \right)}{x-\alpha }={{\beta }_{0}}+{{\beta }_{1}}x+\cdots +{{\beta }_{n-1}}{{x}^{n-1}}\in F[x]$ and $\gamma={f}'\left( \alpha \right)$.</p> <p>Prove that the dual basis of $\left\{ 1,\alpha ,\cdots ,{{\alpha }^{n-1}} \right\}$ is $\left\{ {{\beta }_{0}}{{\gamma }^{-1}},{{\beta }_{1}}{{\gamma }^{-1}},\cdots ,{{\beta }_{n-1}}{{\gamma }^{-1}} \right\}$.</p> <p>I met this exercise in "Finite Fields" Lidl &amp; Niederreiter Exercises 2.40, and I do not how to calculate by Definition 2.30. It is</p> <p>Definition 2.30 Let $K$ be a finite field and $F$ a finite extension of $K$. Then two bases $\left\{ {{\alpha }_{1}},{{\alpha }_{2}},\cdots ,{{\alpha }_{m}} \right\}$ and $\left\{ {{\beta }_{1}},{{\beta }_{2}},\cdots ,{{\beta }_{m}} \right\}$ of $ F$ over $K$ are said to be dual bases if for $1\le i,j\le m$ we have $T{{r}_{{F}/{K}\;}}\left( {{\alpha }_{i}}{{\beta }_{j}} \right)=\left\{ \begin{align} &amp; 0\;\;\text{for}\;\;i\neq j, \\ &amp; 1\;\;\text{for}\;\;i=j. \\ \end{align} \right.$</p> <p>I think $\gamma =\underset{x\to \alpha }{\mathop{\lim }}\,\frac{f(x)-f{{(\alpha )}_{=0}}}{x-\alpha }={{\beta }_{0}}+{{\beta }_{1}}\alpha +\cdots {{\beta }_{n-1}}{{\alpha }^{n-1}}$.</p> <p>How can I continue? The lecturer did not teach the "dual bases" section.</p>
Jyrki Lahtonen
11,619
<p>This is a standard fact of algebraic field extensions, and doesn't need the assumption that $K$ is finite.</p> <p>Let $f(x)=\prod_{i=1}^n(x-\alpha_i),$ where $\alpha_1=\alpha$, so the conjugates of $\alpha$ are $\alpha_i, i=1,2,\ldots,n$. Let $0\le r\le n-1$. Consider the polynomial $$ g_r(x)=\sum_{i=1}^n\frac{f(x)}{x-\alpha_i}\cdot\frac{\alpha_i^r}{f'(\alpha_i)}. $$ If $i\neq j$, then $f(x)/(x-\alpha_i)$ evaluated at $\alpha_j$ is obviously zero. OTOH, if $i=j$, then $f(x)/(x-\alpha_i)$ evaluated at $\alpha_j=\alpha_i$ is equal to $f'(\alpha_i)$. Thus we conclude that $g_r(\alpha_i)=\alpha_i^r$ for all $i=1,2,\ldots,n$.</p> <p>But $g_r(x)$ is clearly of degree $\le n-1$, and the polynomial $g_r(x)-x^r$ has $n$ distinct zeros, namely $x=\alpha_i, 1\le i\le n$. This is possible only if $g_r(x)=x^r$.</p> <p>Let us extend the definition of the Galois group action and the trace from $F$ to $F[x]$ by declaring that $$ \sigma(\sum_ic_ix^i)=\sum_i\sigma(c_i)x^i, $$ for all the automorphisms $\sigma$ (if $F/K$ were not Galois, we would use the various embeddings of $F$ into an algebraic closure), and then declare that $$ tr(\sum_i c_ix^i)=\sum_{\sigma}\sigma(\sum_i c_ix^i)=\sum_i tr(c_i)x^i, $$ i.e. we let the Galois group and the trace act on the coefficients. </p> <p>We are nearly done, because we can now conclude that $$ tr\left(\frac{f(x)\alpha^r}{(x-\alpha)f'(\alpha)}\right)=\sum_{i=1}^n\frac{f(x)\alpha_i^r}{(x-\alpha_i)f'(\alpha_i)}=g_r(x)=x^r.\qquad(1) $$ To verify the first equality above let the Galois group act on $f(x)/(x-\alpha)$. On the other hand $$ tr\left(\frac{f(x)\alpha^r}{(x-\alpha)f'(\alpha)}\right)= \sum_{i=0}^{n-1}tr\left(\frac{\beta_i\alpha^r}{f'(\alpha)}\right)x^i.\qquad(2) $$ Equating the coefficients of like powers of $x$ in both $(1)$ and $(2)$ for all $i$ and all $r$ gives you the claim: $$ tr\left(\frac{\beta_i\alpha^r}{f'(\alpha)}\right)=\delta_{ir}. $$</p>
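As a concrete sanity check (my own toy example, not from the book), here is the claim verified in the smallest case $F=\mathbb{F}_4$ over $K=\mathbb{F}_2$, representing elements of $\mathbb{F}_4$ as pairs $(a,b)=a+b\alpha$ with $\alpha^2=\alpha+1$:

```python
# Toy check in F4 = F2(α), α^2 = α + 1, minimal polynomial f(x) = x^2 + x + 1.
# Elements are pairs (a, b) = a + b·α with a, b in {0, 1}.
def mul(u, v):
    a, b = u
    c, d = v
    # (a + bα)(c + dα) = ac + bd + (ad + bc + bd)α, using α² = α + 1
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

def trace(u):
    # Tr(z) = z + z² for F4/F2; the result lies in F2 = {(0,0), (1,0)}
    a, b = u
    z2 = mul(u, u)
    return ((a + z2[0]) % 2, (b + z2[1]) % 2)

one, alpha = (1, 0), (0, 1)
# f(x)/(x - α) = x + (α + 1), so β0 = α + 1, β1 = 1; and f'(α) = 2α + 1 = 1,
# so γ = 1 and the γ^{-1} factors are trivial in this tiny example.
basis = [one, alpha]
dual = [(1, 1), (1, 0)]   # {β0, β1} = {α + 1, 1}
for i, ai in enumerate(basis):
    for j, bj in enumerate(dual):
        print(i, j, trace(mul(ai, bj)))   # expect 1 iff i == j
```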
459,124
<p>Title says it all. I have an itch about series like this that seem to fall into the gray area where the classical proofs (that numbers approximated too quickly by rational partial sums must be transcendental) don't seem to apply. This is such a special case; does anyone know whether the series</p> <p>$$\sum_{i=0}^{\infty} \frac1{k^{2^i}}$$</p> <p>converges to an algebraic or a transcendental number, perhaps with the answer (or the lack of one) depending on the choice of integer $k &gt; 1$?</p>
Theon Alexander
165,460
<p>It follows from Liouville's theorem. The partial sums converge too well.</p> <p><a href="http://www-users.math.umn.edu/~garrett/m/mfms/notes_2013-14/04b_Liouville_approx.pdf" rel="nofollow">http://www-users.math.umn.edu/~garrett/m/mfms/notes_2013-14/04b_Liouville_approx.pdf</a></p> <p>Assume that it is algebraic of degree $n$, and then apply the theorem.</p>
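To see how drastic the approximation is, here is a quick experiment with exact rational arithmetic (Python, my own illustration): the $N$-th partial sum is a rational with common denominator $q=k^{2^N}$, and the tail is smaller than $2/q^2$, which is eventually far better than Liouville's bound allows for any fixed algebraic degree.

```python
from fractions import Fraction

def partial(k, N):
    # N-th partial sum of sum_{i>=0} k^(-2^i), as an exact rational
    return sum(Fraction(1, k ** (2 ** i)) for i in range(N + 1))

k = 2
for N in range(1, 5):
    p_over_q = partial(k, N)
    q = k ** (2 ** N)                     # common denominator of the sum
    tail = partial(k, N + 3) - p_over_q   # close proxy for the true tail
    assert tail < Fraction(2, q ** 2)     # "too good" an approximation
    print(N, q, float(tail))
```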
3,629,790
<p>So I am a bit stuck on applying the limit definition for this problem. This is what I have so far:</p> <p>If <span class="math-container">$\lvert x - c \rvert &lt; \delta$</span> then <span class="math-container">$\lvert f(x) - l \rvert &lt; \epsilon$</span>, that is, <span class="math-container">$$\lvert x + 2 \rvert &lt; \delta$$</span> <span class="math-container">$$\lvert x^2 + x - 2 \rvert &lt; \epsilon$$</span></p> <p>which means <span class="math-container">$\lvert x - 1 \rvert \lvert x + 2 \rvert &lt; \epsilon$</span></p> <p><span class="math-container">$\lvert x + 2 \rvert &lt; {\epsilon\over \lvert x - 1 \rvert}$</span></p> <p>Let <span class="math-container">$\delta = 1$</span>; then <span class="math-container">$\lvert x + 2 \rvert &lt; \delta \le 1$</span>, which means <span class="math-container">$-1 &lt; x + 2 &lt; 1$</span>, so <span class="math-container">$-4 &lt; x - 1 &lt; -2$</span>. Thus the upper bound of <span class="math-container">$x - 1$</span> is <span class="math-container">$-2$</span>.</p> <p>Let <span class="math-container">$$\delta = \min\left(1, {\epsilon\over -2}\right)$$</span></p> <p>But what is confusing me here is: how is <span class="math-container">$\lvert x + 2 \rvert &lt; {\epsilon\over -2}$</span>?</p> <p>Because if <span class="math-container">$\epsilon$</span> is any number more than <span class="math-container">$0$</span>, then <span class="math-container">${\epsilon\over -2}$</span> will be negative and it can't be greater than <span class="math-container">$\lvert x + 2 \rvert$</span>.</p>
John Titor
770,201
<p>If <span class="math-container">$0&lt;\varepsilon&lt;\frac94$</span>, then <span class="math-container">$|f(x)-l|&lt;\varepsilon \Rightarrow |(x^2+x-5)+3|&lt;\varepsilon \Rightarrow -\varepsilon&lt;x^2+x-2&lt;\varepsilon$</span>, i.e. <span class="math-container">$x^2+x-(2-\varepsilon)&gt;0$</span> and <span class="math-container">$x^2+x-(2+\varepsilon)&lt;0$</span>. Solving these quadratics gives <span class="math-container">$\left(x&lt;\frac{-1-\sqrt{1+4(2-\varepsilon)}}2 \text{ or } x&gt;\frac{-1+\sqrt{1+4(2-\varepsilon)}}2\right)$</span> together with <span class="math-container">$\frac{-1-\sqrt{1+4(2+\varepsilon)}}2&lt;x&lt;\frac{-1+\sqrt{1+4(2+\varepsilon)}}2$</span>.</p> <p>By combining the inequalities we obtain two intervals, containing <span class="math-container">$1$</span> and <span class="math-container">$-2$</span> respectively. We are interested in the interval that for every <span class="math-container">$0&lt;\varepsilon&lt;\frac94$</span> contains <span class="math-container">$-2$</span>, and then you are done. You can fill in the details. If <span class="math-container">$\varepsilon&gt;\frac94$</span>, then <span class="math-container">$-\varepsilon&lt;x^2+x-2$</span> holds for every <span class="math-container">$x$</span>, and the other inequality simplifies to an interval that contains both <span class="math-container">$1$</span> and <span class="math-container">$-2$</span>. So you do basically the same work as in the first case, but with only one interval to work with. I hope you found this helpful. I thank Martin Sleziak for the help with the edit.</p>
404,954
<p>I've never seen phrases like "$\sqrt{5}$ people" or "a set with $\pi$ many elements". Are there sets with cardinality, say, $\frac{1}{2}$?</p> <p><strong>Edit:</strong> As Brian M. Scott pointed out, the only real numbers that are cardinalities of sets are the non-negative integers. Could you explain why is it so?</p>
MJD
25,554
<p>I am copying Brian M. Scott's comment here, because I think it answers the question completely and correctly:</p> <blockquote> <p>As the term <em>cardinality</em> is normally used, the only real numbers that are cardinalities of sets are the non-negative integers. The cardinalities of infinite sets, of course, are not natural numbers.</p> </blockquote>
298,121
<p>I was watching a video on functional programming and the speaker was introducing functional programming notation for functions:</p> <p>". . . if <strong>g(a)</strong> is a function <strong>g</strong> with variable <strong>a</strong>, we can write it also as <strong>g a</strong> . . . now, this looks like <strong>g</strong> <em>times</em> <strong>a</strong> and this would still be true if <strong>g</strong> were a linear equation . . ."</p> <p>What does he mean by this? Why is <strong>g(a)</strong> the same as <strong>g</strong> times <strong>a</strong> if <strong>g</strong> is a linear equation? Or am I misinterpreting him? Here's the link: <a href="http://youtu.be/ZhuHCtR3xq8" rel="nofollow">http://youtu.be/ZhuHCtR3xq8</a><a href="http://youtu.be/ZhuHCtR3xq8" rel="nofollow"></a> (Starting at ~12:30)</p>
Omar Antolín-Camarena
1,070
<p>Linear functions are often represented by matrices, in which case function application is the same as multiplication of a matrix by a vector. If $f$ is a linear function, say, from $\mathbb{R}^m$ to $\mathbb{R}^n$, then there is an $n \times m$ matrix $A$ representing the function $f$ so that for any $v$ in $\mathbb{R}^m$, $f(v) = A v$. (For the multiplication, regard $v$ as a column vector with $m$ entries.)</p>
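A tiny illustration (pure Python, my own example; the particular matrix is arbitrary):

```python
# A linear map f from R^2 to R^3 represented by a 3x2 matrix A,
# so that applying f IS multiplying by A: "g a" really is "g times a".
A = [[1, 2],
     [0, 1],
     [3, 0]]

def f(v):
    # matrix-vector product A v
    return [sum(row[i] * v[i] for i in range(len(v))) for row in A]

v = [2, 5]
print(f(v))   # [1*2+2*5, 0*2+1*5, 3*2+0*5] = [12, 5, 6]
# linearity means f(u + w) == f(u) + f(w) and f(c·u) == c·f(u)
```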
412,247
<p>Let $X$ be a topological space. $X$ is connected if there do not exist nonempty open sets $U,V$ with $U\cap V=\emptyset$ and $X=U\cup V$.</p> <p>If you consider $A:=(0,1]\cup(2,3)\subset\mathbb R$, $A$ is not connected.</p> <p>But how can you prove it? Clearly I have to find open sets like the ones above, but how? Thanks!</p>
Brian M. Scott
12,042
<p>Just take $U=(0,1]$ and $V=(2,3)$: those sets are both open in $A$. To see that $U$ is open in $A$, observe that it’s equal to $A\cap(0,2)$, where $(0,2)$ is open in $\Bbb R$.</p>
412,247
<p>Let $X$ be a topological space. $X$ is connected if there do not exist nonempty open sets $U,V$ with $U\cap V=\emptyset$ and $X=U\cup V$.</p> <p>If you consider $A:=(0,1]\cup(2,3)\subset\mathbb R$, $A$ is not connected.</p> <p>But how can you prove it? Clearly I have to find open sets like the ones above, but how? Thanks!</p>
Asaf Karagila
622
<p>Note that in $A$ the interval $(0,1]$ is open. It is open because $(0,1]=(0,1.1)\cap A$, so it is relatively open.</p> <p>Therefore $A$ is the union of two disjoint nonempty open sets, and therefore not connected.</p>
1,179,922
<p>I would like to inquire whether there is a simple way to prove that a function is decreasing or not. For example, how would I prove that the function</p> <p>$$Y = \frac{x^{0.5} - 1}{0.5}$$</p> <p>is decreasing? I am not sure whether the negativity of the 2nd derivative settles this, and I would really like to understand the intuition better.</p>
Emily
31,475
<p>Intuitively, it's easier to go directly from the definition. Note that $\frac{x^{1/2}-1}{1/2} = 2 x^{1/2} - 2$, and that $-2$ is constant. Now, consider $x_2 &gt; x_1$ for any $x_2, x_1 &gt; 0$ and compare $2x_2^{1/2}$ to $2x_1^{1/2}$:</p> <p>$$\frac{2x_2^{1/2}}{2x_1^{1/2}} = \left(\frac{x_2}{x_1}\right)^{1/2}.$$</p> <p>Since $x_2 &gt; x_1$, we have</p> <p>$$\left(\frac{x_2}{x_1}\right)^{1/2} &gt; 1.$$</p> <p>So starting at any point in the domain ($x_1$), if we go any distance to the right (to $x_2$), the function gets bigger. Now, simply show that it gets bigger without bound (in other words, that it is not increasing to a finite limit).</p>
352,501
<p>There are several things that confuse me about this proof, so I was wondering if anybody could clarify them for me.</p> <blockquote> <p><strong>Lemma</strong> Let <span class="math-container">$G$</span> be a group of order <span class="math-container">$30$</span>. Then the <span class="math-container">$5$</span>-Sylow subgroup of <span class="math-container">$G$</span> is normal.</p> </blockquote> <blockquote> <p><strong>Proof</strong> We argue by contradiction. Let <span class="math-container">$P_5$</span> be a <span class="math-container">$5$</span>-Sylow subgroup of <span class="math-container">$G$</span>. Then the number of conjugates of <span class="math-container">$P_5$</span> is congruent to <span class="math-container">$1 \bmod 5$</span> and divides <span class="math-container">$6$</span>. Thus, there must be six conjugates of <span class="math-container">$P_5$</span>. Since the number of conjugates is the index of the normalizer, we see that <span class="math-container">$N_G(P_5) = P_5$</span>.</p> </blockquote> <p>Why does the fact that the order of <span class="math-container">$N_G(P_5)$</span> is 5 mean that it is equal to <span class="math-container">$P_5$</span>?</p> <blockquote> <p>Since the <span class="math-container">$5$</span>-Sylow subgroups of <span class="math-container">$G$</span> have order <span class="math-container">$5$</span>, any two of them intersect in the identity element only. Thus, there are <span class="math-container">$6\cdot4 = 24$</span> elements in <span class="math-container">$G$</span> of order <span class="math-container">$5$</span>. This leaves <span class="math-container">$6$</span> elements whose order does not equal <span class="math-container">$5$</span>. 
We claim now that the <span class="math-container">$3$</span>-Sylow subgroup, <span class="math-container">$P_3$</span>, must be normal in <span class="math-container">$G$</span>.</p> </blockquote> <blockquote> <p>The number of conjugates of <span class="math-container">$P_3$</span> is congruent to <span class="math-container">$1 \bmod 3$</span> and divides <span class="math-container">$10$</span>. Thus, if <span class="math-container">$P_3$</span> is not normal, it must have <span class="math-container">$10$</span> conjugates. But this would give <span class="math-container">$20$</span> elements of order <span class="math-container">$3$</span> when there cannot be more than <span class="math-container">$6$</span> elements of order unequal to <span class="math-container">$5$</span> so that <span class="math-container">$P_3$</span> must indeed be normal.</p> </blockquote> <blockquote> <p>But then <span class="math-container">$P_5$</span> normalizes <span class="math-container">$P_3$</span>, and hence <span class="math-container">$P_5P_3$</span> is a subgroup of <span class="math-container">$G$</span>. Moreover, the Second Noether Theorem gives</p> </blockquote> <blockquote> <p><span class="math-container">$(P_5P_3)/P_3 \cong P_5/(P_5 \cap P_3)$</span>.</p> </blockquote> <blockquote> <p>But since <span class="math-container">$|P_5|$</span> and <span class="math-container">$|P_3|$</span> are relatively prime, <span class="math-container">$P_5 \cap P_3 = 1$</span>, and hence <span class="math-container">$P_5P_3$</span> must have order <span class="math-container">$15$</span>.</p> </blockquote> <p>Why do we need to use the second Noether theorem? Why can't we just use the formula <span class="math-container">$\frac{|P_5||P_3|}{|P_3 \cap P_5|}$</span> to compute the order?</p> <blockquote> <p>Thus, <span class="math-container">$P_5P_3 \cong Z_{15}$</span>, by Corollary <span class="math-container">$5.3.17$</span>. 
But then <span class="math-container">$P_5P_3$</span> normalizes <span class="math-container">$P_5$</span>, which contradicts the earlier statement that <span class="math-container">$N_G(P_5)$</span> has order <span class="math-container">$5$</span>.</p> </blockquote> <p>Why do we have to realize that <span class="math-container">$P_5P_3$</span> is isomorphic to <span class="math-container">$Z_{15}$</span>? Also, how can we conclude that <span class="math-container">$P_5P_3$</span> normalizes <span class="math-container">$P_5$</span>?</p> <p>Thanks in advance.</p>
user113578
113,578
<p>A group of order $30$ has a normal Sylow $5$-subgroup.</p> <p>$|G|=2\cdot 3\cdot 5$</p> <p>The divisors of $|G|$ are $1,2,3,5,6,10,15,30.$</p> <p>$5\mid n_5-1\implies n_5=1,6\\3\mid n_3-1\implies n_3=1,10\\2\mid n_2-1\implies n_2=1,3,5,15.$</p> <p>If possible let $n_5=6.$ The six Sylow $5$-subgroups intersect pairwise trivially, so $G$ has exactly $6\cdot 4=24$ elements of order $5$; moreover $n_5=[G:N_G(P_5)]=6$ forces $|N_G(P_5)|=5.$ If $n_3=10$ there would also be $10\cdot 2=20$ elements of order $3$, and $24+20>30$; hence $n_3=1.$ Let $H_3$ be the normal Sylow $3$-subgroup of $G.$ Then $P_5H_3\le G$ is a subgroup of order $15,$ hence cyclic, so $P_5H_3$ normalizes $P_5,$ contradicting $|N_G(P_5)|=5.$ So $n_5=1.$</p>
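The counting constraints above can be sketched in a few lines of Python (purely illustrative; `sylow_options` is a hypothetical helper name, and the element counts just restate the arithmetic in the answer):

```python
# Sylow's theorem: n_p divides |G| / p^k and n_p ≡ 1 (mod p).
def sylow_options(p, m):
    """Possible Sylow counts n_p, where m = |G| / p^k is the index of a Sylow p-subgroup."""
    return [n for n in range(1, m + 1) if m % n == 0 and n % p == 1]

# |G| = 30 = 2 * 3 * 5
n5 = sylow_options(5, 6)     # candidates for n_5
n3 = sylow_options(3, 10)    # candidates for n_3
n2 = sylow_options(2, 15)    # candidates for n_2

# If n_5 = 6: the six Sylow 5-subgroups meet trivially, giving 6*4 = 24
# elements of order 5; n_3 = 10 would add 10*2 = 20 elements of order 3,
# which already exceeds |G| = 30, so n_3 = 1 is forced.
assert 6 * 4 + 10 * 2 > 30
```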
247,315
<p>Please see my edited version at the end of the post.</p> <p>================================</p> <p><a href="http://en.m.wikipedia.org/wiki/Cantor_set" rel="nofollow">http://en.m.wikipedia.org/wiki/Cantor_set</a></p> <p>My definition of the Cantor set is just like that of Wikipedia.</p> <p>That is, $C=[0,1]\setminus \bigcup_{i=1}^{\infty} \bigcup_{k=0}^{3^{i-1}-1} (\frac{3k+1}{3^i}, \frac{3k+2}{3^i})$.</p> <p>With this definition, I have shown that $C$ is compact, perfect, equipotent with $2^{\aleph_0}$ and contains no open set (i.e. the basic properties of the Cantor set).</p> <p>I preferred this definition to others since it is simple and strictly written in first-order logic.</p> <p>Let $C_n = [0,1]\setminus \bigcup_{i=1}^n \bigcup_{k=0}^{3^{i-1}-1} (\frac{3k+1}{3^i},\frac{3k+2}{3^i})$.</p> <p>Then $\bigcap_{n\in \mathbb{N}} C_n = C$.</p> <p>Here, how do I prove that $C_n$ is a disjoint union of $2^n$ intervals, each of length $3^{-n}$?</p> <p>(To make it clear, intervals here refer to closed connected sets.)</p> <p>=========================== EDIT:</p> <p>This is not actually what I meant, but it is exactly the same as what I wanted to prove anyway.</p> <p>Let $A_0=B_0=[0,1]$. Define $\{A_n\}$ recursively by $A_{n+1}=\frac{A_n}{3} \cup (\frac{2}{3} + \frac{A_n}{3})$.</p> <p>Now, define $B_n=[0,1]\setminus \bigcup_{i=1}^n \bigcup_{k=0}^{3^{i-1}-1} (\frac{3k+1}{3^i},\frac{3k+2}{3^i})$, $\forall n\in \mathbb{Z}^+$.</p> <p>How do I prove $A_n=B_n$?</p>
Thibaut Dumont
48,658
<p>I don't have enough reputation to post a comment, so I'll write it here.</p> <p>There's a little typo in the definition of the Cantor set (first formula): the interval should be $\left(\frac{3k+1}{3^i},\frac{3k+2}{3^i}\right)$.</p> <p>I think you can prove it by induction using the recursive formula of the Wikipedia article: $C_n=\frac{C_{n-1}}{3}\cup\left(\frac{2}{3}+\frac{C_{n-1}}{3}\right)$. I hope this is helpful.</p> <p>EDIT : I was typing when other comments came up. So here are some details you need.</p> <p>By induction, assume that $C_{n-1}$ is the disjoint union of $2^{n-1}$ intervals in $[0,1]$, each of them of length $\frac{1}{3^{n-1}}$. </p> <p>On one hand, $A_n:=\frac{C_{n-1}}{3}$ is a subset of $[0,\frac{1}{3}]$ and is again a disjoint union of $2^{n-1}$ intervals, but of length $\frac{1}{3}\cdot\frac{1}{3^{n-1}}=\frac{1}{3^{n}}$.</p> <p>On the other hand, $B_n:=\frac{2}{3}+\frac{C_{n-1}}{3}$ is a subset of $[\frac{2}{3},1]$ and is as well a disjoint union of $2^{n-1}$ intervals of length $\frac{1}{3^{n}}$. </p> <p>Conclusion : $C_n$ is the union of $A_n$ and $B_n$, which is disjoint and hence composed of $2\cdot2^{n-1}=2^n$ intervals of the desired length.</p>
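The induction can also be checked mechanically for small $n$ with exact rational arithmetic (an illustrative Python sketch, not part of the original answer): build $A_n$ by the recursion and $B_n$ by removing the open middle thirds, and compare.

```python
from fractions import Fraction as F

def step(intervals):
    """A_{n+1} = A_n/3  ∪  (2/3 + A_n/3), acting on lists of closed intervals."""
    return sorted([(lo / 3, hi / 3) for lo, hi in intervals] +
                  [(F(2, 3) + lo / 3, F(2, 3) + hi / 3) for lo, hi in intervals])

def remove_open(pieces, a, b):
    """Remove the open interval (a, b); in this construction (a, b) is either
    strictly inside one closed piece or disjoint from all of them."""
    out = []
    for lo, hi in pieces:
        if b <= lo or a >= hi:          # no overlap: keep the piece
            out.append((lo, hi))
        else:
            if lo < a:
                out.append((lo, a))     # left remnant
            if b < hi:
                out.append((b, hi))     # right remnant
    return out

def A(n):
    pieces = [(F(0), F(1))]
    for _ in range(n):
        pieces = step(pieces)
    return pieces

def B(n):
    pieces = [(F(0), F(1))]
    for i in range(1, n + 1):
        for k in range(3 ** (i - 1)):
            pieces = remove_open(pieces, F(3 * k + 1, 3 ** i), F(3 * k + 2, 3 ** i))
    return sorted(pieces)

for n in range(6):
    An, Bn = A(n), B(n)
    assert An == Bn                                   # A_n = B_n
    assert len(An) == 2 ** n                          # 2^n intervals
    assert all(hi - lo == F(1, 3 ** n) for lo, hi in An)   # each of length 3^{-n}
```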
2,898,616
<p>My Math teacher told me that $\frac{dx}{dy} = \frac{1}{\frac{dy}{dx}}$. I asked for a proof and he gave me the result by using the chain rule. I don't understand why this is the case for every function because the function $y=x^2$ doesn't have an inverse function. If this is the case then why is $\frac{dx}{dy}$ not $\frac{-1}{2x}$? Maybe it only works for injective functions or we assume only one of the values to be true.</p>
Arnaldo
391,612
<p>If $y=x^2$ then $$\frac{dy}{dx}=2x$$ and $\frac{dx}{dy}$ is obtained by differentiating implicitly w.r.t. $y$:</p> <p>$$1=2x\frac{dx}{dy}\to \frac{dx}{dy}=\frac{1}{2x}$$</p>
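As a numerical cross-check (a sketch added for illustration, not part of the original answer): on the branch $x>0$ the inverse of $y=x^2$ is $x=\sqrt{y}$, and central finite differences recover $\frac{dx}{dy}=\frac{1}{dy/dx}$ at a sample point.

```python
import math

def deriv(f, t, h=1e-6):
    """Central finite-difference approximation of f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

x0 = 2.0
y0 = x0 ** 2                          # y0 = 4 on the branch x > 0
dydx = deriv(lambda x: x ** 2, x0)    # expect 2*x0 = 4
dxdy = deriv(math.sqrt, y0)           # expect 1/(2*x0) = 0.25

assert abs(dydx - 2 * x0) < 1e-6
assert abs(dxdy - 1 / dydx) < 1e-6    # dx/dy = 1/(dy/dx) at this point
```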
2,898,616
<p>My Math teacher told me that $\frac{dx}{dy} = \frac{1}{\frac{dy}{dx}}$. I asked for a proof and he gave me the result by using the chain rule. I don't understand why this is the case for every function because the function $y=x^2$ doesn't have an inverse function. If this is the case then why is $\frac{dx}{dy}$ not $\frac{-1}{2x}$? Maybe it only works for injective functions or we assume only one of the values to be true.</p>
Michael Hardy
11,667
<p>There is such a thing as a function being <b>locally</b> one-to-one, and you frequently, at least tacitly, rely on that idea in many of the implicit differentiation problems that get assigned in first-semester calculus.</p> <p>To say that $f$ is locally one-to-one at a point $a$ in the domain of $f$ means that there is some open interval, say $(a-\varepsilon,a+\varepsilon)$ within the domain of $f,$ for which the restriction of $f$ to that interval is one-to-one. The formula you give applies to such restrictions. Since derivatives are an inherently local idea, this is not problematic.</p> <p>Notice that in implicit differentiation problems they give you something like $x^2 + 3y^2 = 1,$ and expect you to deduce that $2x+ 6y\dfrac{dy}{dx} =0,$ even though the equation that implicitly defines $y$ as a function of $x$ does not determine $y$ uniquely. You're just applying differentiation in a small neighborhood of each point on the curve.</p>
2,898,616
<p>My Math teacher told me that $\frac{dx}{dy} = \frac{1}{\frac{dy}{dx}}$. I asked for a proof and he gave me the result by using the chain rule. I don't understand why this is the case for every function because the function $y=x^2$ doesn't have an inverse function. If this is the case then why is $\frac{dx}{dy}$ not $\frac{-1}{2x}$? Maybe it only works for injective functions or we assume only one of the values to be true.</p>
Christian Blatter
1,303
<p>The formula ${dx\over dy}={1\over dy/dx}$ as a general principle does not refer to a global situation, but to a window $W$ in the $(x,y)$-plane, centered at some point $(x_0,y_0)$, within which the "variables" $x$ and $y$ are dependent on each other in such a way that you can write $y=\phi(x)$, whereby $\phi(x_0)=y_0$, as well as $x=\psi(y)$, whereby $\psi(y_0)=x_0$. Assume that the points $(x,y)\in W$ satisfying $y=\phi(x)$, resp., $x=\psi(y)$ are lying on a curve $\gamma$ through the point $(x_0,y_0)$ whose tangent $t_0$ at $(x_0,y_0)$ is neither vertical nor horizontal. Inspecting a figure showing the window $W$ you can convince yourself that the slope of $t_0$ when viewed with respect to the $x$-axis, i.e., as tangent to $y=\phi(x)$, is $\phi'(x_0)$, and when viewed with respect to the $y$-axis, i.e., as tangent to $x=\psi(y)$ is $\psi'(y_0)$. It follows that $\psi'(y_0)={1\over\phi'(x_0)}$.</p>
2,198,462
<p>Let $\mathbf{f} : \mathbb{R}^2 \rightarrow \mathbb{R}^2$ be given by: $$\mathbf{f}(u, v) = (u − v,\sin(u) \sin(v)).$$</p> <p>Let $(y, z) = (0, 3/4).$ Find the point $(a, b)$ with $0 &lt; a &lt; \frac{\pi}{2}$ such that $\mathbf{f}(a, b)=(y, z)$.</p> <p>Then prove that the function $\mathbf{f}^{−1}$ such that $\mathbf{f}^{−1}(y, z) = (a, b)$ exists and is differentiable in some neighbourhood of $(y, z).$</p> <p>I'm really not sure about either question, so any help will be appreciated.</p>
Arnaldo
391,612
<p><strong>Hint for the second question:</strong></p> <p>$$(a,b)=(u-v,\sin u \sin v)\\ u-v=a\to u=a+v \quad (1)\\ \sin u \sin v=b\to \sin (a+v)\sin v=b$$</p> <p>So, using the sum-product relation</p> <p>$$\cos(a+2v)-\cos(a)=-2\sin(a+v)\sin v$$</p> <p>we get</p> <p>$$\cos(a+2v)-\cos(a)=-2b\to \cos(a+2v)=\cos(a)-2b\\ v=\frac{\arccos(\cos a-2b)-a}{2}\quad (2)$$</p> <p>Now check the function $$g(a,b)=\left(\frac{\arccos(\cos a-2b)+a}{2},\frac{\arccos(\cos a-2b)-a}{2}\right)$$</p> <p>Is $g$ well defined and differentiable near $(a,b)=(0,3/4)$?</p>
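A numeric spot-check of formulas $(1)$ and $(2)$ (an illustrative sketch; it assumes the principal branch of $\arccos$, i.e. $0 \le a+2v \le \pi$, which holds for the sample values chosen here):

```python
import math

u, v = 1.0, 0.4                       # sample angles; note 0 < a = u - v < pi/2
a = u - v                             # forward map: a = u - v
b = math.sin(u) * math.sin(v)         # forward map: b = sin(u) sin(v)

v_rec = (math.acos(math.cos(a) - 2 * b) - a) / 2   # formula (2)
u_rec = a + v_rec                                   # formula (1)

assert abs(v_rec - v) < 1e-12
assert abs(u_rec - u) < 1e-12
```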
119,492
<p>First I define a function, just a sum of a few sin waves at different angular frequencies:</p> <pre><code>ubdat = 50; ws = 10*{2, 5, 10, 20, 40} fn = Table[Sum[Sin[w*x], {w, ws}], {x, 0, ubdat, .001}]; pts = Length@fn ListPlot[fn, Joined -&gt; True, PlotRange -&gt; {{0, 1000}, All}] {20, 50, 100, 200, 400} </code></pre> <p><a href="https://i.stack.imgur.com/ZCcaO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZCcaO.png" alt="enter image description here"></a></p> <p>If I take the Fourier transform and scale it correctly, you can see the correct peaks:</p> <pre><code>fnft = Abs@Fourier@fn; fnftnormed = Table[{2*Pi*i/ubdat, fnft[[i]]}, {i, Length@fnft}]; ListPlot[fnftnormed, Joined -&gt; True, PlotRange -&gt; {{0, 500}, All}] </code></pre> <p><a href="https://i.stack.imgur.com/adDJS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/adDJS.png" alt="enter image description here"></a></p> <p>Now, I want to do a low pass filter on it, for, say, $\omega_c=140$. This should get rid of the peaks at 200 and 400, ideally. Doing it this way returns the same plots as above:</p> <pre><code>fnfilt = LowpassFilter[fn, 140]; ListPlot[fnfilt, Joined -&gt; True, PlotRange -&gt; {{0, 1000}, All}] fnfiltft = Abs@Fourier@fnfilt; fnfiltftnormed = Table[{2*Pi*i/ubdat, fnfiltft[[i]]}, {i, Length@fnfiltft}]; ListPlot[fnfiltftnormed, Joined -&gt; True, PlotRange -&gt; {{0, 500}, All}] </code></pre> <p>I assume the problem is something to do with defining SampleRate, but the documentation explaining how it's defined or how to use it on the <a href="https://reference.wolfram.com/language/ref/LowpassFilter.html" rel="noreferrer">LowpassFilter page</a> is <em>very</em> sparse:</p> <blockquote> <p>By default, SampleRate->1 is assumed for images as well as data. For a sampled sound object of sample rate of r, SampleRate->r is used. 
With SampleRate->r, the cutoff frequency should be between 0 and $r*\pi$.</p> </blockquote> <p>It appears to have a broken link at the bottom, so maybe that had something helpful. The page for SampleRate itself has even less info.</p> <p>My naive attempt at choosing a sample rate would be dividing the number of samples by the total range, so in this case, <code>Floor[pts/ubdat]=1000</code>. Using this <em>does</em> affect the FT, but not a whole lot:</p> <pre><code>fnfilt = LowpassFilter[fn, 140, SampleRate -&gt; 1000]; ListPlot[fnfilt, Joined -&gt; True, PlotRange -&gt; {{0, 1000}, All}] fnfiltft = Abs@Fourier@fnfilt; fnfiltftnormed = Table[{2*Pi*i/ubdat, fnfiltft[[i]]}, {i, Length@fnfiltft}]; ListPlot[fnfiltftnormed, Joined -&gt; True, PlotRange -&gt; {{0, 500}, All}] </code></pre> <p><a href="https://i.stack.imgur.com/XUqiC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XUqiC.png" alt="enter image description here"></a></p> <p>So what am I missing? I've tried googling for some sort of guide on using filters in Mathematica, but I can't find anything and it's very frustrating.</p>
david
3,057
<p>I've found that specifying the kernel filter length in <code>LowpassFilter</code> will give better performance than leaving it unspecified. I'll show what I came up with. Note that I've modified your problem somewhat. I use <code>Fourier</code> with the <code>FourierParameters</code> set for "signal processing". I renamed some of your variables for clarity. I also shortened the length of the sampled signal to 4096 samples. This shouldn't affect anything but the resolution of the frequency bins that result from calling Fourier. Then, I moved the frequency of one of the <code>Sin</code> functions from 100 rad/sec to 140 rad/sec. This makes it easier to see the effect of the filter at the cutoff frequency. All of the DFT frequency response plots use dB with the normalized frequency data. This makes it easier to see the relative drop-off of the filter.</p> <p>Here's my redefinition for Fourier:</p> <pre><code>myFourier[v_ /; VectorQ[v] || MatrixQ[v]] := Fourier[v, FourierParameters -&gt; {1, -1}] </code></pre> <p>This generates the unfiltered signal and sets up the other variables:</p> <pre><code>signalDataLength = 4096; lpfCutoffAngular = 140; wSignal = 10*{2, 5, 14, 20, 40}; sampleRate = 1000.0; sampleRateAngular = 2 Pi sampleRate; unfilteredSignal = Table[Sum[Sin[w*(x/sampleRate)], {w, wSignal}], {x, 0,signalDataLength - 1}]; ListLinePlot[unfilteredSignal, PlotRange -&gt; {{0, 512}, All},AxesLabel -&gt; "Unfiltered Signal"] </code></pre> <p><a href="https://i.stack.imgur.com/t5Y11.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t5Y11.png" alt="Unfiltered Signal"></a></p> <p>Here's the normalized DFT of the unfiltered signal:</p> <pre><code>unfilteredDFT = Abs[myFourier[unfilteredSignal]]; angularFreqAxis = sampleRateAngular Range[0, signalDataLength/2 - 1]/signalDataLength; unfilteredData = Transpose@{angularFreqAxis, 20 Log10[unfilteredDFT[[;; signalDataLength/2]]/Max[unfilteredDFT]]}; ListPlot[unfilteredData, PlotRange -&gt; {{0,
sampleRateAngular/10}, {-50, 0}}, AxesLabel -&gt; {"rad/sec", "dB"}] </code></pre> <p><a href="https://i.stack.imgur.com/TbSaR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TbSaR.png" alt="Unfiltered DFT"></a></p> <p>Now, select the filter kernel length. I ended up using 256 since it gave the filter a good match to your requirements of a cutoff of 140 rad/sec:</p> <pre><code>filterKernelLength = 256; lpfSignal = LowpassFilter[unfilteredSignal, lpfCutoffAngular, filterKernelLength, SampleRate -&gt; Round[sampleRate]]; lpfSignalDFT = Abs[myFourier[lpfSignal]]; lpfSignalData = Transpose@{angularFreqAxis, 20 Log10[lpfSignalDFT[[;; signalDataLength/2]]/Max[lpfSignalDFT]]}; ListLinePlot[{unfilteredData, lpfSignalData}, PlotRange -&gt; {{0, sampleRateAngular/10}, {-50, 0}}, Joined -&gt; True, Epilog -&gt; {Red, Line[{{0, -3}, {lpfCutoffAngular, -3}, {lpfCutoffAngular, -100}}]}, PlotLegends -&gt; {"Unfiltered", "Low Pass Filtered"}, AxesLabel -&gt; {"rad/sec", "dB"}] </code></pre> <p><a href="https://i.stack.imgur.com/FBoQe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FBoQe.png" alt="Unfiltered and Filtered DFT&#39;s"></a></p> <p>You should be able to see that the filtered response at 200 and 400 rad/sec is significantly attenuated.</p> <p>Here's a plot of a portion of the unfiltered and filtered signals:</p> <pre><code>ListLinePlot[{unfilteredSignal, lpfSignal}, PlotRange -&gt; {{0, 512}, All}, PlotLegends -&gt; {"Unfiltered", "Low Pass Filtered"}] </code></pre> <p><a href="https://i.stack.imgur.com/1dWVV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1dWVV.png" alt="enter image description here"></a></p> <p>To look at the filter's impulse response, I did the following:</p> <pre><code>impulseData = SparseArray[{{signalDataLength/2} -&gt; 1}, {signalDataLength}]; impResTS = TimeSeries[ impulseData, { Range[0, signalDataLength - 1]/sampleRate}]; impulseResponse = LowpassFilter[impResTS, Quantity[140,
"Radians")/("Seconds")], filterKernelLength]; ListLinePlot[impulseResponse, PlotRange -&gt; All, PlotLabel -&gt; "Low Pass Filter Impulse Response"] </code></pre> <p><a href="https://i.stack.imgur.com/F6DKv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F6DKv.png" alt="Low Pass Filter Impulse Response"></a></p> <pre><code>impulseResponseDFT = Abs[myFourier[impulseResponse["Values"]]]; impulseResponseDFTData = Transpose@{angularFreqAxis, 20 Log10[impulseResponseDFT[[;; signalDataLength/2]]/Max[impulseResponseDFT]]}; ListLinePlot[tData, Joined -&gt; True, PlotRange -&gt; {{0, sampleRateAngular/10}, {-80, 0}}, Epilog -&gt; {Red, Line[{{0, -3}, {lpfCutoffAngular, -3}, {lpfCutoffAngular, -100}}]}, AxesLabel -&gt; {"rad/sec", "dB"}, PlotLabel -&gt; "Low Pass Filter Frequency Response"] </code></pre> <p>The filter cutoff is slightly more the 3 dB down at 140 rad/sec. That should meet your requirements.</p> <p><a href="https://i.stack.imgur.com/bHyrr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bHyrr.png" alt="Low Pass Filter Impulse Frequency Response"></a></p>
396,363
<p>I have two random variables $X,Y$ which are independent and uniformly distributed on $(\frac{1}{2},1]$. Then I consider two more random variables, $D=|X-Y|$ and $Z=\log\frac{X}{Y}$. I would like to calculate both, the disitrbution functions $F_D(t), F_Z(t)$ and the the density functions $f_D(t),f_Z(t)$</p> <p>To do that I think the first thing we need to do is to evaluate the density of the common distribution of $X$ and $Y$, but I do not know how to do that.</p> <p>The only thing which is clear to me is the density and distribution function of $X$ and $Y$ because we know that they are uniform.</p> <p><strong>EDIT</strong>: Please read my own answer to this question. I need someone who can show me my claculation mistakes.</p>
alexlo
47,612
<p>Let's try for $D = |X-Y|$: We have, for $t\geq 0$, $$F_D(t) = P[D\leq t] = P[|X-Y|\leq t] = P[-t\leq X-Y \leq t] = P[-t+Y\leq X \leq t+Y] =\int_\mathbb{R}P[-t+y\leq X \leq t+y | Y=y]f_Y(y)dy =\int_\mathbb{R} P[-t+y\leq X \leq t+y]f_Y(y)dy= \\ = \int_\mathbb{R}\int_{-t+y}^{t+y} f_X(x)dx\, f_Y(y)dy= \dots$$ And you can calculate the rest using what is known about $X$ and $Y$. Note that we needed the independence for this calculation! Of course, $F_D(t) = 0$ for $t&lt;0$. Now the density function: since $D$ is continuous, $P[D=t]=0$, so the density is not a point probability; instead, differentiate the distribution function:</p> <p>$$f_D(t)=\frac{d}{dt}F_D(t)=\frac{d}{dt}\int_\mathbb{R}\int_{-t+y}^{t+y} f_X(x)dx\, f_Y(y)dy =\int_\mathbb{R}\big(f_X(t+y)+f_X(-t+y)\big)f_Y(y)dy,\qquad t&gt;0.$$</p> <p>And you can again calculate this using the uniform densities. Note that the independence is what let us replace the conditional probability by an integral of $f_X$ alone in the first place.</p> <p>You should be able to deal with $Z$ in an analogous fashion.</p>
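Carrying the calculation out for these particular uniforms (an interval of width $h=\tfrac12$) gives $F_D(t) = (2ht - t^2)/h^2$ for $0\le t\le h$. A seeded Monte Carlo spot-check (an illustrative sketch, not part of the original answer):

```python
import random

random.seed(0)
h = 0.5
N = 200_000

def u():
    # approximately uniform on (1/2, 1]; random.random() is uniform on [0, 1)
    return 0.5 + h * random.random()

t = 0.25
hits = sum(abs(u() - u()) <= t for _ in range(N))
expected = (2 * h * t - t * t) / (h * h)   # F_D(1/4) = 0.75
assert abs(hits / N - expected) < 0.01
```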
3,107,194
<p>Suppose <span class="math-container">$X$</span> is a compact metric space and <span class="math-container">$f: X \rightarrow \mathbb{R}$</span> is a function for which <span class="math-container">$f^{-1}([t,\infty))$</span> is closed for any real <span class="math-container">$t$</span>. Then <span class="math-container">$f$</span> achieves its maximum value on <span class="math-container">$X$</span>. </p> <p>I am first trying to prove that that <span class="math-container">$f$</span> is bounded above. Suppose, for the sake of contradiction, that <span class="math-container">$f$</span> is not bounded above. So for all <span class="math-container">$M \in \Bbb{R}$</span>, there is a <span class="math-container">$x \in X$</span> such that <span class="math-container">$f(x) &gt; M$</span>. So there exists a sequence <span class="math-container">$\{x_n\}$</span> in <span class="math-container">$X$</span> such that for all <span class="math-container">$n \in \Bbb{N}$</span>, we have <span class="math-container">$f(x_n) &gt; n$</span>. So <span class="math-container">$x_n \in f^{-1}([n,\infty))$</span> for all <span class="math-container">$n \in \Bbb{N}$</span>. Since <span class="math-container">$f^{-1}([n,\infty)) \subseteq X$</span> are closed for all <span class="math-container">$n \in \Bbb{N}$</span>, <span class="math-container">$f^{-1}([n,\infty))$</span> are compact for all <span class="math-container">$n \in \Bbb{N}$</span>. Further, since each <span class="math-container">$f^{-1}([n,\infty))$</span> is nonempty, and <span class="math-container">$f^{-1}([n + 1,\infty)) \subseteq f^{-1}([n,\infty))$</span> for all <span class="math-container">$n \in \Bbb{N}$</span>, <span class="math-container">$\bigcap\limits_{n = 1}^\infty f^{-1}([n,\infty))$</span> is not empty, and it is also closed. 
</p> <p>I am trying to show now that <span class="math-container">$\{x_n\}$</span> has a convergent subsequence <span class="math-container">$\{x_{n_k}\}$</span> that converges to some <span class="math-container">$x \in \bigcap\limits_{n = 1}^\infty f^{-1}([n,\infty))$</span>, and reach a contradiction since <span class="math-container">$$\bigcap\limits_{n = 1}^\infty f^{-1}([n,\infty)) = f^{-1}(\bigcap\limits_{n = 1}^\infty [n,\infty))$$</span> is empty (since <span class="math-container">$\bigcap\limits_{n = 1}^\infty [n,\infty)$</span> is empty).</p> <p>My issue is, how do I know <span class="math-container">$x \in \bigcap\limits_{n = 1}^\infty f^{-1}([n,\infty))$</span>? I know its in <span class="math-container">$X$</span> since it is compact, but I don't know if its in <span class="math-container">$x \in \bigcap\limits_{n = 1}^\infty f^{-1}([n,\infty))$</span>. </p> <p>Help, please.</p>
José Carlos Santos
446,262
<p>You cannot know that, since it is not true. Suppose that <span class="math-container">$x\in\bigcap_{n=1}^\infty f^{-1}\bigl([n,\infty)\bigr)$</span>. Then <span class="math-container">$(\forall n\in\mathbb{N}):f(x)\geqslant n$</span>. But no real number is greater than or equal to <em>every</em> natural number.</p>
2,709,532
<p>(True/False) If $S$ is a non-empty subset of $\mathbb{N}$, then there exists an element $m \in S$ such that $m\ge k$ for all $k \in S$.</p> <p>So my reasoning is that the above statement is false. Since $S$ is a non-empty subset of $\mathbb{N}$, it may also be the case that $S=\mathbb{N}$, and so there may not be an $m \in S$ such that $m \ge k$ for all $k \in S$. I wanted to know if I'm on the right track, and if not, if someone could provide a hint. </p>
egreg
62,967
<p>Your counterexample is perfectly good. No need to look for complicated ones: there is no maximum natural number, and that's all you need for deciding that the statement is false.</p> <hr> <p>For your own delight, you can prove that the property of having a maximum characterizes nonempty <em>finite</em> subsets of $\mathbb{N}$.</p> <p>Indeed, if a nonempty set $S$ has a maximum $m$, it is contained in $\{0,1,\dots,m\}$, which is finite.</p> <p>If a nonempty set $S$ is finite, then it has a maximum element, by induction on the number of elements. It is true if the set has one element. Suppose it is true for nonempty sets with less than $n$ elements and suppose $S$ has $n$ elements ($n&gt;1$). Then pick $m_0\in S$; if it is the maximum, you're done; otherwise, $S'=\{x\in S:x&gt;m_0\}$ is nonempty and has a maximum $m$ by the induction hypothesis. Now let $x\in S$: if $x\le m_0$, then $x&lt;m$; if $x&gt;m_0$, then $x\le m$. Hence $m$ is the maximum of $S$.</p>
973,101
<p>I am trying to find a way to generate random points uniformly distributed on the surface of an ellipsoid.</p> <p>If it were a sphere there is a neat way of doing it: generate three $N(0,1)$ variables $\{x_1,x_2,x_3\}$, calculate the distance from the origin</p> <p>$$d=\sqrt{x_1^2+x_2^2+x_3^2}$$</p> <p>and calculate the point </p> <p>$$\mathbf{y}=(x_1,x_2,x_3)/d.$$</p> <p>It can then be shown that the points $\mathbf{y}$ lie on the surface of the sphere and are uniformly distributed on the sphere surface, and the argument that proves it is just one word, "isotropy". No preferred direction.</p> <p>Suppose now we have an ellipsoid</p> <p>$$\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}=1$$</p> <p>How about generating three $N(0,1)$ variables as above, calculating</p> <p>$$d=\sqrt{\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}}$$</p> <p>and then using $\mathbf{y}=(x_1,x_2,x_3)/d$ as above. That way we get points guaranteed on the surface of the ellipsoid, but will they be uniformly distributed? How can we check that?</p> <p>Any help greatly appreciated, thanks.</p> <p>PS I am looking for a solution without accepting/rejecting points, which is kind of trivial.</p> <p>EDIT:</p> <p>Switching to polar coordinates, the surface element is $dS=F(\theta,\phi)\ d\theta\ d\phi$ where $F$ is expressed as $$\frac{1}{4} \sqrt{r^2 \left(16 \sin ^2(\theta ) \left(a^2 \sin ^2(\phi )+b^2 \cos ^2(\phi )+c^2\right)+16 \cos ^2(\theta ) \left(a^2 \cos ^2(\phi )+b^2 \sin ^2(\phi )\right)-r^2 \left(a^2-b^2\right)^2 \sin ^2(2 \theta ) \sin ^2(2 \phi )\right)}$$</p>
elhuhdron
501,509
<p>What if you use the normal distribution method, as you propose, but use the squares of the respective semi-axes as the variances? Then scale with <span class="math-container">$d$</span>, as proposed. Empirically this does not look more bunched (at least to me) at &quot;cigar poles&quot; than a brute-force selection from uniformly distributed points in <span class="math-container">$\mathbb{R}^3$</span>.</p> <p>So, the method is: generate random variables</p> <p><span class="math-container">$$x_1 \sim N(0,a^2)$$</span> <span class="math-container">$$x_2 \sim N(0,b^2)$$</span> <span class="math-container">$$x_3 \sim N(0,c^2)$$</span></p> <p>Then the rest is as suggested:</p> <p><span class="math-container">$$d=\sqrt{\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}}$$</span></p> <p>and use points <span class="math-container">$\mathbf{y}=(x_1,x_2,x_3)/d$</span>.</p> <p>The proof of the original spherical selection is simply that normal distributions are radially symmetric? Does that not still apply with unequal variances?</p> <p>NOTE: writing each variable as a semi-axis times a standard normal, e.g. <span class="math-container">$x_1=az_1$</span> with <span class="math-container">$z_i \sim N(0,1)$</span>, gives <span class="math-container">$d=\sqrt{z_1^2+z_2^2+z_3^2}$</span>, so this construction coincides with the one in the comment from @jibounet that scales the points generated on the surface of a sphere, and it therefore carries the same question about uniformity.</p>
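A Python sketch of this sampler (hedged: the assertions only confirm that every point lands exactly on the ellipsoid surface; they say nothing about uniformity, which is precisely the open question above):

```python
import random

random.seed(1)
a, b, c = 3.0, 2.0, 1.0    # semi-axes (example values)

def sample():
    # Axis-matched normals x_i ~ N(0, a_i^2), then divide by the elliptical d.
    x1 = random.gauss(0, a)
    x2 = random.gauss(0, b)
    x3 = random.gauss(0, c)
    d = ((x1 / a) ** 2 + (x2 / b) ** 2 + (x3 / c) ** 2) ** 0.5
    return x1 / d, x2 / d, x3 / d

pts = [sample() for _ in range(1000)]
for x1, x2, x3 in pts:
    # each point satisfies the ellipsoid equation up to rounding
    assert abs((x1 / a) ** 2 + (x2 / b) ** 2 + (x3 / c) ** 2 - 1) < 1e-9
```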
3,930,373
<p>Hi, so I'm working on a question about finding the $rank(A)$ and the $dim(Ker(A))$ of a $7\times 5$ matrix, without being given an actual matrix to work from.</p> <p>I have been told that the homogeneous equation $A\vec x=\vec0$ has general solution $\vec x=\lambda \vec v$ for some non-zero $\vec v$ in $R^{5}$.</p> <p>So my thinking so far: for an $m\times n$ matrix we know that</p> <p>$rk(A)+\dim\ker(A)=n$, which must mean that $rk(A)+\dim\ker(A)=5$,</p> <p>but this is where I get stuck and don't know how to proceed.</p> <p>Any help is greatly appreciated.</p> <p><a href="https://i.stack.imgur.com/2kLWC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2kLWC.png" alt="enter image description here" /></a> .</p> <p>This is the exact question for the person who asked.</p>
Figaro
829,712
<p>The best way is to interpret this limit as a complex number, which can be written as: <span class="math-container">$$\lim_{x\to0}\hspace{0.3cm}x^2i+0$$</span> As always, you can separate both parts of the limit to get a clearer view of why the result is a real $0$: <span class="math-container">$$=\lim_{x\to0}\hspace{0.3cm}x^2i+\lim_{x\to0}\hspace{0.3cm}0 $$</span> And when you finally evaluate both parts of the limit, you can be sure that the real part and the imaginary part are both reaching $0$ as $x$ approaches $0$ from the left or the right (even though the real part only ever takes the value <span class="math-container">$\{0\}$</span> while the imaginary part takes values in <span class="math-container">$[0,\infty)$</span>).</p>
913,239
<p>Given a circle with known center $c$, known radius $r$ and perimeter point $x$: $$ (x - c_x)^2 + (y - c_y)^2 = r^2 $$ with a tangent line that also goes through a point $p$ lying outside the circle. How do I find the point $x$ at which the line touches the circle?</p> <p>Given that the tangent line is orthogonal to the vector $(x-c)$ and also that the vector $(x-p)$ lies on the tangent line we have $(x-c) \cdot (x-p) = 0$ which can be expanded to:</p> <p>$$ (x - c_x) (x - p_x) + (y - c_y) (y - p_y) = 0 $$</p> <p>Thus my question is: How do I find the point $x$?</p>
RE60K
67,609
<p>Writing the tangency condition for the circle (which you wrote via the vector approach), where $X$ and $Y$ are the coordinates of the external point and $(\alpha,\beta)$ is a point on the circle where a tangent touches: $$(\alpha - c_x) (\alpha - X) + (\beta - c_y) (\beta - Y) = 0$$ This locus cuts the circle at the two points you want to find, so the second equation is: $$(\alpha-c_x)^2+(\beta-c_y)^2=r^2$$</p> <hr> <p><strong><a href="http://www.wolframalpha.com/input/?i=%28%5Calpha-a%29%5E2%2B%28%5Cbeta-X%29%5E2%3Dr%5E2%2C+%28%5Calpha+-+a%29+%28%5Calpha+-+X%29+%2B+%28%5Cbeta+-+b%29+%28%5Cbeta+-+Y%29+%3D+0%2C+solve+for+%5Calpha+and+%5Cbeta" rel="nofollow">After using W|A we get these results here on this page</a></strong></p>
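For a concrete numeric check of these two conditions, here is a closed-form sketch in Python (not from the answer; it uses the standard construction that each tangent point sits at angle $\theta=\arccos(r/d)$ from the direction $c\to p$, with $d=\lVert p-c\rVert$):

```python
import math

def tangent_points(cx, cy, r, px, py):
    """Both tangent points of the circle (center (cx,cy), radius r) from the
    external point (px, py)."""
    dx, dy = px - cx, py - cy
    d = math.hypot(dx, dy)
    assert d > r, "the point must lie outside the circle"
    alpha = math.atan2(dy, dx)       # direction from center to external point
    theta = math.acos(r / d)         # half-angle subtended by the tangents
    return [(cx + r * math.cos(alpha + s * theta),
             cy + r * math.sin(alpha + s * theta)) for s in (+1, -1)]

cx, cy, r, px, py = 1.0, 2.0, 2.0, 6.0, 5.0
for x, y in tangent_points(cx, cy, r, px, py):
    assert abs((x - cx) ** 2 + (y - cy) ** 2 - r ** 2) < 1e-9       # on the circle
    assert abs((x - cx) * (x - px) + (y - cy) * (y - py)) < 1e-9    # tangency
```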
1,123,513
<p>The question is$$\text{Is }\mathbb{R}^2\text{ a subspace of }\mathbb{C}^2?$$My first thought is: $$\text{Is }\mathbb{R}^2\text{ a subset of }\mathbb{C}^2?$$ I think no, because what is $\mathbb{R}^2$? It's all those pairs like $(0,1)$, right? And what is $\mathbb{C}^2$? It's pairs like $((a,b),(c,d))$ (because a complex number is a pair of two real numbers with multiplication defined as $(a,b)(c,d)=(ac-bd,ad+bc)$ and addition $(a,b)+(c,d)=(a+c,b+d)$), so they aren't the same.</p> <p>Another point: of course $\mathbb{R}^2$ is a vector space over $\mathbb{R}$ with specific operations, and $\mathbb{C}^2$ is a vector space over $\mathbb{C}$ with different operations, so my gut feeling tells me that they can't be the same. In general, if $V$ is a vector space over $D$ with operations $+,\times$ and $U$ is a vector space over $B$ with operations $\oplus,\otimes$, with $B\ne D$, can it be the case that $U$ is a subset of $V$? Over what?</p>
hmakholm left over Monica
14,366
<p>Usually we say that $\mathbb R$ is a subset of $\mathbb C$, because any real number, such as $0$ or $2$ or $\pi$ is also a member of $\mathbb C$. There are <a href="https://math.stackexchange.com/questions/876195/what-is-an-advatage-of-defining-mathbbc-as-a-set-containing-mathbbr">various schools of thought</a> about how to justify this formally, but that doesn't need to concern us for the purpose of this question -- just so long as we accept that, say, $\pi\in\mathbb C$ is a true statement, which is definitely the case in ordinary everyday mathematics.</p> <p>In this case it is indeed the case that $\mathbb R^2\subseteq \mathbb C^2$ because every element of the former is also in the latter.</p> <p>For example $(\sqrt 2,\pi)\in \mathbb R^2$ is also in $\mathbb C^2$ because $(\sqrt2,\pi)=(\sqrt2+0i,\pi+0i)\in\mathbb C^2$.</p> <p>So $\mathbb R^2$ is certainly a <em>subset</em> of $\mathbb C^2$. Whether it is also a <em>subspace</em> depends on which kind of vector space we consider $\mathbb C^2$. If we say $\mathbb C^2$ is a <em>complex</em> vector space, then $\mathbb R^2$ is not a subspace because it fails to be closed under scalar multiplication. But if $\mathbb C^2$ is a <em>real</em> vector space (with the obvious operations), then $\mathbb R^2$ <em>is</em> a subspace.</p> <p>So the real answer is that the question is ambiguous.</p>
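The closure failure is easy to see concretely (a trivial Python sketch, modelling vectors of $\mathbb{C}^2$ as pairs of Python complex numbers):

```python
v = (1 + 0j, 0 + 0j)             # (1, 0) viewed inside C^2; it lies in R^2

w = tuple(1j * t for t in v)     # the complex scalar i gives (i, 0), not in R^2
assert w[0].imag != 0            # so R^2 is not closed under C-scalars

u = tuple(2.5 * t for t in v)    # real scalars keep both coordinates real
assert all(t.imag == 0 for t in u)
```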
258,893
<p>I have a set of three moderately simple simultaneous equations that I'd like to simplify and eliminate a set of variables for (this is a simple example of a more general class of problem - but I'd like to get the simple example working before starting on the bigger ones). Asking Mathematica to <code>Eliminate</code> two of the three variables I'd like to remove and then simplifying the result gets me to the answer fairly quickly; however, asking Mathematica to <code>Eliminate</code> all three of the variables at once hangs. Are there any tricks I can use to help <code>Eliminate</code> with this task, or generalisations of it?</p> <p>My example has three matrices (adjoint representation of so(3) for the interested)</p> <pre><code>b1 = {{0, 0, 0}, {0, 0, 1}, {0, -1, 0}}; b2 = {{0, 0, -1}, {0, 0, 0}, {1, 0, 0}}; b3 = {{0, 1, 0}, {-1, 0, 0}, {0, 0, 0}}; </code></pre> <p>and then the equations are</p> <pre><code>{xt1, xt2, xt3} == {x1, x2, x3}.MatrixExp[t1 b1].MatrixExp[t2 b2].MatrixExp[t3 b3] </code></pre> <p>where I would like to eliminate the <code>t1</code>, <code>t2</code>, and <code>t3</code> variables.</p> <p>Running this to eliminate <code>t1</code> and <code>t2</code> gives</p> <pre><code>Timing@Simplify@Eliminate[{xt1, xt2, xt3} == {x1, x2, x3}.MatrixExp[t1 b1].MatrixExp[t2 b2].MatrixExp[t3 b3], {t1, t2}] </code></pre> <blockquote> <p>{40.375, x1^2 + x2^2 + x3^2 == xt1^2 + xt2^2 + xt3^2}</p> </blockquote> <p>which is the correct answer. However,</p> <pre><code>Timing@Eliminate[{xt1, xt2, xt3} == {x1, x2, x3}.MatrixExp[t1 b1].MatrixExp[t2 b2].MatrixExp[t3 b3], {t1, t2, t3}] </code></pre> <p>runs forever (at least four hours on my laptop). How can I make <code>Eliminate</code>'s life easier, or are there other tools I could try to solve this system of equations?
(Extra information if helpful: the functional form of the reduced solution(s) to members of this class of equations will always have the same functional form in the <code>xt</code> variables and the <code>x</code> variables, although the actual functional form is unknown - I'm not sure if this can be leveraged within Mathematica to help?)</p>
Michael E2
4,999
<p>Setup:</p> <pre><code>b1 = {{0, 0, 0}, {0, 0, 1}, {0, -1, 0}}; b2 = {{0, 0, -1}, {0, 0, 0}, {1, 0, 0}}; b3 = {{0, 1, 0}, {-1, 0, 0}, {0, 0, 0}}; sys = {xt1, xt2, xt3} == {x1, x2, x3} . MatrixExp[t1 b1] . MatrixExp[t2 b2] . MatrixExp[t3 b3]; </code></pre> <hr /> <p>Eliminating three variables from the three equations sequentially works quickly:</p> <pre><code>Quiet[ Eliminate[ Eliminate[ Eliminate[sys, {t3}], {t2}], {t1}], Eliminate::ifun] (* -x2^2 - x3^2 + xt1^2 + xt2^2 + xt3^2 == x1^2 *) </code></pre> <hr /> <p>Eliminating six variables from six equations (@Akku14's rationalization) works, too, but it takes a few seconds:</p> <pre><code>Eliminate[{sys, Sin[t1]^2 + Cos[t1]^2 == 1, Sin[t2]^2 + Cos[t2]^2 == 1, Sin[t3]^2 + Cos[t3]^2 == 1}, {Sin[t3], Cos[t3], Sin[t2], Cos[t2], Sin[t1], Cos[t1]}] (* xt3^2 == x1^2 + x2^2 + x3^2 - xt1^2 - xt2^2 *) </code></pre>
3,276,958
<p>Given two points <span class="math-container">$P_1(x_1,y_1)$</span> and <span class="math-container">$P_2(x_2,y_2)$</span> with <span class="math-container">$y_1&gt;0$</span> and <span class="math-container">$y_2&gt;0$</span> I need to find the parameters <span class="math-container">$a$</span> and <span class="math-container">$b$</span> of an exponential function having the form <span class="math-container">$y=a*b^x$</span>.</p> <p>How I can solve this problem in a "generic" way getting the formulas to find <span class="math-container">$a$</span> and <span class="math-container">$b$</span> from known <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span>? I tried to find the solution by myself for two hours but I keep getting the same, wrong formulas.</p>
José Carlos Santos
446,262
<p>If <span class="math-container">$y_1=ab^{x_1}$</span> and <span class="math-container">$y_2=ab^{x_2}$</span>, then<span class="math-container">$$\frac{y_2}{y_1}=\frac{ab^{x_2}}{ab^{x_1}}=b^{x_2-x_1}.$$</span>So, take <span class="math-container">$b=\left(\dfrac{y_2}{y_1}\right)^{1/(x_2-x_1)}$</span>. And now <span class="math-container">$a=\dfrac{y_1}{b^{x_1}}$</span>.</p>
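<p>As a sanity check, here is a small Python sketch of these two formulas (the function name is mine); it recovers $a$ and $b$ from two points generated by a known exponential:</p>

```python
def exp_through_points(x1, y1, x2, y2):
    """Return (a, b) with y = a * b**x through (x1, y1) and (x2, y2).

    Assumes y1, y2 > 0 and x1 != x2.
    """
    b = (y2 / y1) ** (1.0 / (x2 - x1))
    a = y1 / b ** x1
    return a, b

# Points taken from y = 3 * 2**x, so we expect a = 3, b = 2
a, b = exp_through_points(1, 6, 3, 24)
print(a, b)  # 3.0 2.0
```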
2,369,133
<p>I just found out that if you want to get $1$ from the fraction $$\frac{5}{2}$$ then you multiply it by $$\frac{2}{5}$$ Does anyone have a good way to think about this?</p>
Dando18
274,085
<p>This is called <a href="https://en.wikipedia.org/wiki/Multiplicative_inverse" rel="nofollow noreferrer">The Inverse Property of Multiplication</a>. Take $x$, then </p> <p>$$x \cdot \frac 1 x = 1$$</p> <p>In your case of $x=\frac 5 2$,</p> <p>$$ \frac 5 2 \cdot \frac{1}{\frac 5 2} = \frac 5 2 \cdot \frac 2 5 = 1 $$</p>
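<p>You can also check this with exact rational arithmetic in Python's standard library:</p>

```python
from fractions import Fraction

x = Fraction(5, 2)
print(x * Fraction(2, 5))  # 1

# More generally, any nonzero fraction times its reciprocal is 1
for x in [Fraction(5, 2), Fraction(-7, 3), Fraction(1, 100)]:
    assert x * (1 / x) == 1
```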
392,441
<p>Assume that $f$ is infinitely differentiable. Let $\delta$ be the (Dirac) delta functional.</p> <p>I know that $f\delta = f(0)\delta$, but I'm not sure how to derive the equation $f\delta′=f(0)\delta′−f′(0)\delta$.</p>
user43286
116,305
<p>You can also expand $f$ in a Taylor series</p> <p>$f(t)=f(0)+f'(0)t + \frac{f''(0)}{2!}t^2+\cdots$</p> <p>and consider</p> <p>$(t \delta(t))' = 0 = \delta(t) + t\delta'(t),$</p> <p>so that</p> <p>$t\delta'(t) = -\delta(t).$</p> <p>All higher-order terms vanish, since $t^n \delta'(t)=0$ for $n\ge2$. Multiplying the series by $\delta'$ term by term then gives</p> <p>$f\delta'=f(0)\delta'-f'(0)\delta.$</p>
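<p>For completeness, the identity also follows in one line from the definition of the distributional derivative: pairing $f\delta'$ with a test function $\varphi$,</p>

```latex
\langle f\delta',\varphi\rangle
  = \langle \delta', f\varphi\rangle
  = -(f\varphi)'(0)
  = -f'(0)\varphi(0) - f(0)\varphi'(0)
  = \langle -f'(0)\,\delta + f(0)\,\delta',\ \varphi\rangle ,
```

<p>which is exactly $f\delta' = f(0)\delta' - f'(0)\delta$.</p>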
1,899,740
<p>Let $(\Omega,\Sigma,\nu)$ be a measure space with $\nu$ a positive $\sigma$-additive measure on $\Sigma$, and let $f:\Omega\to \mathbb{R}$ be a function such that $$ \int_{\Omega}f(x)\,d\nu(x)=0. $$ Does this imply that $f=0$ $\nu$-a.e.?</p> <p>Any hint will be appreciated :) thank you for your time!</p>
Hasan Saad
62,977
<p>This result is true if and only if $f\geq0$ almost everywhere. Ant gave an example of how this result fails when this condition does not hold.</p> <p>As for the proof when $f\geq0$, here goes:</p> <p>$\nu(\{x\in\Omega\mid f(x)&gt;0\})=\sum_{n=2}^\infty \nu(\{x\in\Omega\mid\frac{1}{n}\leq f(x)&lt;\frac{1}{n-1}\})+\nu(\{x\in\Omega\mid f(x)\geq1\})$.</p> <p>Now, assume any of the summands had measure bigger than $0$; then the integral would be positive. (If some set $X$ has measure $\alpha&gt;0$ and $f\geq \gamma&gt;0$ over $X$, then $\int_X f\geq \gamma\alpha&gt;0$.)</p> <p>Thus, $f$ is $0$ almost everywhere.</p>
43,646
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/43531/havel-hakimi-theorem">Havel-Hakimi Theorem</a> </p> </blockquote> <p>Hi. I'm a beginner at graph theory, and I recently came across the Havel-Hakimi Theorem which is used to determine whether a sequence of integers is graphical. I am using Chartrand and Zhang's <em>Introduction to Graph Theory</em>, but I feel that the proof they provide is lacking. I am wondering whether anyone is aware of a proof for this theorem or where I can find one, preferably an easier one.</p> <p>Thanks.</p>
Joseph Malkevitch
1,369
<p>Look at pages 44 and page 45 of the second edition of Doug West's book Introduction to Graph Theory for what I think is a clear account accompanied by a worked out example. </p>
732,132
<blockquote> <p>Suppose $P(x)$ is a polynomial of degree $n \geq 1$ such that $\int_{0}^{1}x^{k}P(x)\,dx = 0$ for $k = 1, 2, \ldots, n$. Show that $$\int_{0}^{1}\{P(x)\}^{2}\,dx = (n + 1)^{2}\left(\int_{0}^{1}P(x)\,dx\right)^{2}$$</p> </blockquote> <p>If we assume that $P(x) = a_{0}x^{n} + \cdots + a_{n - 1}x + a_{n}$ then we can easily see that $$\int_{0}^{1}\{P(x)\}^{2}\,dx = a_{n}\int_{0}^{1}P(x)\,dx$$ and therefore to solve the given problem we need to show that $$\int_{0}^{1}P(x)\,dx = \frac{a_{n}}{(n + 1)^{2}}$$ Direct integration of the polynomial gives the expression $$\frac{a_{0}}{n + 1} + \frac{a_{1}}{n} + \cdots + \frac{a_{n - 1}}{2} + a_{n}$$ and simplifying this to $a_{n}/(n + 1)^{2}$ does not seem possible. I think there is some nice "integration by parts" trick which will give away the solution, but I am not able to think of it.</p>
r9m
129,017
<p>Since $\displaystyle \int_0^1 x^kP(x)\,dx=\frac{a_0}{n+k+1}+\frac{a_1}{n+k}+\ldots+\frac{a_n}{k+1}=0$ for each $k=1,2,\ldots,n$,</p> <p>the function $f(x)=\dfrac{a_0}{n+x+1}+\dfrac{a_1}{n+x}+\ldots+\dfrac{a_n}{x+1}=\dfrac{Q(x)}{(n+x+1)\ldots(x+1)}$ (where $Q$ is a polynomial of degree at most $n$) has the $n$ zeros $x=1,2,\ldots,n$.</p> <p>Thus $Q(x)=c(x-1)(x-2)\ldots(x-n)$ for some constant $c$.</p> <p>Also, $(x+1)f(x)=\dfrac{a_0(x+1)}{n+x+1}+\dfrac{a_1(x+1)}{n+x}+\ldots+a_n=\dfrac{Q(x)}{(n+x+1)\ldots(x+2)}$</p> <p>Setting $x=-1$ in the above expression, $a_n = \dfrac{Q(-1)}{n!}=\dfrac{c(-1)^n(n+1)!}{n!}=c(-1)^n(n+1)$,</p> <p>and setting $x=0$ we have $\displaystyle \int_0^1 P(x)\,dx = \dfrac{a_0}{n+1}+\ldots+a_n=\dfrac{Q(0)}{(n+1)!}=\dfrac{c(-1)^n}{n+1}$.</p> <p>Thus $a_n=\displaystyle (n+1)^2\int_0^1 P(x)\,dx$, implying (together with $\int_{0}^{1}\{P(x)\}^{2}\,dx = a_n\int_0^1 P(x)\,dx$, noted in the question) that $\displaystyle \int_{0}^{1}\{P(x)\}^{2}\,dx = (n + 1)^{2}\left(\int_{0}^{1}P(x)\,dx\right)^{2}$.</p>
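<p>A quick exact-arithmetic check of the statement for $n=2$ (a Python sketch; the particular polynomial below was found by solving the two moment conditions with $a_2=1$, and the helper function is mine):</p>

```python
from fractions import Fraction as F

# P(x) = (10/3) x^2 - 4x + 1 satisfies int_0^1 x^k P(x) dx = 0 for k = 1, 2
coeffs = [F(10, 3), F(-4), F(1)]  # a0, a1, a2, so degree n = 2
n = len(coeffs) - 1

def moment(c, k):
    """Exact value of int_0^1 x^k * (c[0] x^deg + ... + c[deg]) dx."""
    deg = len(c) - 1
    return sum(c[i] / (deg - i + k + 1) for i in range(len(c)))

assert moment(coeffs, 1) == 0 and moment(coeffs, 2) == 0

int_P = moment(coeffs, 0)  # int_0^1 P
# int_0^1 P^2: square the polynomial, then integrate
sq = [F(0)] * (2 * n + 1)
for i, a in enumerate(coeffs):
    for j, b in enumerate(coeffs):
        sq[i + j] += a * b
int_P2 = moment(sq, 0)

print(int_P, int_P2)  # 1/9 1/9
```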
261,605
<p>Applying the <a href="https://en.wikipedia.org/wiki/M%C3%BCntz%E2%80%93Sz%C3%A1sz_theorem" rel="nofollow noreferrer">Müntz–Szász theorem</a> on $[0,1]$ repeatedly, we can represent $$ x= \sum_{n\geq 2} c_n x^n $$ as a uniformly convergent series (<strong>edit:</strong> only over some subsequence, see edits below) on $[0,1]$ of higher powers $x^n$ for $n\geq 2$. What can one say about the coefficients? Is there an explicit choice of $c_n$?</p> <p><strong>Edit:</strong> Comments below suggest that this is not possible. What is wrong with the following argument? Take $\epsilon&gt;0$ and approximate $x$ by a finite combination of higher powers and a constant uniformly with an error $\epsilon/2.$ Plugging $x=0$ we see that the constant is smaller than $\epsilon/2$ so dropping it we get an approximation up to $\epsilon$ by a finite sum $\sum_{n=2}^{N_1} c_n x^n.$ Next, consider $x-\sum_{n=2}^{N_1} c_n x^n$ and approximate it by a linear combination $\sum_{n=N_1+1}^{N_2} c_n x^n$ up to an error $\epsilon/2.$ This gives us an $\epsilon/2$-approximation $\sum_{n=1}^{N_2}c_n x^n.$ Continue this construction repeatedly.</p> <p><strong>Edit II:</strong> Theorem holds for $a=0$ if we include constants, but Robert Israel's comment below contained the main point: the series only converges over some subsequence $(N_k)_{k\geq 1}$ as in the above construction. Let me rephrase the question accordingly:</p> <p>Is there anything interesting one can say about $c_n$? Can one choose the subsequence and $c_n$ in a way that $(c_n)\in\ell^p$ for some $p$, or uniformly bounded?</p>
Kevin Buzzard
1,384
<p>[Sorry -- too long for a comment]. It has already been pointed out that if you want 0 in your domain then you'll have to use constant functions too. But even then the theorem does not say what you seem to think it says: it only says that we can approximate $f(x)=x$ arbitrarily well by such a power series -- and as the approximation gets better and better the terms might jump around like crazy. In particular the coefficients are not really well-defined. As one way of seeing this, take one expansion which is very close to $x$ and let's assume the coefficient of $x^7$ is non-zero. Then remove 7 from your set of allowable powers of $x$ and the theorem again says that one can approximate $x$ arbitrarily well; this time the coefficient of $x^7$ is forced to be zero.</p>
100,957
<p>I have this picture of small particles in a polymer film. I want to count how many particles are in the figure, so that I can have a rough estimation of the particle density. But the image quality is poor, so I had a hard time doing it.</p> <p><a href="https://i.stack.imgur.com/ez50R.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/ez50R.jpg" alt="particle in film - grayscale"></a>, <a href="https://i.stack.imgur.com/0OJd7.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/0OJd7.jpg" alt="particle in film - color"></a></p> <p>I have tried several ways to do it, but failed. Below is the code. The first method I tried is:</p> <pre><code>SetDirectory["C:\\Users\\mayao\\documents"] image = Import["Picture3.jpg"]; imag2 = Binarize[image, {0.0, 0.8}]; cells = SelectComponents[DeleteBorderComponents[imag2], "Count", -400]; circles = ComponentMeasurements[ImageMultiply[image, cells], {"Centroid", "EquivalentDiskRadius"}][[All, 2]]; Show[image, Graphics[{Red, Thick, Circle @@ # &amp; /@ circles}]] </code></pre> <p>Here is what I got: <a href="https://i.stack.imgur.com/f7u8C.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/f7u8C.jpg" alt="enter image description here"></a></p> <p>So it does not count all the particles. Plus, it sometimes takes several particles as one.</p> <p>I read another method in a thread here; the code is:</p> <pre><code>obl[transit_Image] := (SelectComponents[ MorphologicalComponents[ DeleteSmallComponents@ ChanVeseBinarize[#, "TargetColor" -&gt; Black], Method -&gt; "ConvexHull"], {"Count", "SemiAxes"}, Abs[Times @@ #2 Pi - #1] &lt; #1/100 &amp;]) &amp;@ transit; GraphicsGrid[{#, obl@# // Colorize, ImageMultiply[#, Image@Unitize@ obl@#]} &amp; /@ (Import /@ ("C:\\Users\\mayao\\documents\\" &lt;&gt; # &amp; /@ {"Picture1.jpg", "Picture2.jpg", "Picture3.jpg", "Picture1.jpg"}))] </code></pre> <p>But it does not recognize the single particles:</p> <p><a href="https://i.stack.imgur.com/zav3Y.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zav3Y.png" alt="enter image description here"></a></p> <p>Is there any other method to do this task? Thanks a lot for any suggestions.</p>
yode
21,532
<p>Get the binarized image:</p> <pre><code> img = Import["http://i.stack.imgur.com/0OJd7.jpg"]; binimg = LocalAdaptiveBinarize[img, 25]; </code></pre> <p><a href="https://i.stack.imgur.com/Fbvri.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Fbvri.png" alt="enter image description here"></a></p> <p>The effect looks like this:</p> <pre><code> big = ImageDemosaic[binimg // ColorConvert[#, "Grayscale"] &amp;, "RGGB"] // MinDetect // SelectComponents[#, "Count", # &gt; 100 &amp;] &amp;; (array = WatershedComponents[GradientFilter[big, 2], DistanceTransform[big] // MaxDetect]) // Colorize </code></pre> <p><a href="https://i.stack.imgur.com/fbzjI.png" rel="noreferrer"><img src="https://i.stack.imgur.com/fbzjI.png" alt="enter image description here"></a></p> <p>Then your number is</p> <pre><code>array // Max </code></pre> <blockquote> <p>1650</p> </blockquote> <p><strong>Update</strong></p> <p>Use <code>Closing</code> to optimize the binarized image.</p> <pre><code>binimg = Closing[LocalAdaptiveBinarize[img, 25], 3] </code></pre> <p><a href="https://i.stack.imgur.com/xwKBo.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/xwKBo.jpg" alt="enter image description here"></a></p> <p>Then we get the array and verify the effect.</p> <pre><code>(array = WatershedComponents[GradientFilter[binimg, 2], DistanceTransform[binimg // ColorNegate] // MaxDetect]) // Colorize </code></pre> <p><a href="https://i.stack.imgur.com/xQGvS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xQGvS.png" alt="enter image description here"></a></p> <p>Or you can do it like this:</p> <pre><code>Show[img, Graphics[{Red, Point[(array /. 1164 -&gt; 0 // ComponentMeasurements[#, "Centroid"] &amp;)[[All, 2]]]}]] </code></pre> <p><a href="https://i.stack.imgur.com/6hoCh.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/6hoCh.jpg" alt="enter image description here"></a></p> <p>So the number of your components is:</p> <pre><code>array // Max </code></pre> <blockquote> <p>1340</p> </blockquote>
990,467
<p>I am having some trouble understanding these two questions. Any help is appreciated. Scanned questions are included at the end.</p> <p>6) We are given the function $f(x) =\frac{1 - 2x}{2x^2 - 3x - 2}$</p> <p>6 a) Find the equation of the vertical asymptotes. Explain how.</p> <p>For the above question, how did they first get the equation $x = \frac{3 \pm \sqrt{25}}{4}$,</p> <p>and then get $x = 2$ and $x = -\frac{1}{2}$ out of it?</p> <p>6 b) Find the equation of the horizontal asymptotes. Use a limit.</p> <p>For this question I understand that when the degree of the numerator is less than the degree of the denominator it results in a horizontal asymptote. Thus here we get $y = 0$. Right? But I would still like to know if it is the same procedure they used in the answer sheet to get the answer $0/2$.</p> <p><img src="https://i.stack.imgur.com/hKhb4.jpg" alt="enter image description here"></p>
Jasser
170,011
<p>It would be enough to find the intersections of the two graphs $2^{\cos x}$ and $\sin x$ over the interval $[0,2\pi]$, and since the graph repeats after this, the other solutions can be found by just adding $2k\pi$, where $k=0,1,2,\ldots$, to the intersection points. Hence plotting the graph once will help you a lot.</p>
1,169,591
<p>For all finite vector spaces and all linear transformations to/from those spaces, <strong>how can you prove/show from the definition of a linear transformation</strong> that all linear transformations can be calculated using a matrix.</p>
pjs36
120,540
<p>Each linear transformation $T: V \to W$ is completely determined by what it does to a set of basis vectors of $V$.</p> <p>That is, suppose we have a linear transformation $T: V \to W$ where $V$ and $W$ are finite-dimensional vector spaces with bases $\mathcal{B}_V = \{e_1, e_2, \ldots, e_n\}$ and $\mathcal{B}_W = \{f_1, f_2, \ldots, f_m\}$ respectively.</p> <p>Since $T$ is linear, if we know $T(e_i)$ for each $e_i \in \mathcal{B}_V$, then we know $T(c_1e_1 + c_2e_2 + \ldots + c_ne_n)$. Thus, we just need to know what $T(e_i)$ is, for each $i \in \{1, 2, \ldots, n\}$.</p> <p>Of course, $T(e_i)$, as a vector of $W$, can be uniquely represented as a linear combination of the basis vectors in $\mathcal{B}_W$. That is, for each basis vector $e_i \in \mathcal{B}_V$, there exist $m$ scalars $a_{i, 1}, a_{i, 2}, \ldots, a_{i, m}$ such that</p> <p>$$T(e_i) = a_{i,1}f_1 + a_{i, 2}f_2 + \ldots + a_{i, m}f_m.$$</p> <p>This is the idea behind understanding why an $m \times n$ table of numbers will determine any linear map between finite dimensional vector spaces. What I've written is certainly not water-tight, but hopefully it gives you some idea of why it works.</p>
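<p>A minimal illustration of this in code (a Python sketch over $\mathbb R^2$ with the standard basis; the particular map $T$ and the helper names are mine): the columns of the matrix are the images of the basis vectors, and the matrix then reproduces $T$ on an arbitrary vector.</p>

```python
# Some fixed linear map T : R^2 -> R^2, T(x, y) = (2x + y, 3y)
def T(v):
    x, y = v
    return (2 * x + y, 3 * y)

# The matrix columns are the images of the basis vectors e1, e2
e1, e2 = (1, 0), (0, 1)
col1, col2 = T(e1), T(e2)
M = [[col1[0], col2[0]],
     [col1[1], col2[1]]]  # [[2, 1], [0, 3]]

def mat_vec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))

v = (5, -7)
print(T(v), mat_vec(M, v))  # (3, -21) (3, -21)
```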
1,169,591
<p>For all finite vector spaces and all linear transformations to/from those spaces, <strong>how can you prove/show from the definition of a linear transformation</strong> that all linear transformations can be calculated using a matrix.</p>
Disintegrating By Parts
112,478
<p>If you have a linear transform $L : X \rightarrow Y$, where $X$ and $Y$ are finite dimensional linear spaces, then you choose a basis $\{ x_{i} \}_{i=1}^{n}$ of $X$ and a basis $\{ y_{j} \}_{j=1}^{m}$ of $Y$, and write $$ Lx_{i} = \alpha_{1,i}y_{1}+\alpha_{2,i}y_{2}+\cdots+\alpha_{m,i}y_{m},\qquad i=1,\ldots,n. $$ The constants $\alpha_{j,i}$ are unique. Every $x \in X$ can be written uniquely as $$ x = \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_n x_n. $$ By linearity $$ \begin{align} Lx &amp; = \beta_1 Lx_1 + \beta_2 Lx_2 + \cdots + \beta_n Lx_n \\ \\ &amp; = \beta_1 (\alpha_{1,1} y_1 + \alpha_{2,1}y_2 + \cdots + \alpha_{m,1}y_m) \\ &amp; + \beta_2 (\alpha_{1,2} y_1 + \alpha_{2,2}y_2 + \cdots + \alpha_{m,2}y_m) \\ &amp; + \cdots + \\ &amp; + \beta_n (\alpha_{1,n} y_1 + \alpha_{2,n}y_2 + \cdots + \alpha_{m,n}y_m) \\ \\ &amp; = (\alpha_{1,1}\beta_1+\alpha_{1,2}\beta_2+\cdots+\alpha_{1,n}\beta_{n})y_1 \\ &amp; + (\alpha_{2,1}\beta_1+\alpha_{2,2}\beta_2+\cdots+\alpha_{2,n}\beta_{n})y_2 \\ &amp; + \cdots + \\ &amp; + (\alpha_{m,1}\beta_1+\alpha_{m,2}\beta_2+\cdots+\alpha_{m,n}\beta_{n})y_m \end{align} $$ So, the action of $L$ is uniquely determined by the matrix $[\alpha_{j,i}]$ as follows: Start with $x \in X$, write $x = \sum_{i=1}^{n}\beta_{i}x_{i}$, then perform the matrix multiplication $[\alpha_{j,i}][\beta_{i}]$, which gives $[\gamma_{j}]$, and you then reconstruct $Lx = \gamma_1 y_1+\gamma_2 y_2 + \cdots + \gamma_m y_m$. Therefore, $L$ is completely determined by the $m\times n$ matrix $[\alpha_{j,i}]$ as defined above. Conversely, every such matrix determines a linear $L$ whose matrix representation is the given matrix.</p>
111,343
<p>Hello,</p> <p>I have the following question: Let $M$ be a smooth analytic manifold of dimension $4n$. Assume furthermore that $M$ admits two foliations $A$, $B$, both with leaves of dimension $2n$, such that the leaves of $A$ are transverse to the leaves of $B$ at each point. Also, the leaves of $A$ are $n$-dimensional complex manifolds with complex structure $J_{\alpha}$ (for a leaf $\alpha \in A$) which varies smoothly as $\alpha$ varies in $A$. The leaves of $B$ are $n$-dimensional complex manifolds with complex structure $J_{\beta}$ (for a leaf $\beta \in B$) which varies smoothly as $\beta$ varies in $B$. My question is now: Is it possible to define a complex structure on the manifold $M$ as follows: for $x \in M$ there exists exactly one leaf $\alpha \in A$ that contains $x$ and one leaf $\beta \in B$ that contains $x$; set for the complex structure $J := J_{\alpha, x} + J_{\beta, x}$. This is an almost complex structure. Is this structure integrable if $J_{\alpha}$ and $J_{\beta}$ are integrable on each leaf? If not, what assumption do I need in order to make it integrable? I hope for a lot of answers. Thanks in advance.</p> <p>Marin</p>
Robert Bryant
13,972
<p>The answer is 'no, in general': For example, if $(M^4,J)$ is <em>any</em> real-analytic almost complex $4$-manifold, one can easily construct (locally, in a neighborhood of any point of $M$) a pair $(A,B)$ of real-analytic, transverse foliations by pseudo-holomorphic curves, and these will satisfy your conditions. However, when you do your construction, you'll get back the original $J$, which need not be integrable.</p> <p>To get integrability of the $J$ you construct, you'll need to suppose the vanishing of its Nijenhuis tensor, part of which has to vanish already because of the integrability of the restriction of $J$ to the leaves of the two foliations, but that's not enough to force the entire Nijenhuis tensor to vanish, as the above example shows.</p>
228,105
<p>I want the last number after the &quot;/&quot;. How can I do that?</p> <pre><code>&quot;http://arxiv.org/abs/math/0208009v1&quot; </code></pre> <p>I just want <strong>0208009v1</strong></p> <p>I want to import multiple links from the following:</p> <pre><code>{{&quot;http://arxiv.org/abs/math/0208009v1&quot;}, \ {&quot;http://arxiv.org/abs/0905.0227v1&quot;}, \ {&quot;http://arxiv.org/abs/0907.5143v2&quot;}, \ {&quot;http://arxiv.org/abs/math/0509348v1&quot;}, \ {&quot;http://arxiv.org/abs/math/0608711v2&quot;}, \ {&quot;http://arxiv.org/abs/math-ph/0002018v2&quot;}} </code></pre> <p>I just want the last number in the following links. How would that work?</p>
kglr
125
<pre><code>list = {{&quot;http://arxiv.org/abs/math/0208009v1&quot;}, {&quot;http://arxiv.org/abs/0905.0227v1&quot;}, {&quot;http://arxiv.org/abs/0907.5143v2&quot;}, {&quot;http://arxiv.org/abs/math/0509348v1&quot;}, {&quot;http://arxiv.org/abs/math/0608711v2&quot;}, {&quot;http://arxiv.org/abs/math-ph/0002018v2&quot;}}; </code></pre> <p>In addition to <a href="https://reference.wolfram.com/language/ref/StringSplit.html" rel="nofollow noreferrer"><code>StringSplit</code></a> suggested by J.M. in comments,</p> <pre><code>StringSplit[Flatten@list, &quot;/&quot;][[All, -1]] </code></pre> <p>you can also use</p> <h3><a href="https://reference.wolfram.com/language/ref/StringTrim.html" rel="nofollow noreferrer"><code>StringTrim</code></a></h3> <pre><code>StringTrim[Flatten[list], StartOfString ~~ ___ ~~ &quot;/&quot;] </code></pre> <h3><a href="https://reference.wolfram.com/language/ref/StringReplace.html" rel="nofollow noreferrer"><code>StringReplace</code></a></h3> <pre><code>StringReplace[Flatten[list], StartOfString ~~ ___ ~~ &quot;/&quot; -&gt; &quot;&quot;] </code></pre> <h3><a href="https://reference.wolfram.com/language/ref/StringCases.html" rel="nofollow noreferrer"><code>StringCases</code></a></h3> <pre><code>Flatten @ StringCases[Flatten@list, &quot;/&quot; ~~ a : Except[&quot;/&quot;] .. 
~~ EndOfString :&gt; a] </code></pre> <h3><a href="https://reference.wolfram.com/language/ref/StringDrop.html" rel="nofollow noreferrer"><code>StringDrop</code></a> + <a href="https://reference.wolfram.com/language/ref/StringPosition.html" rel="nofollow noreferrer"><code>StringPosition</code></a>:</h3> <pre><code>MapThread[StringDrop, {Flatten@list, Max /@ StringPosition[Flatten@list, &quot;/&quot;]}] </code></pre> <h3><a href="https://reference.wolfram.com/language/ref/FileNameSplit.html" rel="nofollow noreferrer"><code>FileNameSplit</code></a></h3> <pre><code>Last /@ FileNameSplit /@ Flatten[list] </code></pre> <p>to get</p> <blockquote> <pre><code>{&quot;0208009v1&quot;, &quot;0905.0227v1&quot;, &quot;0907.5143v2&quot;, &quot;0509348v1&quot;, &quot;0608711v2&quot;, &quot;0002018v2&quot;} </code></pre> </blockquote>
2,925,450
<p>I am trying to solve the following problem: <span class="math-container">$|4-x| \leq |x|-2$</span>. I am trying to do it algebraically, but I'm getting a solution to the problem that makes no sense. I fail to see the error in my reasoning though. I hope to get an explanation where I went wrong.</p> <p><span class="math-container">$|4-x| \leq |x|-2$</span></p> <p><span class="math-container">$|4-x|-|x| \leq -2$</span></p> <p>If <span class="math-container">$4-x$</span> and <span class="math-container">$x$</span> are both negative, then for them to be equal to <span class="math-container">$2$</span>, we need to multiply both expressions by <span class="math-container">$-1$</span>.</p> <p><span class="math-container">$-4+x+x \leq -2$</span></p> <p><span class="math-container">$-4+2x \leq -2$</span></p> <p><span class="math-container">$2x \leq 2$</span></p> <p><span class="math-container">$x \leq 1$</span></p> <p>But if you sub in any <span class="math-container">$x$</span> less than or equal to <span class="math-container">$1$</span>, the inequality doesn't work! Can you please explain where in my logic, where in the steps, have I gone wrong? Thank you!</p>
MathIsNice1729
274,536
<p>For both $x$ and $4-x$ to be negative we would need $x&lt;0$ and $x&gt;4$, which is impossible.</p> <p>As for the solution, the right side has to be nonnegative, hence $|x| \geq 2$. Now, we solve it in $3$ parts.</p> <p>In $(-\infty,-2]$, we have $4+|x| \leq |x|-2$, which is not true.</p> <p>In $(4,\infty)$, we have $x-4 \leq x-2$, which is true.</p> <p>In $[2,4]$, we have $4-x \leq x-2 \implies x \geq 3$.</p> <p>Hence, the solution is $x \in [3, \infty)$.</p>
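<p>The case analysis can be sanity-checked with a brute-force sweep in Python:</p>

```python
def holds(x):
    return abs(4 - x) <= abs(x) - 2

# Sample x on a fine grid over [-10, 10] and record where the inequality holds
xs = [i / 100 for i in range(-1000, 1001)]
sol = [x for x in xs if holds(x)]

print(min(sol))  # 3.0 -- consistent with the solution set [3, oo)
```

<p>Every sampled point with $x \geq 3$ satisfies the inequality, and no sampled point with $x &lt; 3$ does.</p>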
1,599,502
<p>Solve for $x$ in exact value:</p> <p>$3^{2x}-3^{x+2}+8=0$</p> <p>I have tried substituting $3^x=a$ but I didn't get anywhere.</p> <p>$a^2-a^{1+\frac{2}{x}}+8=0$</p>
Qiaochu Yuan
232
<p>"Unique irreducible module" means that there is a unique <em>isomorphism class</em> of irreducible modules; that is, every two irreducible modules are isomorphic. $\mathfrak{sl}_2$ does not have a unique irreducible module, since it has countably many nonisomorphic irreducible modules. </p>
4,645,582
<h2><strong>Problem</strong></h2> <p>Let <span class="math-container">$f$</span> be a continuous function, <span class="math-container">$f:[0,1]\to\mathbb{R}$</span>, with <span class="math-container">$\int_{0}^1 (2x-1)f(x) dx = 0$</span>.</p> <p>Prove that there exists a point <span class="math-container">$c$</span> in <span class="math-container">$(0, 1)$</span> such that <span class="math-container">$\int_{0}^c (x-c)(f(x)-f(c)) dx = 0$</span>.</p> <p>This problem was given at a regional competition in Romania for <span class="math-container">$12$</span>th grade students.</p> <h2><strong>Attempt</strong></h2> <p>From Rolle's Theorem we have that there exists <span class="math-container">$q$</span> such that <span class="math-container">$f(q) = (2q - 1)f(q) = 0$</span>, where we have <span class="math-container">$f(q)=0$</span>.</p> <p>My other attempt was: fix <span class="math-container">$p$</span> to <span class="math-container">$0.5$</span> and separate, according to the Mean Value Theorem for Integrals, <span class="math-container">$\int_{0}^1 (2x-1)f(x) dx = 0$</span> into <span class="math-container">$-f(c_1) \cdot \int_{0}^p (1-2x) dx + f(c_2) \cdot \int_{p}^1 (2x-1) dx = 0$</span>, from which we get that <span class="math-container">$f(c_1)=f(c_2)$</span>.</p>
Morărescu Mihnea
1,147,641
<p>First, let's take a look at our conclusion:</p> <p><span class="math-container">$\int_0^c(x-c)(f(x)-f(c))dx = 0 \Leftrightarrow \int_0^cxf(x)dx - \int_0^cxf(c)dx - \int_0^c cf(x)dx + \int_0^c cf(c)dx = 0 \Leftrightarrow \int_0^cxf(x)dx - f(c)\int_0^cxdx - c\int_0^cf(x)dx + c^2f(c) = 0 \Leftrightarrow \int_0^cxf(x)dx - c\int_0^cf(x)dx + \frac{c^2f(c)}{2} = 0$</span></p> <p>This is essentially what we need to prove. Therefore, consider <span class="math-container">$F : [0, 1] \rightarrow \mathbb{R}, F(x) = \int_0^x f(t)dt$</span>. One can immediately see that <span class="math-container">$F$</span> is differentiable, and that <span class="math-container">$F'(x) = f(x), \forall x \in [0, 1]$</span>.</p> <p><span class="math-container">$\int_0^x tf(t)dt = \int_0^xtF'(t)dt = xF(x) - \int_0^xF(t)dt$</span></p> <p>At this step, look at what we need to prove. Subtract <span class="math-container">$xF(x)$</span> and add <span class="math-container">$\frac{x^2f(x)}{2}$</span>. Hence, we obtain:</p> <p><span class="math-container">$\int_0^xtf(t)dt - xF(x) + \frac{x^2f(x)}{2} = \frac{x^2f(x)}{2} - \int_0^xF(t)dt$</span></p> <p>One can easily see that the left side of this equality is in fact equivalent to what we have to prove. Furthermore, proving the existence of <span class="math-container">$c \in (0, 1)$</span> for which the conclusion holds is in fact equivalent to proving that such a <span class="math-container">$c$</span> exists so that <span class="math-container">$\frac{c^2f(c)}{2} - \int_0^cF(t)dt = 0$</span>.</p> <p>At this step, consider <span class="math-container">$G, H : [0, 1] \rightarrow \mathbb{R}, G(x) = \int_0^xF(t)dt, H(x) = \frac{x^2f(x)}{2} - \int_0^xF(t)dt = \frac{x^2}{2}G''(x) - G(x)$</span>. Note that <span class="math-container">$G$</span> is twice differentiable on <span class="math-container">$[0, 1]$</span>. 
At this point in the proof, we need to prove that there exists a <span class="math-container">$c \in (0, 1)$</span> such that <span class="math-container">$H(c) = 0$</span>.</p> <p>Consider the function <span class="math-container">$g : [0, 1] \rightarrow \mathbb{R}, g(x) = \frac{x^2}{2}G'(x) - xG(x)$</span>. The newly defined <span class="math-container">$g$</span> is differentiable on <span class="math-container">$[0, 1]$</span>, but what's more interesting is that <span class="math-container">$g'(x) = H(x), \forall x \in (0, 1)$</span>. Now, <span class="math-container">$g(0) = 0$</span> for obvious reasons. We are left to prove that <span class="math-container">$g(1) = 0$</span>.</p> <p><span class="math-container">$g(1) = \frac{G'(1)}{2} - G(1)$</span>. From its definition, <span class="math-container">$G'(x) = F(x)$</span>, therefore <span class="math-container">$G'(1) = F(1)$</span>.</p> <p>Recall the hypothesis: <span class="math-container">$\int_0^1(2x-1)f(x)dx = 0 \Leftrightarrow 2\int_0^1xf(x)dx = \int_0^1f(x)dx \Leftrightarrow \int_0^1xf(x)dx = \frac{1}{2}\int_0^1f(x)dx$</span></p> <p>Therefore, <span class="math-container">$G(1) = \int_0^1F(t)dt = \int_0^1t'F(t)dt = F(1) - \int_0^1tf(t)dt = F(1) - \frac{1}{2}\int_0^1f(t)dt = F(1) - \frac{F(1)}{2} = \frac{F(1)}{2}$</span></p> <p>The last equality comes directly from making use of the hypothesis. 
Therefore, <span class="math-container">$g(1) = 0$</span>.</p> <p>Since <span class="math-container">$g(0) = g(1) = 0, g$</span> is differentiable on <span class="math-container">$(0, 1)$</span> and continuous on <span class="math-container">$[0, 1]$</span>, from Rolle's theorem it follows that there exists <span class="math-container">$c \in (0, 1)$</span> such that <span class="math-container">$g'(c) = 0 \Leftrightarrow H(c) = 0$</span>, thus concluding our proof.</p> <p><strong>Note:</strong> As a general rule of thumb, whenever faced with such problems where an initial condition between <span class="math-container">$\int_0^1f(x)dx$</span> and <span class="math-container">$\int_0^1xf(x)dx$</span> is given, the idea is to follow the same approach until finding <span class="math-container">$H(x)$</span>. Afterward, it's all a matter of finding a helpful function to apply Rolle's theorem.</p>
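<p>A concrete instance, checked with exact arithmetic (a Python sketch; <span class="math-container">$f(x) = x^2 - x$</span> satisfies the hypothesis, and for this <span class="math-container">$f$</span> the function <span class="math-container">$H$</span> vanishes at <span class="math-container">$c = 4/5$</span>):</p>

```python
from fractions import Fraction as F

def integrate(coeffs, a, b):
    """Exact integral over [a, b] of coeffs[0] + coeffs[1] x + coeffs[2] x^2 + ..."""
    antider = lambda t: sum(c * t ** (k + 1) / (k + 1) for k, c in enumerate(coeffs))
    return antider(b) - antider(a)

# Hypothesis: (2x - 1)(x^2 - x) = x - 3x^2 + 2x^3 integrates to 0 over [0, 1]
assert integrate([F(0), F(1), F(-3), F(2)], F(0), F(1)) == 0

# For this f, H(c) = (5/12) c^4 - (1/3) c^3, which vanishes at c = 4/5
c = F(4, 5)
fc = c * c - c  # f(c)
# (x - c)(f(x) - f(c)) = x^3 - (1 + c) x^2 + (c - fc) x + c*fc
conclusion = [c * fc, c - fc, -(1 + c), F(1)]
print(integrate(conclusion, F(0), c))  # 0
```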
4,535,957
<blockquote> <p>Please vote to close this question. It's really dumb as when I was reading the Peano axioms, axiom 8 didn't register. Don't waste your time reading this question.... I also cannot delete it (I have tried but it won't let me).</p> </blockquote> <p><span class="math-container">$1$</span> is defined as <span class="math-container">$S(0).$</span></p> <p>An essentially equivalent question to the one I am asking is, &quot;Why isn't <span class="math-container">$0\neq 1$</span> a Peano axiom?&quot;</p> <p>If we accept <a href="https://en.wikipedia.org/wiki/Peano_axioms#Historical_second-order_formulation" rel="nofollow noreferrer">Wikipedia's <span class="math-container">$9$</span> Peano axioms</a>, as well as the two axioms of addition, how do we show then that <span class="math-container">$0\neq 1?$</span> Or do we need to accept the multiplication axioms <em>as well</em> in order to show that <span class="math-container">$0\neq 1?$</span> Or is there <em>another</em> axiom I am not seeing that you need, in order to show that <span class="math-container">$0\neq 1?$</span></p> <p>Basically, as far as I can tell, none of the axioms tell us that we cannot have <span class="math-container">$0=1=2=3=\ldots.$</span></p> <p>I think that what I am referring to is (essentially?) the group (semigroup? ring?) <span class="math-container">$\langle \{0\}, + \rangle\ $</span> along with <span class="math-container">$=$</span> defined as an equivalence relation, and an axiom that enables substitution.</p> <p><span class="math-container">$$$$</span> <a href="https://math.stackexchange.com/questions/2222836/0-neq-1-not-provable-in-axiomatic-arithmetic">$0 \neq 1$ not provable in axiomatic arithmetic?</a></p> <p>I looked here, but I'm not sure the question is the same as my one and I don't understand the answers. I also don't think the answers relate to wikipedia's version of the Peano axioms, but maybe I am wrong?</p>
Jay
9,814
<p>Axiom 8 in the Wikipedia list of axioms says that <span class="math-container">$0$</span> is not the successor of any natural number. Since <span class="math-container">$1 = S(0)$</span>, we have <span class="math-container">$1 \neq 0$</span>.</p>
4,535,957
<blockquote> <p>Please vote to close this question. It's really dumb as when I was reading the Peano axioms, axiom 8 didn't register. Don't waste your time reading this question.... I also cannot delete it (I have tried but it won't let me).</p> </blockquote> <p><span class="math-container">$1$</span> is defined as <span class="math-container">$S(0).$</span></p> <p>An essentially equivalent question to the one I am asking is, &quot;Why isn't <span class="math-container">$0\neq 1$</span> a Peano axiom?&quot;</p> <p>If we accept <a href="https://en.wikipedia.org/wiki/Peano_axioms#Historical_second-order_formulation" rel="nofollow noreferrer">Wikipedia's <span class="math-container">$9$</span> Peano axioms</a>, as well as the two axioms of addition, how do we show then that <span class="math-container">$0\neq 1?$</span> Or do we need to accept the multiplication axioms <em>as well</em> in order to show that <span class="math-container">$0\neq 1?$</span> Or is there <em>another</em> axiom I am not seeing that you need, in order to show that <span class="math-container">$0\neq 1?$</span></p> <p>Basically, as far as I can tell, none of the axioms tell us that we cannot have <span class="math-container">$0=1=2=3=\ldots.$</span></p> <p>I think that what I am referring to is (essentially?) the group (semigroup? ring?) <span class="math-container">$\langle \{0\}, + \rangle\ $</span> along with <span class="math-container">$=$</span> defined as an equivalence relation, and an axiom that enables substitution.</p> <p><span class="math-container">$$$$</span> <a href="https://math.stackexchange.com/questions/2222836/0-neq-1-not-provable-in-axiomatic-arithmetic">$0 \neq 1$ not provable in axiomatic arithmetic?</a></p> <p>I looked here, but I'm not sure the question is the same as my one and I don't understand the answers. I also don't think the answers relate to wikipedia's version of the Peano axioms, but maybe I am wrong?</p>
5xum
112,884
<p>You only need two of the axioms:</p> <ol> <li><em>Axiom 1</em>: <span class="math-container">$0$</span> is a natural number.</li> <li><em>Axiom 8</em>: For every natural number <span class="math-container">$n$</span>, <span class="math-container">$S(n)=0$</span> is false.</li> </ol> <p>In particular, the statement of axiom <span class="math-container">$8$</span> this is also true for <span class="math-container">$n=0$</span> (because, from Axiom 1, we know <span class="math-container">$n$</span> is a natural number). Therefore, <span class="math-container">$S(0)=0$</span> is false.</p>
2,557,608
<p>Let $X$ be integer valued with Characteristic Function $\phi$. How to show that </p> <p>$\ P(|X| = k)= \frac{1}{2\pi }\int _{-\pi} ^{\pi} e^{ikt} \phi(t) dt$</p> <p>$P(S_n =k) =\frac{1}{2\pi }\int _{-\pi} ^{\pi} e^{ikt} (\phi(t))^n dt $</p>
Jean Marie
305,862
<p>Using the formulas of half-angle substitution, also called Weierstrass substitution formulas,</p> <p>$$\cos(u)=\dfrac{1-t^2}{1+t^2} , \ \sin(u)=\dfrac{2t}{1+t^2} \ \ \ \text{where} \ \ \ t:=\tan(\tfrac{u}{2}),$$</p> <p>one has to prove, setting $a=\tan(\alpha/2)$ and $b=\tan(\beta/2)$, that:</p> <p>$$2\dfrac{2a}{1+a^2}+2\dfrac{2b}{1+b^2}=3 \dfrac{2a}{1+a^2}\dfrac{1-b^2}{1+b^2} +3 \dfrac{2b}{1+b^2}\dfrac{1-a^2}{1+a^2} \implies ab=\dfrac15$$</p> <p>(we have used $\sin(\alpha+\beta)=\sin(\alpha)\cos(\beta)+\sin(\beta)\cos(\alpha)$).</p> <p>The first expression, once we have multiplied both sides by $(1+a^2)(1+b^2)$ and divided by 2, gives:</p> <p>$$2a+2ab^2+2b+2ba^2 = 3a-3ab^2+3b-3a^2b$$</p> <p>$$\iff 5ab(a+b)=a+b$$</p> <p>giving indeed, <strong>under the supplementary assumption that $a+b \neq 0$</strong> </p> <blockquote> <p>$$ab=\tfrac15.$$</p> </blockquote> <p><strong>Remark:</strong> the overall advantage of these formulas is that they convert trigonometrical issues into fully algebraic formulations.</p>
4,610,640
<p>I have this equation:</p> <p><span class="math-container">$$\frac{d\alpha}{dz} = - \frac{dr}{dz} * \frac{\tan(\alpha)}r $$</span></p> <p>I searched for some similar examples but none of these equations was like this one. I'm confused about this one. As far as I know, I used RK4 for equations like this: <span class="math-container">$$y'(t) = F(t,y(t))$$</span></p> <p>Thank you for helping me!</p> <hr /> <p>Here's the context for the equation. <a href="https://i.stack.imgur.com/UsOkt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UsOkt.png" alt="enter image description here" /></a></p> <p>I just deleted the lambda part to make the equation easier. But I still can't figure out how to solve it!</p>
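For what it's worth, once $r(z)$ and $dr/dz$ are available the equation already has the shape $\alpha'=F(z,\alpha)$ with $F(z,\alpha)=-\frac{r'(z)}{r(z)}\tan\alpha$, so a standard RK4 step applies unchanged. A minimal Python sketch — the profile $r(z)=1+z^2$, the initial angle, and the step size are hypothetical stand-ins, not taken from the original problem:

```python
import math

def rk4_step(f, z, y, h):
    # One classical fourth-order Runge-Kutta step for y' = f(z, y).
    k1 = f(z, y)
    k2 = f(z + h / 2, y + h * k1 / 2)
    k3 = f(z + h / 2, y + h * k2 / 2)
    k4 = f(z + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Hypothetical stand-in profile; replace r and drdz with the real ones.
def r(z):
    return 1.0 + z * z

def drdz(z):
    return 2.0 * z

def F(z, alpha):
    # d(alpha)/dz = -(dr/dz) * tan(alpha) / r(z)
    return -drdz(z) * math.tan(alpha) / r(z)

alpha = 0.3          # assumed initial angle (radians)
z, h = 0.0, 1e-3
while z < 1.0 - 1e-12:
    alpha = rk4_step(F, z, alpha, h)
    z += h
# Exact solutions conserve r(z) * sin(alpha), which gives an accuracy check.
```

Separating variables ($\cot\alpha\,d\alpha=-dr/r$) shows $r(z)\sin\alpha(z)$ is conserved along exact solutions, which makes a convenient correctness check; in practice a library solver such as `scipy.integrate.solve_ivp` would normally be used instead of a hand-rolled loop.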
NameAlreadyUsed
1,129,057
<p>The way to proceed is to add and subtract the same quantity at the numerator:</p> <p><span class="math-container">$\frac{s+1}{s+2}=\frac{s+1+(1-1)}{s+2}=\frac{s+2-1}{s+2}=\frac{s+2}{s+2}-\frac{1}{s+2}=1-\frac{1}{s+2}$</span></p> <p>The procedure is justified by the fact that the two addends (1 and -1) cancel each other out and therefore the numerator remains unchanged.</p>
4,610,640
<p>I have this equation:</p> <p><span class="math-container">$$\frac{d\alpha}{dz} = - \frac{dr}{dz} * \frac{\tan(\alpha)}r $$</span></p> <p>I searched for some similar examples but none of these equations was like this one. I'm confused about this one. As far as I know, I used RK4 for equations like this: <span class="math-container">$$y'(t) = F(t,y(t))$$</span></p> <p>Thank you for helping me!</p> <hr /> <p>Here's the context for the equation. <a href="https://i.stack.imgur.com/UsOkt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UsOkt.png" alt="enter image description here" /></a></p> <p>I just deleted the lambda part to make the equation easier. But I still can't figure out how to solve it!</p>
Ryszard Szwarc
715,896
<p>Perhaps it is more visible on substituting <span class="math-container">$x=s+2$</span> (hopefully it is legal in elementary school math course). Then <span class="math-container">$${s+1\over s+2}={x-1\over x}$$</span> Now it is hard to resist from dividing each term in the numerator by <span class="math-container">$x.$</span> <span class="math-container">$${x-1\over x}=1-{1\over x}=1-{1\over s+2}$$</span></p>
3,560,378
<p>Let <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span> be a convex function. Let <span class="math-container">$x,y,z \in \mathbb{R}$</span> and <span class="math-container">$a,b,c \geq 0$</span> such that <span class="math-container">$a+b+c=1$</span>. We want to show that <span class="math-container">$f(ax+by+cz) \leq af(x)+bf(y)+cf(z)$</span>.</p> <p>Obviously, if <span class="math-container">$c=0$</span>, the result is equivalent to the definition of convexity. I think I should use the two variable case to prove the the three variable case (and ideally generalize to <span class="math-container">$n$</span> variables), but I don't see how to do it.</p>
Riley
565,701
<p>If <span class="math-container">$c &gt; 0$</span> then <span class="math-container">$b + c &gt; 0$</span> so <span class="math-container">$$ by + cz = (b + c)\left(\frac{b}{b + c}y + \frac{c}{b + c}z\right), $$</span> and <span class="math-container">$a + (b + c) = 1$</span>. The motivation for this is that if you want to reduce to (usual) convexity then you should write <span class="math-container">$ax + by + cz = ax + (1-a)p$</span> where <span class="math-container">$p$</span> is some other point. </p>
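The reduction above is easy to sanity-check numerically. A quick sketch with the convex function $f(t)=t^2$ (a hypothetical choice — any convex $f$ would do), verifying $f(ax+by+cz)\le af(x)+bf(y)+cf(z)$ at random points:

```python
import random

def f(t):
    return t * t          # a convex function to test with

random.seed(0)
for _ in range(1000):
    x, y, z = (random.uniform(-10, 10) for _ in range(3))
    a, b, c = (random.random() for _ in range(3))
    s = a + b + c
    a, b, c = a / s, b / s, c / s        # normalize so a + b + c = 1
    lhs = f(a * x + b * y + c * z)
    rhs = a * f(x) + b * f(y) + c * f(z)
    assert lhs <= rhs + 1e-9             # three-point Jensen inequality
```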
872,275
<p>Here's the question: how can I solve this?</p> <p>$$\lim_{x \rightarrow \infty} x\sin (1/x) $$</p> <p>Now, from textbooks I know it is possible to use the substitution $x=1/t$; then the expression is rewritten in the following way:</p> <p>$$\frac{\sin t}{t}$$</p> <p>Then, and this is what I really can't understand, the textbook suggests finding the limit as $t\to0^+$ (which gives 1 as the result).</p> <p>OK, I can't figure out WHY finding that limit as $t$ approaches $0$ from the right gives me the answer to the limit at infinity of the original formula. I think I don't understand what the substitution implies.</p> <p>More than an answer, I need an explanation.</p> <p>(Sorry if I wrote something incorrectly; English is not my first language.) Thanks!!</p>
2'5 9'2
11,123
<p>$$\lim_{x\to\infty}x\sin(1/x)$$ is like the limit of this sequence: $$1\sin(1/1), 10\sin(1/10), 100\sin(1/100),\ldots$$ where we have inserted $x$ marching along from $1$ to $10$ to $100$ on its merry way to $\infty$. This is almost literally the same as $$\frac{1}{1}\sin(1), \frac{1}{0.1}\sin(0.1), \frac{1}{0.01}\sin(0.01),\ldots$$ which is an interpretation of $$\lim_{t\to0^+}\frac{1}{t}\sin(t)$$ with $t$ marching its way from $1$ down to $0.1$ down to $0.01$ on its merry way to $0$. So whatever the value of these limits are, $\lim\limits_{x\to\infty}x\sin(1/x)=\lim\limits_{t\to0^+}\frac{1}{t}\sin(t)$.</p>
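The point of the substitution can also be seen numerically: for $t=1/x$, the numbers $x\sin(1/x)$ and $\sin(t)/t$ are literally the same quantity, and both approach $1$. A small sketch:

```python
import math

for x in (1.0, 10.0, 100.0, 1000.0):
    t = 1.0 / x
    # x*sin(1/x) and sin(t)/t are the same number under t = 1/x.
    assert abs(x * math.sin(1.0 / x) - math.sin(t) / t) < 1e-12

# Both march toward 1 as x -> infinity, i.e. as t -> 0+.
assert abs(1e6 * math.sin(1e-6) - 1.0) < 1e-9
```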
102,406
<p>I have the following question: If $E$ is a ring spectrum, then a complex orientation of $E$ is an element of $E^2(\mathbb{C}P^{\infty})$ that is mapped to $1$ in $E^2(\mathbb{C}P^{1})$. I have read that there is a natural orientation on $MU$ the spectrum of complex bordism. Can you tell me, how this orientation is defined and why it is an orientation?</p>
Akhil Mathew
344
<p>A complex orientation on a homotopy commutative ring spectrum $E$ is sometimes defined to be an element in $\widetilde{E}^2(\mathbb{CP}^\infty)$ which restricts to the canonical element of $\widetilde{E}^2(S^2) = \pi_0 E $. </p> <p>In fact, there is another definition of an orientation, which generalizes better to other settings: a complex orientation on $E$ is an assignment of a Thom class for each complex vector bundle $V \to X$ (that is, a class in $\widetilde{E}( B(V), S(V))$ which restricts to the canonical generator on each copy of $(B(V_x), S(V_x)) \subset (B(V), S(V))$ for each $x \in X$). (In ordinary homology, this uniquely determines the Thom class, but in generalized homology, this is <em>additional data</em> rather than merely a condition to be satisfied.)</p> <p>For a complex orientation, this assignment is required to be functorial, and moreover multiplicative: Thom classes for vector bundles $V, W \to X$ determine a Thom class for $V \oplus W$ (because the Thom space of $V \oplus W$ is the smash product of the Thom spaces of $V, W$). If you think about it this way, it is easier to see that a complex orientation on $E$ is the same as a map of ring spectra $MU \to E$. In fact, a complex orientation on $E$ means that the <em>universal</em> vector bundle $\xi_n \to BU(n)$ has a complex orientation, which if you unwind is a map $\mathrm{Th}(\xi_n) \to E$. These give the spaces in the $MU$-spectrum and multiplicativity makes this a map $MU \to E$ (strictly speaking, one has to make some kind of a $\lim^1$-argument to make this rigorous). </p> <p>One reason that this is a more natural definition is that it generalizes to other types of bundles than complex bundles: one can talk about a real-oriented cohomology theory $E$ (which comes from a map $MO \to E$) or a spin-oriented cohomology theory (coming from a map $M \mathrm{Spin} \to E$), among other things. 
It takes a little work, though, to show that complex orientability is equivalent to the degeneration of the AHSS for $\mathbb{CP}^\infty$, but one direction is easy: if $E$ has a complex orientation in the sense of this answer, then we get a class in $\widetilde{E}^2(\mathbb{CP}^\infty)$ by regarding $\mathbb{CP}^\infty$ as the Thom space of the tautological bundle over itself. </p> <p>You can read more about this in the <a href="http://people.virginia.edu/~mah7cd/Foundations/coctalos.pdf" rel="nofollow">course notes</a> for "Complex oriented cohomology theories and the language of stacks," among other places. I also found Jacob Lurie's notes on "<a href="http://www.math.harvard.edu/~lurie/252x.html" rel="nofollow">chromatic homotopy theory</a>" very helpful. </p>
102,406
<p>I have the following question: If $E$ is a ring spectrum, then a complex orientation of $E$ is an element of $E^2(\mathbb{C}P^{\infty})$ that is mapped to $1$ in $E^2(\mathbb{C}P^{1})$. I have read that there is a natural orientation on $MU$ the spectrum of complex bordism. Can you tell me, how this orientation is defined and why it is an orientation?</p>
Peter May
14,447
<p>I don't know how to comment rather than answer, but Craig, the equivalence is really not mysterious: the zero section of the universal complex line bundle gives an equivalence from $\mathbb{C}P^{\infty}$ to the total space of the unit disk bundle $D$. The total space $S$ of the unit sphere bundle is really just $S^{\infty}$ (think of the universal principal circle bundle), which is contractible, and the quotient map $D\to D/S = MU(1)$ is an equivalence. The equivalence $\mathbb{C}P^{\infty}\to MU(1)$ is the composite of these two equivalences.</p> <p>Let me add a conceptual aside about orientations of fibrations. Let $p\colon X\to B$ be a spherical fibration with fiber $S^n$, such as the fiberwise $1$-point compactification of an $n$-plane bundle and let $E$ be a ring spectrum. In ``Parametrized homotopy theory'', Johann Sigurdsson and I show how to make sense of the parametrized spectrum $X\wedge E$ over $B$ with fibers $E$. An orientation (in the classical cohomological sense) is the same thing as a trivialization of this parametrized fibration, that is, an equivalence from it to the parametrized spectrum $(B\times S^n)\wedge E$ over $B$. </p> <p>I should admit that my answer is digressive. Section 5 of my paper ``What are $E_{\infty}$-ring spectra'' relates universal $E$-orientations of $G$-bundles to maps of ring spectra $MG \to E$.</p>
2,675,954
<p>Fractals, when viewed as functions, are everywhere continuous and nowhere differentiable. Can this also be used as a definition for fractals?</p> <p>i.e. Are all fractals everywhere continuous and nowhere differentiable? And also: Are all functions that are everywhere continuous and nowhere differentiable fractals?</p>
ancient mathematician
414,424
<p>Note that $$ M:= \begin{bmatrix} yz-x^2&amp;zx-y^2&amp;xy-z^2\\ zx-y^2&amp;xy-z^2&amp;yz-x^2\\ xy-z^2&amp;yz-x^2&amp;zx-y^2 \end{bmatrix} $$ is the matrix of $2\times 2$ cofactors of the matrix: $$ N:= \begin{bmatrix} x &amp; y &amp; z\\ y &amp; z &amp; x\\ z&amp; x &amp; y \\ \end{bmatrix}. $$</p> <p>Then as usual $MN=(\det N) I$, so that $\det M \det N =(\det N)^3$. As $\det N\not=0$ (as a polynomial) we have that $\Delta =\det M= (\det N)^2$ is the perfect square of a polynomial of degree $3$ -- which is what was asked.</p> <p>But from this we can see everything: there is clearly a factor $(x+y+z)$ in $\det N$ and a factor $\frac{1}{2}( (x-y)^2 +(y-z)^2 +(z-x)^2))$ in $\det M$. Establishing the value of the constant is trivial.</p>
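The identity $\det M=(\det N)^2$ (and the underlying relation $MN=(\det N)I$) can be confirmed numerically at random points. A small NumPy sketch, purely as a sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    x, y, z = rng.uniform(-2, 2, size=3)
    N = np.array([[x, y, z],
                  [y, z, x],
                  [z, x, y]])
    # M is the matrix of 2x2 cofactors of N (N is symmetric, so M = adj N).
    M = np.array([[y*z - x*x, z*x - y*y, x*y - z*z],
                  [z*x - y*y, x*y - z*z, y*z - x*x],
                  [x*y - z*z, y*z - x*x, z*x - y*y]])
    # M N = (det N) I, hence det M = (det N)^2.
    assert np.allclose(M @ N, np.linalg.det(N) * np.eye(3))
    assert abs(np.linalg.det(M) - np.linalg.det(N) ** 2) < 1e-9
```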
4,631,014
<blockquote> <p>Finding range of function</p> </blockquote> <blockquote> <p><span class="math-container">$\displaystyle f(x)=\cos(x)\sin(x)+\cos(x)\sqrt{\sin^2(x)+\sin^2(\alpha)}$</span></p> </blockquote> <p>I have used the algebraic inequality</p> <p><span class="math-container">$\displaystyle -(a^2+b^2)\leq 2ab\leq (a^2+b^2)$</span></p> <p><span class="math-container">$\displaystyle -(\cos^2(x)+\sin^2(x))\leq 2\cos(x)\sin(x)\leq \cos^2(x)+\sin^2(x)\cdots (1)$</span></p> <p>And</p> <p><span class="math-container">$\displaystyle -[\cos^2(x)+\sin^2(x)+\sin^2(\alpha)]\leq 2\cos(x)\sqrt{\sin^2(x)+\sin^2(\alpha)}\leq [\cos^2(x)+\sin^2(x)+\sin^2(\alpha)]\cdots (2)$</span></p> <p>Adding <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>:</p> <p><span class="math-container">$\displaystyle -(1+1+\sin^2(\alpha))\leq 2\cos(x)[\sin(x)+\sqrt{\sin^2(x)+\sin^2(\alpha)}]\leq (1+1+\sin^2(\alpha))$</span></p> <p><span class="math-container">$\displaystyle -\bigg(1+\frac{\sin^2(\alpha)}{2}\bigg)\leq f(x)\leq \bigg(1+\frac{\sin^2(\alpha)}{2}\bigg)$</span></p> <p>I cannot see where my attempt goes wrong.</p> <p>Please have a look at it.</p> <p>But the actual answer is <span class="math-container">$\displaystyle -\sqrt{1+\sin^2(\alpha)}\leq f(x)\leq \sqrt{1+\sin^2(\alpha)}$</span></p>
Yuri Negometyanov
297,350
<p>Let <span class="math-container">$p=\sin^2\alpha,\; y=\cos 2x,\;$</span> then <span class="math-container">$$\cos x \sin x+\cos x\sqrt{\sin^2x+\sin^2\alpha\mathstrut} =\dfrac{\pm\sqrt{1-y^2}\pm\sqrt{1-y^2+2p(1+y)\mathstrut}}2 = f(y),$$</span> <span class="math-container">$$2f'(y) = \mp\dfrac y{\sqrt{1-y^2}}\pm\dfrac{p-y}{\sqrt{1-y^2+2p(1+y)}}.$$</span> The set of possible extremes of <span class="math-container">$f(y)$</span> corresponds with the expression<br /> <span class="math-container">$$\left[\begin{align} &amp;y^2-1=0\\[4pt] &amp;y^2-2py-2p-1=0\\[4pt] &amp;y^2(1+2py-y^2+2p)=(p^2-2py+y^2)(1-y^2), \end{align}\right.,$$</span> <span class="math-container">$$\left[\begin{align} &amp;y=\pm1\\[4pt] &amp;y=p\pm\sqrt{p^2+2p+1}\\[4pt] &amp;(2+p)y^2+2y-p=0, \end{align}\right.$$</span> <span class="math-container">$$y_m\in\left\{-1,1, \dfrac p{2+p}\right\},\quad f(y_m)\in\left\{0, \pm\sqrt{p\mathstrut}, \pm \sqrt{1+p} \right\},$$</span> <span class="math-container">$$\mathbb{f(y)\in\left[-\sqrt{1+\sin^2\alpha}, \sqrt{1+\sin^2\alpha}\right]}.$$</span></p>
227,732
<p>Are there any general relations between the eigenvalues of a matrix $M$ and those of $M^2$?</p>
Hagen von Eitzen
39,174
<p>Of course if $\lambda$ is an eigenvalue of $M$, then $\lambda^2$ is an eigenvalue of $M^2$: $$M v =\lambda v\quad\Rightarrow\quad M^2v=M\lambda v=\lambda M v =\lambda^2 v.$$</p>
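The computation above says slightly more: every eigenvector of $M$ is also an eigenvector of $M^2$, with eigenvalue squared. A quick NumPy illustration on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
w, V = np.linalg.eig(M)      # columns of V are eigenvectors: M V = V diag(w)

# Each eigenvector v of M satisfies M^2 v = lambda^2 v.
assert np.allclose((M @ M) @ V, V * w**2)
```

(The converse direction is subtler over the reals: a rotation by $90^\circ$ has no real eigenvalues, while its square $-I$ does.)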
4,477,354
<p>Fourier series for the function <span class="math-container">$f(x)=c^x$</span>, <span class="math-container">$c\in\mathbb Z$</span>, <span class="math-container">$c&gt;1$</span>, on the interval <span class="math-container">$(a,b)$</span>, where <span class="math-container">$a,b\in\mathbb R$</span>, <span class="math-container">$a&lt;b$</span>.</p> <p>Can I use the following formulas in this case? <span class="math-container">$$f(x)=\frac{a_0}{2}+\sum\limits_{n=1}^{+\infty}\left[a_n\cos\left(\frac{\pi nx}{l}\right)+b_n\sin\left(\frac{\pi nx}{l}\right)\right],$$</span> <span class="math-container">$$a_n=\frac{1}{l}\int\limits_a^bf(x)\cos\left(\frac{\pi nx}{l}\right)dx,$$</span> <span class="math-container">$$b_n=\frac{1}{l}\int\limits_a^bf(x)\sin\left(\frac{\pi nx}{l}\right)dx,$$</span> where <span class="math-container">$l=(b-a)/2$</span>.</p> <p>In particular, I need the Fourier series for the function <span class="math-container">$f(x)=2^x$</span> on the interval <span class="math-container">$(0,1)$</span>.</p>
Blitzer
429,808
<p>Lemma: For <span class="math-container">$x&gt;0$</span> and <span class="math-container">$i=1,2,\dots$</span>, <span class="math-container">$$x^i+i \geqslant ix+1$$</span> Proof of lemma:</p> <p>If <span class="math-container">$x \leqslant 1$</span> then <span class="math-container">$$1+x+x^2+\cdots +x^{i-1} \leqslant 1+1+1+\cdots + 1 = i$$</span> <span class="math-container">$$\frac{1-x^i}{1-x} \leqslant i$$</span> <span class="math-container">$$1-x^i\leqslant i-ix$$</span> <span class="math-container">$$x^i+i \geqslant ix+1$$</span></p> <p>On the other hand, if <span class="math-container">$x&gt;1$</span> then, by the binomial theorem, <span class="math-container">$$x^i=(1+(x-1))^i=1+i(x-1)+\cdots$$</span> where all of the terms are positive, so <span class="math-container">$$x^i \geqslant 1+i(x-1)$$</span> <span class="math-container">$$x^i+i \geqslant ix+1$$</span></p> <p>We now set <span class="math-container">$x=x_i$</span> in the lemma and sum, to get <span class="math-container">$$\sum_{i=1}^n (x_i^i+i) \geqslant \sum_{i=1}^n (ix_i+1)$$</span> <span class="math-container">$$\sum_{i=1}^n x_i^i + \frac{n(n-1)}2 \geqslant \sum_{i=1}^n ix_i$$</span></p> <p>as required.</p>
627,718
<p>I need help evaluating the integrals in Fourier series.</p> <p>For example, for the function <span class="math-container">$\cos^{2}x$</span>, I can evaluate <span class="math-container">$a_0$</span>, <span class="math-container">$a_n$</span>, and <span class="math-container">$b_n$</span>, where <span class="math-container">$a_n$</span> are the coefficients of the cosine terms and <span class="math-container">$b_n$</span> the coefficients of the sine terms. In this case, because it is an even function, only cosine terms will exist, and the integral for calculating them becomes:</p> <p><span class="math-container">$$a_n=\frac{1}{2\pi}\int_{-\pi}^{\pi} \cos^2 x\cos nx \,dx$$</span></p> <p>How can this be solved?</p> <p>Similarly, could someone please show the step-by-step calculation for the Fourier series of <span class="math-container">$\cos^nx$</span>?</p>
user71352
71,352
<p>Hint:</p> <p>$\cos^{2}(x)=\frac{1+\cos(2x)}{2}$ </p> <p>and </p> <p>$\cos(x)\cos(y)=\frac{1}{2}\big(\cos(x-y)+\cos(x+y)\big)$</p>
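A numerical check of the hint, using the product-to-sum identity with the plus sign, $\cos x\cos y=\frac12\big(\cos(x-y)+\cos(x+y)\big)$, and the standard normalization $a_n=\frac1\pi\int_{-\pi}^{\pi}\cos^2 x\cos nx\,dx$ (a sketch; the midpoint rule is essentially exact for trigonometric polynomials over a full period):

```python
import math

# Product-to-sum identity check at an arbitrary point.
x, y = 0.7, 1.9
assert abs(math.cos(x) * math.cos(y)
           - (math.cos(x - y) + math.cos(x + y)) / 2) < 1e-12

def a(n, steps=1000):
    # Midpoint-rule approximation of (1/pi) * integral over [-pi, pi]
    # of cos^2(x) cos(nx) dx.
    h = 2 * math.pi / steps
    total = sum(math.cos(-math.pi + (k + 0.5) * h) ** 2
                * math.cos(n * (-math.pi + (k + 0.5) * h))
                for k in range(steps))
    return total * h / math.pi

assert abs(a(0) - 1.0) < 1e-9    # cos^2 x = 1/2 + cos(2x)/2, so a_0/2 = 1/2
assert abs(a(2) - 0.5) < 1e-9    # only the cos(2x) term survives
assert abs(a(1)) < 1e-9 and abs(a(3)) < 1e-9
```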
23,980
<p>I am calculating huge data files with an external program. I would like then to import the data into Mathematica for analysis. The files are 2 columns and up to many millions of rows. </p> <p>So for small data files I have just been using:</p> <pre><code>dataTable = Import["data.txt", "Table"]; </code></pre> <p>However once the files gets so large (into the millions), Mathematica and my computer slow down considerably. Now, I don't really care about most of the data in these files and in Mathematica I end up only using several thousand entries. So my question is, can I import a random sampling from the large data file (say 10000 elements only) instead of importing the entire file? So if I had a file with 10^6 rows, I would like to import just 10^4 of those rows (preferably randomly).</p>
BoLe
6,555
<p>Via @chuy and @rcollyer with sequential skipping and reading of records, and without backtracking.</p> <pre><code>( If[# != 1, Skip[s, Record, # - 1], Null]; ReadList[s, Number, 2]) &amp; /@ Differences[p~Prepend~0] </code></pre> <p>List <code>p</code> of lines you want to read, sorted; eg. some random ten from a hundred:</p> <pre><code>p = Sort[RandomSample[Range[100], 10]]; </code></pre> <p>Your file opened as a stream, eg. <code>s = OpenRead["data.txt"]</code>.</p>
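For readers outside Mathematica: the same goal — a uniform sample of lines without loading the whole file — can be reached in a single pass with reservoir sampling, which also avoids choosing line numbers up front. A hypothetical Python sketch (not part of the answer above):

```python
import random

def sample_rows(path, k, seed=0):
    """Keep a uniform random sample of k lines from a file in one pass,
    using O(k) memory (reservoir sampling)."""
    rng = random.Random(seed)
    reservoir = []
    with open(path) as f:
        for i, line in enumerate(f):
            if i < k:
                reservoir.append(line)
            else:
                j = rng.randrange(i + 1)
                if j < k:
                    reservoir[j] = line
    return reservoir
```

Each of the $n$ lines ends up in the sample with probability exactly $k/n$, and the file is never held in memory.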
2,062,638
<blockquote> <p>Consider a curve $\gamma(t): [a,b] \to \mathbb{R}^n$. The curve $$-\gamma(t)=\gamma(a+b-t) \,\,\,\,\,\,\,\,\,\,\,\,\,\, t \in [a,b]$$</p> <p>is called the "reverse" curve (or path) of $\gamma(t)$.</p> </blockquote> <p>This definition is clear, but how is the "reverse" path defined in the following case?</p> <p>Take two regular curves $\gamma_1:[a,b] \to \mathbb{R}^n$ and $\gamma_2:[c,d] \to \mathbb{R}^n$ with $\gamma_1(b)=\gamma_2(c)$ and define $\gamma:[a,d] \to \mathbb{R}^n$ as $$\gamma(t)=\begin{cases} \gamma_1(t) &amp; t\in [a,b] \\ \gamma_2(t) &amp;t \in [c,d] \end{cases}$$</p> <p>Now, what is $-\gamma(t)$?</p> <p>I think there are two possibilities:</p> <ol> <li>$$-\gamma(t)=\begin{cases} \gamma_1(a+b-t) &amp; t\in [a,b] \\ \gamma_2(a+b-t) &amp;t \in [c,d] \end{cases}$$</li> <li>$$-\gamma(t)=\begin{cases} \gamma_1(a+b-t) &amp; t\in [a,b] \\ \gamma_2(c+d-t) &amp;t \in [c,d] \end{cases}$$</li> </ol>
Alex M.
164,025
<p>You want to map $a$ to $\gamma_2 (d)$, $b$ and $c$ to $\gamma_2 (c) = \gamma_1 (b)$ and $d$ to $\gamma_1 (a)$. Looking for a linear map $\varphi(t) = At + B$ taking $a$ and $b$ to $d$ and respectively $c$ we find $\begin{cases} Aa + B = d \\ Ab + B = c \end{cases}$, whence $A = \frac {d-c} {a-b}$ and $B = \frac {ac-bd} {a-b}$, so that $\varphi (t) = \frac {d-c} {a-b}t + \frac {ac-bd} {a-b}$.</p> <p>The reverted curve is then $-\gamma : [a,b] \cup [c,d] \to \Bbb R^n$ given by</p> <p>$$-\gamma (t) = \begin{cases} \gamma_2 \big( \varphi (t) \big), &amp; t \in [a,b] \\ \gamma_1 \big( \varphi^{-1} (t) \big), &amp; t \in [c,d] \end{cases} .$$</p>
745,382
<p>I'm interested in hypercomplexes, or number systems with many square roots of $-1$. Now, I know that <a href="http://en.wikipedia.org/wiki/Quaternion" rel="nofollow">quaternions</a> are non-commutative, but associative. I'm wondering if it's possible to have a number system with more "elements" (or square roots of $-1$) that remains associative.</p> <p>My understanding is that hypercomplex systems lose many properties in order to keep division. So I'm wondering, if we discard division, can we keep the associative property and add more square roots of $-1$?</p> <p>In other words, <strong>Can we have a number system with $2^n$ square roots of $-1$ that is associative, for arbitrary $n$?</strong></p>
hmakholm left over Monica
14,366
<p>Depends on what you mean by "number system". You can certainly construct the ring</p> <p>$$ \mathbb R[X_1,X_2,\ldots, X_n]/\langle X_1^2+1, X_2^2+1, \ldots, X_n^2 + 1\rangle $$</p> <p>in which $\prod_{i\in A} X_i$ is a square root of $-1$ for each of the $2^{n-1}$ odd-sized subsets $A\subseteq \{1,\ldots,n\}$. It will be associative and commutative.</p> <p>But it will be horribly division-less.</p> <p>In fact, for any commutative ring, if there are two different elements with the same square that are <em>not</em> each other's negative, it will contain <em>zero divisors</em>, because $$ (a+b)(a-b) = a^2 - b^2 = 0 $$ if $a^2=b^2$, and unless $a$ and $b$ are the same or the negative of each other, both of $a+b$ and $a-b$ will be nonzero.</p>
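The counting and the squaring rule are mechanical enough to check by computer: represent each basis monomial by its subset $A\subseteq\{1,\dots,n\}$ of generators and multiply using the relation $X_i^2=-1$. A small sketch:

```python
from itertools import combinations

def mult(A, B):
    # Product of basis monomials X_A * X_B under X_i^2 = -1: generators
    # appearing in both contribute a factor of -1 each, and the remaining
    # ones survive as the symmetric difference A ^ B.
    return frozenset(A ^ B), (-1) ** len(A & B)

n = 5
odd_subsets = [set(A) for size in range(1, n + 1, 2)
               for A in combinations(range(n), size)]

# There are 2^(n-1) odd-sized subsets, and each monomial X_A squares to -1.
assert len(odd_subsets) == 2 ** (n - 1)
assert all(mult(A, A) == (frozenset(), -1) for A in odd_subsets)
```

The zero divisors predicted above show up immediately: with $a=X_1$ and $b=X_2$ we get $(X_1+X_2)(X_1-X_2)=X_1^2-X_2^2=-1-(-1)=0$.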
2,722,170
<p>I am a grade 11 student and I have to learn vectors for the IB exam. I know that to find the distance of a vector between two points in a 3 dimensional space for lets say point A and B, then you would use the formula \begin{align*} |AB|=\sqrt{(x_1-x_2)^2+(y_1-y_2)^2+(z_1-z_2)^2}. \end{align*} I am having difficulty visualizing and understanding why the coordinates of the points are being subtracted here. I really appreciate the feedback. Sorry for the low quality of the link, but I guess I don't have enough reputation yet to insert images directly. Also, in the formula all the subscripts should be reversed. So x1 becomes x2 and vice versa.</p>
DanielWainfleet
254,665
<p>Assuming $P=(x,0)\ne (0,0)$ and $\theta \not \in \{0,\pi /2\} $, whether you return to $P$ does not depend on $x$ but on the angle $\psi$ between the $X$-axis and the segment joining $P$ to the first point chosen on $L.$ Because we can multiply all distances by a positive constant without affecting the outcome.</p> <p>Using co-ordinates we can show by computing the successive points in the sequence that if you return to $P$ then there is a $2$-variable polynomial $f$ such that $f( \cos \psi,\sin \psi)=0$ where the co-efficients of $f$ belong to the field $F$ generated by $\{\cos \theta,\sin \theta\}.$</p> <p>I haven't checked but I think that $\deg (f)\ne 0.$ This implies that $\sin^2 \psi$ is algebraic over $F$, and $F$ is countable, so there are only countably many possible $\psi$ for which you return to $P$ (for a given $\theta $) but uncountably many $\psi$ where you don't.</p> <p>Perhaps there is an ingenious geometric solution ?</p>
131,320
<p>Let $x=[a_0;a_1,a_2]$ be shorthand notation for the continued fraction $$x=a_0+\frac{1}{a_1+\frac{1}{a_2}}.$$ Then every $x\in\mathbb{Q}$ can be represented as a finite continued fraction $[a_0;a_1,a_2,\dots,a_n]$, where $a_0\in\mathbb{Z}$ and $a_1,a_2,\dots,a_n\in\mathbb{N}$. Moreover, let $$\begin{matrix} p_0=a_0 &amp; q_0=1\\ p_1=a_0a_1+1 &amp; q_1=a_1\\ p_k=a_kp_{k-1}+p_{k-2} &amp; q_k=a_kq_{k-1}+q_{k-2} \end{matrix}$$ and define $$C_k=\frac{p_k}{q_k}$$to be the $k$th <em>convergent</em> of $x$.</p> <p>I am trying to show that for any finite continued fraction, all even-numbered convergents are less than the fraction's value.</p> <p>For example, let $x=43/30$. Then $$x=1+\frac{1}{2+\frac{1}{3+\frac{1}{4}}},$$ or $x=[1;2,3,4]$. It follows that $C_0=1$, $C_1=3/2$, $C_2=10/7$, and $C_3=43/30$, and both $C_0$ and $C_2$ are less than $x$, as stated.</p> <p>How can I prove this by induction?</p>
Did
6,179
<p>Write $x=[a_0;a_1,a_2,\ldots]$ as $x=F(a_0,a_1,a_2,\ldots)$, and let us first prove an auxiliary result;</p> <blockquote> <p>The function $(a_k)_{k\geqslant0}\mapsto F((a_k)_{k\geqslant0})$ is increasing with respect to its even-numbered arguments $a_{2k}$ and decreasing with respect to its odd-numbered arguments $a_{2k+1}$.</p> </blockquote> <p>Note that $F((a_k)_{k\geqslant0})$ is increasing with respect to $a_0$. Next, assume the sense of variation of $F$ with respect to its $i$th argument is known, namely that $F((a_k)_{k\geqslant0})$ is in/de-creasing with respect to $a_i$. Then $F((b_k)_{k\geqslant0})$ is in/de-creasing with respect to $b_i$, hence $1/F((b_k)_{k\geqslant0})$ is de/in-creasing with respect to $b_i$, which proves that $F((a_k)_{k\geqslant0})=a_0+1/F((b_k)_{k\geqslant0})$ for $b_k=a_{k+1}$ for every $k\geqslant0$, is de/in-creasing with respect to $b_i=a_{i+1}$. Thus a recursion over $i\geqslant0$ yields the result claimed above.</p> <p>Let us now use this to answer the question asked. Every convergent of $x$ is some $x^{(i)}=[a_0;a_1,a_2,\ldots,a_{i}]$ and $x=[a_0;a_1,a_2,\ldots,a'_{i}]$ where $a'_{i}=[a_i;a_{i+1},a_{i+2},\ldots]$. Since $a_i\lt a'_i$ and every other argument in $x$ and $x^{(i)}$ coincide, the auxiliary result above yields $x^{(i)}\lt x$ for every even $i$ and $x^{(i)}\gt x$ for every odd $i$.</p> <p>Finally the two crucial facts are that $F((a_k)_{k\geqslant0})$ is (i) increasing with respect to $a_0$ and (ii) a decreasing function of $F((b_k)_{k\geqslant0})$, where $(b_k)_{k\geqslant0}$ is an increasing function of $(a_k)_{k\geqslant1}$.</p>
131,320
<p>Let $x=[a_0;a_1,a_2]$ be shorthand notation for the continued fraction $$x=a_0+\frac{1}{a_1+\frac{1}{a_2}}.$$ Then every $x\in\mathbb{Q}$ can be represented as a finite continued fraction $[a_0;a_1,a_2,\dots,a_n]$, where $a_0\in\mathbb{Z}$ and $a_1,a_2,\dots,a_n\in\mathbb{N}$. Moreover, let $$\begin{matrix} p_0=a_0 &amp; q_0=1\\ p_1=a_0a_1+1 &amp; q_1=a_1\\ p_k=a_kp_{k-1}+p_{k-2} &amp; q_k=a_kq_{k-1}+q_{k-2} \end{matrix}$$ and define $$C_k=\frac{p_k}{q_k}$$to be the $k$th <em>convergent</em> of $x$.</p> <p>I am trying to show that for any finite continued fraction, all even-numbered convergents are less than the fraction's value.</p> <p>For example, let $x=43/30$. Then $$x=1+\frac{1}{2+\frac{1}{3+\frac{1}{4}}},$$ or $x=[1;2,3,4]$. It follows that $C_0=1$, $C_1=3/2$, $C_2=10/7$, and $C_3=43/30$, and both $C_0$ and $C_2$ are less than $x$, as stated.</p> <p>How can I prove this by induction?</p>
robjohn
13,854
<p>Given that $$ \begin{array}{c} p_0=a_0&amp;\text{and}&amp;q_0=1\\ p_1=a_0a_1+1&amp;\text{and}&amp;q_1=a_1\\ p_k=a_kp_{k-1}+p_{k-2}&amp;\text{and}&amp;q_k=a_kq_{k-1}+q_{k-2}\tag{1} \end{array} $$ notice that $$ p_1q_0-p_0q_1=1\tag{2} $$ and $$ \begin{align} p_{k+1}q_k-p_kq_{k+1} &amp;=(a_{k+1}p_k+p_{k-1})q_k-p_k(a_{k+1}q_k+q_{k-1})\\ &amp;=-(p_kq_{k-1}-p_{k-1}q_k)\tag{3} \end{align} $$ Equations $(2)$ and $(3)$ show that $$ p_{k+1}q_k-p_kq_{k+1}=(-1)^k\tag{4} $$ Thus, $$ \frac{p_{k+1}}{q_{k+1}}-\frac{p_k}{q_k}=\frac{(-1)^k}{q_kq_{k+1}}\text{ and }\frac{p_k}{q_k}-\frac{p_{k-1}}{q_{k-1}}=\frac{(-1)^{k-1}}{q_{k-1}q_k}\tag{5} $$ adding these equations yields $$ \frac{p_{k+1}}{q_{k+1}}-\frac{p_{k-1}}{q_{k-1}}=(-1)^{k-1}\frac{q_{k+1}-q_{k-1}}{q_{k-1}q_kq_{k+1}}\tag{6} $$ Since $\{q_k\}$ is an increasing sequence, $(6)$ is positive for odd $k$ (even convergents) and negative for even $k$ (odd convergents).</p> <p>Therefore, $\left\{\dfrac{p_{2k}}{q_{2k}}\right\}$ is increasing and $\left\{\dfrac{p_{2k+1}}{q_{2k+1}}\right\}$ is decreasing.</p>
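The recurrences $(1)$ and the alternating behaviour can be checked on the worked example $x=43/30=[1;2,3,4]$ with exact rational arithmetic; the seed values $(p_{-1},q_{-1})=(1,0)$ below are the standard convention, equivalent to the explicit $p_0,p_1$ above:

```python
from fractions import Fraction

def convergents(a):
    # p_k = a_k p_{k-1} + p_{k-2}, q_k = a_k q_{k-1} + q_{k-2},
    # seeded with (p_{-1}, q_{-1}) = (1, 0) and (p_0, q_0) = (a_0, 1).
    p_prev, p = 1, a[0]
    q_prev, q = 0, 1
    out = [Fraction(p, q)]
    for ak in a[1:]:
        p, p_prev = ak * p + p_prev, p
        q, q_prev = ak * q + q_prev, q
        out.append(Fraction(p, q))
    return out

C = convergents([1, 2, 3, 4])
x = Fraction(43, 30)
assert C == [Fraction(1), Fraction(3, 2), Fraction(10, 7), Fraction(43, 30)]
assert all(c < x for c in C[0::2])    # even-numbered convergents lie below x
assert all(c >= x for c in C[1::2])   # odd-numbered lie above (last equals x)
```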
217,580
<p>Consider a compact manifold $M$ with boundary and corner. As an example, we could have the cube $\{(x_1, x_2,..x_n) \in \mathbb{R}^n : x_i \in [0,1]\}$. We could very well define the Laplacian $\Delta$ on such an $M$ with Dirichlet or Neumann boundary conditions. In such situations, are the eigenfunctions of the Laplacian smooth? It seems to me that this requires some sort of elliptic regularity theory for singular spaces (maybe there is an elliptic regularity theory for manifolds with Lipschitz boundaries). A reference would be highly appreciated.</p>
Coffee
40,090
<p>Let me add some other references about elliptic regularity on singular manifolds:</p> <p>1) B.-W. Schulze and his co-authors have developed a comprehensive calculus for elliptic pseudo-differential operators (in particular differential operators) on singular spaces, including elliptic regularity and Fredholm properties:</p> <p>1.1) For manifolds with conical singularities and manifolds with edges consider the following books:</p> <p>B.-W. Schulze. Boundary Value Problems and Singular Pseudo-Differential Operators. J. Wiley, Chichester, 1998. </p> <p>Ju.V. Egorov and B.-W. Schulze. Pseudo-Differential Operators, Singularities, Applications. Birkhäuser Verlag, Basel, 1997</p> <p>1.2) For manifolds with corners consider the following paper:</p> <p>Schulze, Bert-Wolfgang. "The Mellin pseudo-differential calculus on manifolds with corners." Symposium “Analysis on Manifolds with Singularities”, Breitenbrunn 1990. Vieweg+ Teubner Verlag, 1992.</p> <p>1.3) For manifolds with corners and edges the following is an upgraded version of the last paper:</p> <p>B.-W. Schulze, Operators with symbol hierarchies and iterated asymptotics, Publications of RIMS, Kyoto University {38}, 4 (2002), 735-802</p> <p>1.4) For a more geometric treatment of elliptic operators on singular spaces consider: </p> <p>Nazaikinskii, V. E., Savin, A. Y., Schulze, B. W., &amp; Sternin, B. Y. (2005). Elliptic theory on singular manifolds. CRC Press.</p> <p>In these papers/books they approach the problem with the goal of obtaining an algebra that contains elliptic operators and their parametrices (inverses modulo compact operators). If you are interested in a particular class of operators and need a concrete theory for that specific class, you should also check the books of Mazya/Rossmann/Kozlov:</p> <p>Mazya, V. G., and J. Rossmann. Elliptic equations in polyhedral domains. No. 162. American Mathematical Soc., 2010. 
(this book might be particularly useful for the Laplacian on a cube)</p> <p>Also for eigenvalues and spectral theory check this one:</p> <p>Kozlov, Vladimir, Vladimir G. Maz'ya, and Jürgen Rossmann. Spectral problems associated with corner singularities of solutions to elliptic equations. No. 85. American Mathematical Soc., 2001.</p>
3,507,927
<p>Is there any simplification by the way of standard properties of Laplace Transform for this? <span class="math-container">$$L_{s} [t f(t) f'(t)]$$</span></p> <p>where <span class="math-container">$f'(t)$</span> is <span class="math-container">$\frac{d}{dt} f(t)$</span></p> <p>Alternately, is there a simplification of <span class="math-container">$L_{s} [f(t) f'(t)]$</span>, so that I can compute my original expression by computing the derivative <span class="math-container">$-\frac{d}{ds} L_{s} [f(t) f'(t)]$</span> ?</p>
GFauxPas
173,170
<p><span class="math-container">$$\int_0^{+\infty} f'(t)f(t) e^{-st} \, \mathrm d t = I(s)$$</span></p> <p>Define:</p> <p><span class="math-container">$$u(t)=f(t)e^{-st}$$</span> <span class="math-container">$$u'(t)=f'(t)e^{-st}-s e^{-st} f(t)$$</span> <span class="math-container">$$v'(t)=f'(t)$$</span> <span class="math-container">$$v(t)=f(t)$$</span></p> <p>Integrate by parts: <span class="math-container">$\int v'u=vu-\int vu'$</span>:</p> <p><span class="math-container">$$I(s)=f^2(t)e^{-st}\vert_{t=0}^{t \to + \infty} - I(s) + s \int_0^{+\infty} f^2(t)e^{-st} \, \mathrm d t$$</span></p> <p>Solve for <span class="math-container">$I(s)$</span>.</p>
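<p>Solving the last display for $I(s)$ gives $I(s)=\tfrac12\left(s\,\mathcal{L}\{f^2\}(s)-f(0)^2\right)$, assuming $f(t)^2e^{-st}\to 0$ as $t\to\infty$. A crude numerical sanity check of that identity (a sketch, using the hypothetical test function $f(t)=e^{-t}$, for which $\mathcal{L}\{ff'\}(s)=-1/(s+2)$ exactly):</p>

```python
import math

def laplace(g, s, T=30.0, n=50_000):
    """Crude trapezoidal approximation of the Laplace transform
    integral of g over [0, T] (the tail beyond T is negligible here)."""
    h = T / n
    total = 0.5 * (g(0.0) + g(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * h
        total += g(t) * math.exp(-s * t)
    return total * h

f  = lambda t: math.exp(-t)    # hypothetical test function
fp = lambda t: -math.exp(-t)   # its derivative f'(t)
s  = 2.0

lhs = laplace(lambda t: f(t) * fp(t), s)                         # L{f f'}(s)
rhs = 0.5 * (s * laplace(lambda t: f(t) ** 2, s) - f(0.0) ** 2)  # solved formula

assert abs(lhs - rhs) < 1e-6
assert abs(lhs - (-1.0 / (s + 2.0))) < 1e-6   # exact value for this f
```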
267,947
<p>i want to use least square to find x and y that minimize the result of the following function for a series of points (xi,yi) -> (x1,y1), (x2,y2),...:</p> <p>note: y = f(x)</p> <pre><code>E(x,y) = SUM (y - ((mi*x) - (mi*xi)))^2 i -------------------- 1 + |mi|^2 </code></pre> <p>where "mi" is the slope of f(xi, yi)</p> <p>now my math is very rusty, and i think to use least square to find x and y that minimize the function above i need to find the derivative of the function above in respect to x and y independently where the derivative of each is equal to 0. </p> <p>can someone please show me how to perform derivation on the above function in respect to x and y please? </p> <p>thanks!</p> <p>edit: I did check a few examples on the web, such as the one given below by potato, but here is where i have difficulty: </p> <ol> <li>these examples always look for <em>m</em> and <em>b</em>, thing is in my case, i already have m and b, what i need is the x that will give the best curve. in short, the curve im looking for has to be a point on the function f(x) with the optimized slope.</li> <li>This is due to my own shortcomings in calculus, but all examples give are only numerators, but the example i have above contains a denominator. Can someone help me understand how to derive with a denominator please?</li> </ol> <p>thanks!</p>
Ittay Weiss
30,953
<p>This is a very classical problem and you will find many available sources online. Anything elementary on parameter estimation will include an analysis of a problem very much like what you describe (if not identical).</p> <p>You do need to keep in mind that there are basically two common approaches (one more common than the other): Bayesian and non-Bayesian. Look for introductory material on "data analysis" for the latter, or add "Bayesian" for the former. </p> <p>A short summary of what you will find is that the two approaches quite often give very similar answers, but not always. This is, at times, the subject of heated debate in modern statistics. </p>
267,947
<p>i want to use least square to find x and y that minimize the result of the following function for a series of points (xi,yi) -> (x1,y1), (x2,y2),...:</p> <p>note: y = f(x)</p> <pre><code>E(x,y) = SUM (y - ((mi*x) - (mi*xi)))^2 i -------------------- 1 + |mi|^2 </code></pre> <p>where "mi" is the slope of f(xi, yi)</p> <p>now my math is very rusty, and i think to use least square to find x and y that minimize the function above i need to find the derivative of the function above in respect to x and y independently where the derivative of each is equal to 0. </p> <p>can someone please show me how to perform derivation on the above function in respect to x and y please? </p> <p>thanks!</p> <p>edit: I did check a few examples on the web, such as the one given below by potato, but here is where i have difficulty: </p> <ol> <li>these examples always look for <em>m</em> and <em>b</em>, thing is in my case, i already have m and b, what i need is the x that will give the best curve. in short, the curve im looking for has to be a point on the function f(x) with the optimized slope.</li> <li>This is due to my own shortcomings in calculus, but all examples give are only numerators, but the example i have above contains a denominator. Can someone help me understand how to derive with a denominator please?</li> </ol> <p>thanks!</p>
Qiaochu Yuan
232
<p>Here is an account of what I understand to be the Bayesian approach, which I learned from <a href="http://books.google.com/books/about/Probability_Theory.html?id=tTN4HuUNXjgC" rel="nofollow">Jaynes</a>. Having read Jaynes, I am currently deeply suspicious of non-Bayesian approaches. </p> <p>We'll restrict our attention to the slightly simpler case of a biased coin. Suppose the coin has some unknown probability $p$ of turning up heads, hence $1 - p$ of turning up tails. The first question is what your <a href="http://en.wikipedia.org/wiki/Prior_probability" rel="nofollow">priors</a> are regarding the distribution of possible values of $p$. For example, if you are already 100% confident that $p = \frac{1}{2}$, then no amount of evidence can shift this opinion. (This is why it is dangerous for Bayesians to be 100% confident of anything; see also <a href="http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/" rel="nofollow">0 And 1 Are Not Probabilities</a>.)</p> <p>For simplicity, we'll use a uniform prior: that is, we'll assume initially that $p$ is equally likely to be any real number in $[0, 1]$. Now suppose we flip the coin $n$ times and observe $k$ heads and $n - k$ tails. Then by <a href="http://en.wikipedia.org/wiki/Bayes%27_theorem" rel="nofollow">Bayes' theorem</a>, the posterior distribution of $p$ has probability density function</p> <p>$$\frac{ {n \choose k} x^k (1 - x)^{n-k} }{ \int_0^1 {n \choose k} x^k (1 - x)^{n-k} \, dx }.$$</p> <p>This is a <a href="http://en.wikipedia.org/wiki/Dirichlet_distribution" rel="nofollow">Dirichlet distribution</a>. From here, to obtain a <a href="http://en.wikipedia.org/wiki/Maximum_likelihood" rel="nofollow">maximum likelihood estimate</a> we need to find $x$ maximizing $x^k (1 - x)^{n-k}$. Taking logarithms and then derivatives, we find unsurprisingly that the maximum occurs when $x = \frac{k}{n}$. 
Note, however, that reporting only the maximum likelihood estimate is throwing away most of the information contained in the Dirichlet distribution, e.g. its variance. By doing more computations with the Dirichlet distribution you can write down, for example, the probability that $p$ is within a standard deviation of $\frac{k}{n}$. </p> <p>In practice, the uniform prior is also unlikely to be an accurate description of your actual prior knowledge. </p>
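<p>A small numerical illustration of this (a sketch; the data, $k=7$ heads in $n=10$ flips, is made up): grid-maximizing $x^k(1-x)^{n-k}$ recovers the MLE $k/n$, while the posterior mean under the uniform prior comes out as $(k+1)/(n+2)$, Laplace's rule of succession.</p>

```python
n, k = 10, 7   # made-up data: 7 heads in 10 flips

def likelihood(x):
    return x ** k * (1 - x) ** (n - k)

grid = [i / 10_000 for i in range(10_001)]

# maximum-likelihood estimate: maximize x^k (1-x)^(n-k) on a fine grid
mle = max(grid, key=likelihood)
assert abs(mle - k / n) < 1e-3

# posterior mean under the uniform prior, by numerical integration
# (the grid spacing cancels in the ratio, so no normalization is needed)
num = sum(x * likelihood(x) for x in grid)
den = sum(likelihood(x) for x in grid)
post_mean = num / den
assert abs(post_mean - (k + 1) / (n + 2)) < 1e-3   # Laplace's rule of succession
```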
2,652,662
<blockquote> <p>Let $f:[a,b] \to \mathbb{R}$ be a bounded function. Define $A_{f,x} : (0, \infty) \to \mathbb{R}$ by $$A_{f,x} (r) = \text{diam}\left( f((x-r,x+r) \cap [a,b])\right).$$ Show that for all $\epsilon &gt;0$ the set $\{x\in [a,b] : \lim_{r\to 0^+} A_{f,x} (r)\geq \epsilon \}$ is a closed set.</p> </blockquote> <p>I showed that $f$ is continuous at $x$ if and only if $\lim_{r\to 0^+} A_{f,x} (r) = 0$. I'm thinking about showing the set is closed by showing the image of the set is closed (since $f$ is continuous, the pre-image must be closed). Is this the right approach? If so/not, how should I proceed/conclude? (In this case, would the set of discontinuities of $f$ be a union of countably many closed sets?)</p>
drhab
75,923
<p>Let $C_1,C_2,C_3$ take values in $\{G,B\}$ such that e.g. $\{C_1=G\}$ denotes the event that the firstborn child is a girl. To keep things more general let $p$ denote the probability on a girl.</p> <p>1) To be found is: $$P(C_1=C_2=C_3=G\mid C_1=G)=P(C_2=C_3=G\mid C_1=G)=P(C_2=G\wedge C_3=G)=P(C_2=G)P(C_3=G)=p^2$$</p> <p>The second and third equality both rest on independence.</p> <p>2) To be found is:$$P(C_1=C_2=C_3=G\mid G\in\{C_1,C_2,C_3\})=$$$$P(C_1=C_2=C_3=G\wedge G\in\{C_1,C_2,C_3\})/P(G\in\{C_1,C_2,C_3\})=$$$$P(C_1=C_2=C_3=G)/[1-P(G\notin\{C_1,C_2,C_3\})]=\frac{p^3}{1-(1-p)^3}$$</p>
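<p>With $p=1/2$ these come out to $p^2=1/4$ and $p^3/(1-(1-p)^3)=1/7$, which can be confirmed by exhaustively enumerating the $2^3$ equally likely birth sequences (a quick sketch):</p>

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("GB", repeat=3))   # 8 equally likely sequences when p = 1/2

def prob(event):
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

all_girls  = lambda w: w == ("G", "G", "G")
first_girl = lambda w: w[0] == "G"
some_girl  = lambda w: "G" in w

# 1) P(all girls | firstborn is a girl) = p^2
assert prob(lambda w: all_girls(w) and first_girl(w)) / prob(first_girl) == Fraction(1, 4)

# 2) P(all girls | at least one girl) = p^3 / (1 - (1-p)^3)
assert prob(lambda w: all_girls(w) and some_girl(w)) / prob(some_girl) == Fraction(1, 7)
```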
971,997
<p>How do we find $$\Re\left[\int_0^{\large\frac{\pi}{2}} e^{\Large e^{i\theta}}d\theta\right]$$</p> <p>In the shortest and easiest possible manner?</p> <p>I cannot think of anything good.</p>
Random Variable
16,033
<p>The answer can be expressed in terms of the <a href="http://mathworld.wolfram.com/SineIntegral.html">sine integral</a>.</p> <p>Let $z=e^{i \theta}$.</p> <p>Then</p> <p>$$ \begin{align} \text{Re} \int_{0}^{\pi /2} e^{e^{i \theta}} \ d \theta = \text{Re} \frac{1}{i} \int_{C} \frac{e^{z}}{z} \ dz \end{align}$$ where $C$ is the portion of the unit circle in the first quadrant traversed counterclockwise.</p> <p>But since $\displaystyle \frac{e^{z}}{z}$ is analytic in a simply connected domain that contains $z=1$ and $z= i$,</p> <p>$$ \begin{align} \int_{C} \frac{e^{z}}{z} \ dz &amp;= \int_{1}^{i} \frac{e^{z}}{z} \ dz \\ &amp;= \text{Ei}(i) - \text{Ei}(1) \\ &amp;= \text{Ci}(1) + i \text{Si}(1) + \frac{i \pi}{2}- \text{Ei}(1) . \tag{1} \end{align} $$</p> <p>Therefore, $$\begin{align} \text{Re} \int_{0}^{\pi/2} e^{e^{i \theta}} \ d \theta &amp;= \text{Re} \frac{1}{i} \Big( \text{Ci}(1) + i \text{Si}(1) + \frac{i \pi}{2}- \text{Ei}(1)\Big) \\ &amp;= \text{Si}(1) + \frac{\pi}{2} \\ &amp;\approx 2.5168793972. \end{align}$$</p> <p>$(1)$ <a href="http://en.wikipedia.org/wiki/Exponential_integral#Exponential_integral_of_imaginary_argument">Exponential integral of imaginary argument</a></p>
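<p>Since $\operatorname{Re} e^{e^{i\theta}} = e^{\cos\theta}\cos(\sin\theta)$, the closed form $\text{Si}(1)+\pi/2$ is easy to double-check numerically (a sketch: Simpson's rule for the integral, and the power series $\text{Si}(1)=\sum_{n\ge 0}\frac{(-1)^n}{(2n+1)(2n+1)!}$):</p>

```python
import math

def simpson(g, a, b, n=2000):   # n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Re e^{e^{i*theta}} = e^{cos(theta)} * cos(sin(theta))
integral = simpson(lambda t: math.exp(math.cos(t)) * math.cos(math.sin(t)),
                   0.0, math.pi / 2)

# Si(1) from the power series of sin(t)/t, integrated term by term
si1 = sum((-1) ** n / ((2 * n + 1) * math.factorial(2 * n + 1)) for n in range(10))

assert abs(integral - (si1 + math.pi / 2)) < 1e-9
assert abs(integral - 2.5168793972) < 1e-8
```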
2,706,834
<p>I have the following exercise: </p> <p>Let $E$ be a normed space, $n\in \mathbb{N}$ and $\{x_1,\dots,x_n\}\subset E$ a family of linearly independent vectors. Show that for all $\alpha_1,\dots,\alpha_n\in \mathbb{K}$ there is a linear form $T\in E'$ such that $T(x_i)=\alpha_i$ for all $i=1,\dots,n$.</p> <p>My solution:</p> <p>I used the following theorem, derived from the Hahn-Banach Theorem.</p> <p>Theorem (Bounded linear functionals): Let $X$ be a normed space and let $x_0\neq 0$ be any element of $X$. Then there exists a bounded linear functional $T$ on $X$ such that</p> <p>$||T||=1$, $T(x_0)=||x_0||$.</p> <p>Since $\{x_1,\dots,x_n\}\subset E$ is a family of independent vectors, each $x_i\neq 0$ for $i=1,\dots,n$. In addition, $E$ is a normed space.</p> <p>Therefore there exists $T\in E'$ such that $||T||=1$ and $T(x_i)=||x_i||$. Letting $\alpha_i=||x_i||$, we get $T(x_i)=\alpha_i$.</p>
user284331
284,331
<p>Use the following version of the Hahn-Banach Extension Theorem (<em>An Introduction to Banach Space Theory</em>, Robert Megginson, page 75):</p> <blockquote> <p>Suppose that $f_{0}$ is a bounded linear functional on a subspace $Y$ of a normed space $X$. Then there is a bounded linear functional $f$ on all of $X$ such that $\|f\|=\|f_{0}\|$ and the restriction of $f$ to $Y$ is $f_{0}$.</p> </blockquote> <p>Now $Y=\left&lt;x_{1},...,x_{n}\right&gt;$ is a finite-dimensional subspace of $X$, so any linear functional on $Y$ is bounded. Given the $\alpha_{i}$, we let $f_{0}(x_{i})=\alpha_{i}$ and extend $f_{0}$ linearly to the whole of $Y$.</p> <p>Applying the Hahn-Banach Extension Theorem gives an $f\in X^{\ast}$ such that $f\big|_{Y}=f_{0}$, and we are done.</p>
83,648
<p><a href="http://reference.wolfram.com/language/ref/FullGraphics.html" rel="nofollow noreferrer"><code>FullGraphics</code></a> hasn't worked entirely for a long time and the situation appears to be getting worse instead of better. In <em>Mathematica</em> 10.0, 10.1, 11.3, 12.3 up to 13.1 a simple usage throws numerous errors and returns a graphic without ticks and with the wrong aspect ratio:</p> <pre><code>Plot[Sin[x], {x, 0, 10}] // FullGraphics </code></pre> <blockquote> <p>Axes::axes: {{False,False},{False,False}} is not a valid axis specification. &gt;&gt;</p> <p>Ticks::ticks: {Automatic,Automatic} is not a valid tick specification. &gt;&gt;</p> <p>(* etc. etc. *)</p> </blockquote> <p>This may be caused by or related to <a href="https://mathematica.stackexchange.com/questions/68937/">More Ticks::ticks errors in AbsoluteOptions in v10</a>.</p> <p>It seems that I must go back to version 5 functionality if I want this function to work right:</p> <pre><code>&lt;&lt; Version5`Graphics` (* load old graphics subsystem *) Plot[Sin[x], {x, 0, 10}] // FullGraphics </code></pre> <p><img src="https://i.stack.imgur.com/2ldzm.png" alt="enter image description here" /></p> <p>I wonder at this point if there is any indication that <code>FullGraphics</code> and perhaps also <code>AbsoluteOptions</code> are still supported? Or has something to the contrary has been written (Wolfram blog, a developer's comment, etc.) that indicates these should be removed from the documentation now?</p> <p>With <code>FullGraphics</code> broken is there a method that can take its place for producing proper <code>Graphics</code> directives that may be further manipulated and combined, not merely vectorized outlines?</p>
Alexey Popkov
280
<h2>News about <code>FullGraphics</code> and <code>AbsoluteOptions</code></h2> <p>This CW answer is intended for collecting the ongoing information about changes in <code>FullGraphics</code> and <code>AbsoluteOptions</code> through the versions.</p> <hr /> <blockquote> <p>I wonder at this point if there is any indication that <code>FullGraphics</code> and perhaps also <code>AbsoluteOptions</code> are still supported?</p> </blockquote> <p>Version 13.0 brought a <a href="https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn130.html" rel="nofollow noreferrer">&quot;substantially updated implementation&quot;</a> of <code>AbsoluteOptions</code>, and after a further update in version 13.1 it <a href="https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn131.html" rel="nofollow noreferrer">&quot;can resolve more 2D and 3D graphics options&quot;</a>. Detailed information is not provided in the documentation, but at least the <a href="https://mathematica.stackexchange.com/q/18034/280">long-standing bug</a> in determining <code>PlotRange</code> via <code>AbsoluteOptions</code> is now fixed. 
Also, the &quot;Incompatible Changes&quot; page has a <a href="https://reference.wolfram.com/language/tutorial/IncompatibleChanges.html#447496873" rel="nofollow noreferrer">notice for version 13.0</a>:</p> <blockquote> <p><code>AbsoluteOptions</code> has been reimplemented to be more accurate, and the form returned for a particular option may be different now.</p> </blockquote> <p>It is also worth mentioning that there is <a href="https://resources.wolframcloud.com/FunctionRepository/resources/GraphicsInformation" rel="nofollow noreferrer"><code>ResourceFunction[&quot;GraphicsInformation&quot;]</code></a>, which makes it possible to obtain the true absolute values for some <code>Graphics</code> options even in versions before 13.0.</p> <p>Concerning <code>FullGraphics</code>, <a href="https://mathematica.stackexchange.com/users/10397/rhermans">rhermans</a> <a href="https://mathematica.stackexchange.com/questions/83648/is-fullgraphics-an-abandoned-function-is-there-an-alternative/271611#comment457327_83648">reported here</a> on May 29, 2018:</p> <blockquote> <p>The developers are aware of the issue and there is an ongoing support case. CASE:3897155. – rhermans May 29, 2018 at 16:35</p> </blockquote> <p>But I don't see any improvement in the behavior of <code>FullGraphics</code> up to version 13.1. Probably this bug has low priority due to the low number of bug reports.</p> <p>It should also be added that version 12.3 introduced <code>AxisObject</code>, which greatly facilitates the reconstruction of plot axes from the information that can be obtained through <code>AbsoluteOptions</code>.</p>
2,250,546
<p>I am not sure how to prove this but this is what I have so far.</p> <p>$b= ak$ for some integer $k$ and $c= bl$ for some integer $l$</p> <p>Then........</p> <p>Here is where I am stuck, any help would be appreciated.</p>
Arthur
15,500
<p>... then $c = akl$, so $c^2 = bl\cdot akl = ab\cdot kl^2$</p>
2,250,546
<p>I am not sure how to prove this but this is what I have so far.</p> <p>$b= ak$ for some integer $k$ and $c= bl$ for some integer $l$</p> <p>Then........</p> <p>Here is where I am stuck, any help would be appreciated.</p>
dxiv
291,201
<p>Hint: &nbsp;prove first $m \mid n, \,p \mid q \implies m\cdot p \mid n \cdot q\,$, then combine the following:</p> <ul> <li><p>$a \mid b \implies ab \mid b^2$</p></li> <li><p>$b \mid c \implies b^2 \mid c^2$</p></li> </ul>
2,250,546
<p>I am not sure how to prove this but this is what I have so far.</p> <p>$b= ak$ for some integer $k$ and $c= bl$ for some integer $l$</p> <p>Then........</p> <p>Here is where I am stuck, any help would be appreciated.</p>
Jaideep Khare
421,580
<p>$$b=ak \tag1$$ </p> <p>and </p> <p>$$c=bl$$ from $(1)$</p> <p>$$c=(ak)l \implies c^2=a^2k^2l^2$$</p> <p>Since $b=ak$, we can write :</p> <p>$$ c^2=a^2k^2l^2=a \cdot(ak)\cdot kl^2=ab \cdot kl^2$$</p> <p>Since $$c^2= ab \times m ~~\text{where}~~ m=kl^2 \implies ab \mid c^2$$ </p>
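<p>The identity $c^2 = ab\cdot kl^2$ is also easy to confirm by brute force over small integers (a quick sketch):</p>

```python
# brute-force check: a | b and b | c  =>  ab | c^2, with explicit witness m = k*l^2
for a in range(1, 25):
    for k in range(1, 25):       # b = a*k
        b = a * k
        for l in range(1, 25):   # c = b*l
            c = b * l
            m = k * l * l
            assert c * c == a * b * m       # c^2 = ab * m exactly
            assert (c * c) % (a * b) == 0   # hence ab divides c^2
```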
2,784,531
<p>Let <span class="math-container">$X = (X_1, X_2, \dots, X_n)$</span> be jointly Gaussian with mean vector <span class="math-container">$\mu$</span> and covariance matrix <span class="math-container">$\Sigma$</span>. Let <span class="math-container">$S$</span> be their sum.</p> <p>I know that the distribution of each <span class="math-container">$X_i \mid S = s$</span> is also Gaussian.</p> <p>When <span class="math-container">$n=2$</span>, I know that <span class="math-container">$$ E\left( X_1\mid S = s \right) = s \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2} $$</span> and <span class="math-container">$$ V\left(X_1\mid S = s \right) = \frac{\sigma_1^2\sigma_2^2}{\sigma_1^2 + \sigma_2^2} $$</span> (see <a href="https://stats.stackexchange.com/a/17464">here</a> and <a href="https://stats.stackexchange.com/q/9071">here</a>). I could probably work out analogous expressions for an arbitrary <span class="math-container">$n$</span> if I sat down with a pencil and paper and worked at it for a bit.</p> <p>What I want to know is, <strong>what is the distribution of <span class="math-container">$X$</span> given <span class="math-container">$S = s$</span>?</strong></p> <p>I know that this can't be Gaussian, since the sum is bounded. It's clearly not Dirichlet or anything Dirichlet-esque, since the marginal distributions are Gaussian. But beyond that I don't have a clue.</p>
jlewk
484,640
<p>Let A be a deterministic matrix of size <span class="math-container">$n\times n$</span> and let <span class="math-container">$v$</span> be a vector of size <span class="math-container">$n$</span>. The random vector <span class="math-container">$(AX, S)$</span> is jointly normal. The idea is to construct both</p> <ol> <li>a matrix <span class="math-container">$A$</span> such that <span class="math-container">$AX$</span> is independent from <span class="math-container">$S$</span>, and</li> <li>a vector <span class="math-container">$v$</span> such that <span class="math-container">$X = AX + Sv$</span>.</li> </ol> <p>Why? Then by independence we have a crystal-clear description of the distribution of <span class="math-container">$X$</span> given <span class="math-container">$S=s$</span>: The distribution of <span class="math-container">$X$</span> given <span class="math-container">$S=s$</span> is normal <span class="math-container">$$N(sv + A\mu, A\Sigma A^T)$$</span>.</p> <p>Now let's find such <span class="math-container">$A$</span> and <span class="math-container">$v$</span>.</p> <ul> <li>Since <span class="math-container">$(AX,S)$</span> are jointly normal, <span class="math-container">$AX$</span> is independent from <span class="math-container">$S$</span> if and only if their covariance matrix is zero, that is, <span class="math-container">$E[A(X-\mu)(S - E[S])]=0$</span>. If <span class="math-container">$u=(1,...,1)\in R^n$</span>, this is equivalent to <span class="math-container">$E[A(X-\mu)(X-\mu)^T u] = A\Sigma u= 0$</span>.</li> <li>For <span class="math-container">$v$</span>, the relationship <span class="math-container">$X=AX +vS$</span> is satisfied provided that <span class="math-container">$I_n = A + vu^T$</span>, where again <span class="math-container">$u$</span> is the vector <span class="math-container">$(1,...,1)$</span>. 
Since <span class="math-container">$A\Sigma u =0$</span>, multiplying this by <span class="math-container">$\Sigma u$</span> implies that <span class="math-container">$$v = \frac{1}{u^T\Sigma u} \Sigma u.$$</span> Now set <span class="math-container">$$A = I_n - v u^T.$$</span> One readily verifies that such a choice of <span class="math-container">$A$</span> indeed satisfies <span class="math-container">$A\Sigma u = 0$</span>, and we have constructed <span class="math-container">$A$</span> and <span class="math-container">$v$</span> that satisfy the requirements.</li> </ul> <hr /> <h3>More Generally: the distribution of <span class="math-container">$X$</span> given <span class="math-container">$UX=b$</span> for some matrix <span class="math-container">$U$</span></h3> <p>If <span class="math-container">$U$</span> is a <span class="math-container">$k\times n$</span> matrix of rank <span class="math-container">$k$</span> and we would like to find the distribution of <span class="math-container">$X$</span> conditionally on <span class="math-container">$UX$</span>, the same technique can be extended.</p> <p>(In the above example, <span class="math-container">$U$</span> is a <span class="math-container">$1\times n$</span> matrix equal to <span class="math-container">$u^T$</span>.)</p> <p>We proceed similarly: we look for a deterministic matrix <span class="math-container">$A$</span> and an <span class="math-container">$n\times k$</span> matrix <span class="math-container">$C$</span> such that</p> <ol> <li><span class="math-container">$AX$</span> and <span class="math-container">$UX$</span> are independent, and</li> <li><span class="math-container">$I_n = A + CU$</span> so that <span class="math-container">$X = AX + CUX$</span> always holds.</li> </ol> <p>Why? 
If we can find such matrices <span class="math-container">$A$</span> and <span class="math-container">$C$</span>, then the distribution of <span class="math-container">$X$</span> given <span class="math-container">$UX=b$</span> is normal <span class="math-container">$$N(A\mu + Cb, A\Sigma A^T).$$</span></p> <p>Since <span class="math-container">$AX$</span> and <span class="math-container">$UX$</span> are jointly Normal, the first condition holds if and only if <span class="math-container">$E[A(X-\mu)(U(X-\mu))^T] = A\Sigma U^T = 0$</span>. Multiplying the second condition by <span class="math-container">$\Sigma U^T$</span>, it must be that <span class="math-container">$\Sigma U^T = C U \Sigma U^T$</span>, hence <span class="math-container">$$C = \Sigma U^T (U\Sigma U^T)^{-1} .$$</span></p> <p>Finally, define <span class="math-container">$$A = I_n - CU,$$</span> and check that this choice of <span class="math-container">$A$</span> and <span class="math-container">$C$</span> indeed satisfies <span class="math-container">$A\Sigma U^T = 0$</span> and the above requirements.</p> <p>(By the way, the matrix <span class="math-container">$U\Sigma U^T$</span> is indeed invertible whenever <span class="math-container">$U$</span> is of full rank <span class="math-container">$k&lt;n$</span> and <span class="math-container">$\Sigma$</span> is invertible. The matrix <span class="math-container">$\Sigma$</span> is invertible if and only if <span class="math-container">$X$</span> has a continuous distribution in <span class="math-container">$R^n$</span> in the sense that it has a density with respect to the Lebesgue measure in <span class="math-container">$R^n$</span>.)</p>
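<p>As a concrete sanity check (a sketch with made-up numbers, in plain Python with no linear-algebra library): for $n=2$, $\mu=0$ and a diagonal covariance, the formulas $v=\Sigma u/(u^T\Sigma u)$ and $A=I_n-vu^T$ reproduce the familiar answers $E(X_1\mid S=s)=s\,\sigma_1^2/(\sigma_1^2+\sigma_2^2)$ and $V(X_1\mid S=s)=\sigma_1^2\sigma_2^2/(\sigma_1^2+\sigma_2^2)$ quoted in the question.</p>

```python
s1sq, s2sq = 2.0, 3.0                  # made-up variances sigma_1^2, sigma_2^2
Sigma = [[s1sq, 0.0], [0.0, s2sq]]
u = [1.0, 1.0]
s = 5.0                                # condition on S = s (and take mu = 0)

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

uSu = sum(u[i] * matvec(Sigma, u)[i] for i in range(2))   # u^T Sigma u
v = [x / uSu for x in matvec(Sigma, u)]                   # v = Sigma u / (u^T Sigma u)
A = [[(1.0 if i == j else 0.0) - v[i] * u[j] for j in range(2)] for i in range(2)]

# A Sigma u = 0, so AX is independent of S, and X | S=s ~ N(s v, A Sigma A^T)
assert all(abs(t) < 1e-12 for t in matvec(matmul(A, Sigma), u))

AT = [[A[j][i] for j in range(2)] for i in range(2)]      # transpose of A
ASAT = matmul(matmul(A, Sigma), AT)                       # A Sigma A^T

cond_mean_1 = s * v[0]
cond_var_1 = ASAT[0][0]
assert abs(cond_mean_1 - s * s1sq / (s1sq + s2sq)) < 1e-12
assert abs(cond_var_1 - s1sq * s2sq / (s1sq + s2sq)) < 1e-12
```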
1,072,561
<p>Can you help me with this question?</p> <p>About the map $f$ and the vector space $\mathbf{V}=\mathbb{Z}_2^4$ we know the following:</p> <p>$f \circ f = id_V$,$~~f $ $ \left(\begin{array}{ccc} 1\\ 0\\ 1\\ 0\\ \end{array}\right)= \left(\begin{array}{ccc} 1\\ 1\\ 0\\ 0\\ \end{array}\right) $, $~~f $ $ \left(\begin{array}{ccc} 0\\ 1\\ 0\\ 1\\ \end{array}\right)= \left(\begin{array}{ccc} 1\\ 1\\ 0\\ 1\\ \end{array}\right) $</p> <p>$id_V$ is the identity. </p> <p>Find $f~((x_1,x_2,x_3,x_4)^T).$</p> <p>I would be grateful for any kind of advice.</p> <p>Thanks</p>
John McGee
170,591
<p>Create a matrix $M$ whose columns are the $b_{i}$, then $T(u)=Mu$ performs the transformation. It is now easy to show that $T$ has the properties of a linear transformation.</p>