406,437
<p>Calculate the extreme value of $f(x,y)=x^{2}+y^{2}+xy+\dfrac{1}{x}+\dfrac{1}{y}$.</p> <p>Please help me.</p>
rschwieb
29,335
<p>Here's a slightly different example. If you are comfortable with doing algebra with polynomials, then I think you will find it easy to understand.</p> <p>Take any finite group $G$ (say of order $n$). Now, you can formally make a set $\{\sum_{g\in G} \alpha_g g\mid \alpha_g\in \Bbb R\}$. </p> <p>Since you know how to multiply the elements of $G$, you can extend the multiplication to this set by requiring linearity to hold. So, for example, if you have $a,b,c,d\in G$ and you want to multiply $(3a+2b)(c-5d)$, you would just distribute: $3ac-15ad+2bc-10bd$. Then $ac,ad,bc,bd$ would be multiplied to be elements of $G$, and you would wind back up in the set. Addition is just done by adding "like terms."</p> <p>With these operations, the set is called the <a href="http://en.wikipedia.org/wiki/Group_ring">group ring</a> $\Bbb R[G]$. In fact, $G$ doesn't have to be finite, but if $G$ is infinite then you need to use only <em>finite</em> sums in the set I gave above. Even more, $G$ doesn't have to be a group; it just needs to have an associative multiplication defined on it, so it could be just a monoid or semigroup.</p> <p>A monoid can informally be described by thinking of a group which does not require the existence of inverses. So, if $x$ is an indeterminate, then $S=\{1,x,x^2,x^3\dots\}$ is a monoid, because $x^ix^j=x^{i+j}$ is an associative multiplication. The <em>monoid</em> ring $\Bbb R[S]$ for this set is something you are really familiar with: it is usually denoted $\Bbb R [x]$ and called "the ring of polynomials over $\Bbb R$"! </p> <p>For another thing, you can use any ring you want besides $\Bbb R$! You could even use $\Bbb Z$ if you so chose, or whatever other ring you wanted.</p> <p>I'm confident that if you stretch your intuition for multiplying polynomials by replacing the powers of $x$ with elements of a group (or monoid), you will quickly grasp what a group ring is.</p> <hr> <p>Anyhow, let's get to the point. 
If you choose $G$ to be a group that is not Abelian (that is, there exist $g,h\in G$ such that $gh\neq hg$) then you know for sure $\Bbb R[G]$ is not commutative: in the ring, $gh\neq hg$.</p> <hr> <p>If you would like to experiment with the smallest group which isn't commutative, you'll have to begin with the <a href="http://en.wikipedia.org/wiki/Dihedral_group_of_order_6">symmetries of a triangle</a> $S_3=\{1,\sigma,\sigma^2,\tau,\sigma\tau,\sigma^2\tau\}$. To review, the multiplication obeys the relations $\sigma^3=1=\tau^2$, and $\tau\sigma=\sigma^2\tau$.</p> <p>Pick two elements $p,q$ of $\Bbb Z[S_3]$. Compute $\sigma p$ and $p\sigma$. Compute $p+q$ and $p-q$ and $pq$. Have fun!</p>
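If you want to experiment on a computer instead of on paper, here is a minimal Python sketch of $\Bbb Z[S_3]$ (my own illustration, not part of the answer: elements of $S_3$ are permutation tuples, and a group-ring element is a dict from group elements to integer coefficients):

```python
def compose(p, q):
    """Product of permutations (tuples): apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def radd(u, v):
    """Add two group-ring elements (dicts: group element -> coefficient)."""
    w = dict(u)
    for g, c in v.items():
        w[g] = w.get(g, 0) + c
    return {g: c for g, c in w.items() if c != 0}

def rmul(u, v):
    """Multiply by distributing and collecting like terms, as described above."""
    w = {}
    for g, a in u.items():
        for h, b in v.items():
            gh = compose(g, h)
            w[gh] = w.get(gh, 0) + a * b
    return {g: c for g, c in w.items() if c != 0}

e, s, t = (0, 1, 2), (1, 2, 0), (1, 0, 2)  # identity, sigma (3-cycle), tau (a flip)
print(rmul({s: 1}, {t: 1}) == rmul({t: 1}, {s: 1}))  # False: Z[S_3] is not commutative
```

Multiplication distributes exactly as in the $(3a+2b)(c-5d)$ example above, and the final print witnesses the non-commutativity the answer points out.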
4,085,635
<p>What is the volume of an n-dimensional cube, where the length of each side is <span class="math-container">$a$</span>? How do I solve this problem?</p>
saulspatz
235,128
<p>For an intuitive explanation, imagine that the sides are thin hollow metal rods, with a chain running through all of them and joining them. (The chain forms a closed loop.)</p> <p>Hold the longest rod parallel to the ground. The other rods hang down and close the polygon.</p>
209,892
<p>From the values:</p> <pre><code>{57.02, 71.04, 87.03, 97.05, 99.07, 101.05, 103.01, 113.08, 114.04, 115.03, 128.06, 128.09, 129.04, 131.04, 137.06, 147.07, 156.10, 163.03, 186.08} </code></pre> <p>I would like to find all possible combinations of 3 values that have the sum of roughly 344.25 (+/- 0.05 would be ok). I have tried:</p> <pre><code>IntegerPartitions[344.2, {3}, {57.0, 71.0, 87.0, 97.1, 99.1, 101.1, 103.0, 113.1, 114.0, 115.0, 128.1, 128.1, 129.0, 131.0, 137.1, 147.1, 156.1, 163.0, 186.1}] </code></pre> <p>though <code>IntegerPartitions</code> only seems to accept whole numbers. Any help would be appreciated.</p>
Roman
26,598
<p>Use exact fractions for <code>IntegerPartitions</code>:</p> <pre><code>L = Round[{57.02, 71.04, 87.03, 97.05, 99.07, 101.05, 103.01, 113.08, 114.04, 115.03, 128.06, 128.09, 129.04, 131.04, 137.06, 147.07, 156.10, 163.03, 186.08}, 1/100]; Join @@ Table[IntegerPartitions[i, ∞, L], {i, 344, 345, 1/100}] (* {{11503/100, 11503/100, 2851/25}, {11503/100, 11503/100, 2851/50, 2851/50}, {10301/100, 1941/20, 8703/100, 2851/50}, {3226/25, 6403/50, 8703/100}, {3226/25, 2851/25, 2021/20}, {3226/25, 2021/20, 2851/50, 2851/50}, {3226/25, 8703/100, 1776/25, 2851/50}, {6403/50, 11503/100, 2021/20}, {11503/100, 2021/20, 1776/25, 2851/50}, {11503/100, 8703/100, 1776/25, 1776/25}, {4652/25, 2021/20, 2851/50}, {4652/25, 8703/100, 1776/25}, {3276/25, 2851/25, 9907/100}, {3276/25, 9907/100, 2851/50, 2851/50}, {6403/50, 2827/25, 10301/100}, {2827/25, 10301/100, 1776/25, 2851/50}, {1561/10, 3276/25, 2851/50}, {3276/25, 1776/25, 1776/25, 1776/25}, {3226/25, 12809/100, 8703/100}, {2827/25, 8703/100, 8703/100, 2851/50}, {10301/100, 9907/100, 1776/25, 1776/25}, {12809/100, 11503/100, 2021/20}, {2021/20, 9907/100, 8703/100, 2851/50}, {9907/100, 8703/100, 8703/100, 1776/25}, {1561/10, 2021/20, 8703/100}, {12809/100, 2827/25, 10301/100}, {2021/20, 2021/20, 1776/25, 1776/25}} *) </code></pre>
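As a cross-check outside Mathematica (my own addition; note that, unlike IntegerPartitions, this scan never reuses a list entry twice), a brute-force search over 3-element combinations looks like this:

```python
from itertools import combinations

values = [57.02, 71.04, 87.03, 97.05, 99.07, 101.05, 103.01, 113.08,
          114.04, 115.03, 128.06, 128.09, 129.04, 131.04, 137.06,
          147.07, 156.10, 163.03, 186.08]

def triples_near(vals, target, tol):
    """All 3-element combinations (no value reused) with sum within tol of target."""
    return [c for c in combinations(vals, 3) if abs(sum(c) - target) <= tol]

for c in triples_near(values, 344.25, 0.15):
    print(c, round(sum(c), 2))
```

With a very tight window this particular list may return nothing; widening the tolerance (as the answer's loop over 344 to 345 effectively does) surfaces the near-misses.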
2,052,826
<p>I've been working on some quantum mechanics problems and arrived to this one where I have to deal with subscripts. I got stuck doing this: I have $\epsilon_{imk}\epsilon_{ikn}=\delta_{mk}\delta_{kn}-\delta_{mn}\delta_{kk}$. But then I went to check and $\delta_{mk}\delta_{kn}-\delta_{mn}\delta_{kk}$ is equal to $\delta_{mn}-3\delta_{mn}$. Why is that so? Thank you in advance.</p>
HBR
396,575
<p>The Levi-Civita symbol $\epsilon_{ikl}$ is defined as follows $$\epsilon_{ikl} = \left\{ \begin{array}{cl} +1 &amp; \text{if } (i,k,l) \text{ is an even permutation of } (1,2,3)\\ -1 &amp; \text{if } (i,k,l) \text{ is an odd permutation of } (1,2,3)\\ 0 &amp; \text{otherwise (a repeated index)} \end{array}\right.$$</p> <p>From this definition, let's start with the contraction of $\epsilon_{ikl}$ over its first index: $$\epsilon_{ikl}\epsilon_{imn}=\delta_{km}\delta_{ln}-\delta_{kn}\delta_{lm}\tag1$$</p> <p>where $\delta_{ik}$ is the Kronecker delta (the identity matrix), a symmetric isotropic tensor defined as follows $$\delta_{ik} = \left\{ \begin{array}{cl} 1 &amp; \text{if } i= k\\ 0 &amp; \text{otherwise} \end{array}\right.$$</p> <p>Contracting $(1)$ once more, $\textit{i.e.}$ multiplying it by $\delta_{km}$, we have $$\delta_{km}\epsilon_{ikl}\epsilon_{imn}=\epsilon_{ikl}\epsilon_{ikn}=\delta_{km}(\delta_{km}\delta_{ln}-\delta_{kn}\delta_{lm})=\delta_{kk}\delta_{ln}-\delta_{kn}\delta_{kl}=\delta_{kk}\delta_{ln}-\delta_{ln}\tag2$$ Recall that $\delta_{lk} = \delta_{kl}$ by symmetry and $\delta_{km}\delta_{km}=\delta_{kk}=\delta_{mm}$, since repeated indices indicate summation over that index.</p> <p>Now consider the term $\delta_{kk}$: this quantity is a scalar, the trace of the identity matrix in an $n$-dimensional space, so $\delta_{kk}=n$ and $(2)$ becomes $$\epsilon_{ikl}\epsilon_{ikn}=(\delta_{kk}-1)\delta_{ln}=(n-1)\delta_{ln}$$ Contracting once more yields the identity $$\epsilon_{ikl}\epsilon_{ikl}=n(n-1)$$</p> <p>In your case $n=3$.</p> <p>Hope this helps.</p>
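A quick numerical sanity check of identities $(1)$ and $(2)$ for $n=3$ (my own sketch, not part of the answer; the symbol is built by counting inversions):

```python
from itertools import permutations

n = 3
eps = {}  # the Levi-Civita symbol on indices 0..n-1
for p in permutations(range(n)):
    inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
    eps[p] = -1 if inv % 2 else 1  # sign via the number of inversions

e = lambda i, k, l: eps.get((i, k, l), 0)  # 0 when an index repeats
d = lambda a, b: 1 if a == b else 0        # Kronecker delta

# identity (1): sum_i e_{ikl} e_{imn} = d_{km} d_{ln} - d_{kn} d_{lm}
for k in range(n):
    for l in range(n):
        for m in range(n):
            for q in range(n):
                lhs = sum(e(i, k, l) * e(i, m, q) for i in range(n))
                assert lhs == d(k, m) * d(l, q) - d(k, q) * d(l, m)

# identity (2): sum_{i,k} e_{ikl} e_{ikn} = (n-1) d_{ln}
double = [[sum(e(i, k, l) * e(i, k, q) for i in range(n) for k in range(n))
           for q in range(n)] for l in range(n)]
print(double)  # (n-1) = 2 on the diagonal, 0 elsewhere
```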
1,781,227
<p>For a simple x and y plane (2 dimensional), to find the distance between two points we would use the formula </p> <p>$$ a^2 +b^2 = c^2 $$</p> <p>For a slightly more complicated space; x, y and z (3 dimensional), to find the distance between two points we would use the formula</p> <p>$$ d^2 = a^2 + b^2 + c^2 $$</p> <p>My question is this: is it possible to use Pythagoras's theorem to find the distance/magnitude/modulus between two points in 4 dimensions? And what form would it take? Using graphs of triangles and cuboids, I have verified the 2-dimensional and 3-dimensional uses of the Pythagorean theorem, but since I do not understand the 4th dimension entirely (for convenience and understanding, I assume that it is time), I cannot picture how to start nor work this out.</p> <p>Note: I am a high school student, and new here. If I have not provided enough information or something is unclear, please comment and I will try to change it. Thank you in advance.</p>
poetasis
546,655
<p>The Pythagorean theorem works for any number of dimensions greater than 1. We can find examples by constructing n-tuples made of multiple triples. We begin by finding a leg <span class="math-container">$A$</span> of a new triple to match the hypotenuse <span class="math-container">$C$</span> of any previous triple.</p> <p>Here we substitute and solve <span class="math-container">$C=A=m^2-n^2\implies n=\sqrt{m^2-C}$</span> where <span class="math-container">$$\lceil\sqrt{C+1}\space\rceil\le m\le \bigl\lceil\frac{C}{2}\bigr\rceil$$</span></p> <p>If any <span class="math-container">$m$</span> yields a positive integer <span class="math-container">$n$</span>, we have <span class="math-container">$(m,n)$</span> for a Pythagorean triple.</p> <p>The simplest triple is <span class="math-container">$3,4,5$</span> and we let <span class="math-container">$n=\sqrt{m^2-5}$</span> where <span class="math-container">$$m_{min}=\lceil\sqrt{5+1}\space\rceil=3\quad m_{max}=\bigl\lceil\frac{5}{2}\bigr\rceil=3$$</span> This one was easy: <span class="math-container">$n=\sqrt{m^2-C}=\sqrt{3^2-5}=2\quad f(3,2)=(5,12,13)$</span></p> <p>Now we have <span class="math-container">$3^2+4^2+12^2=13^2$</span> and the logic is that, since <span class="math-container">$5^2=3^2+4^2$</span>, we can substitute <span class="math-container">$\sqrt{3^2+4^2}$</span> for the <span class="math-container">$5$</span> leg in <span class="math-container">$5,12,13$</span>.</p> <p>Let's try <span class="math-container">$(21,20,29)$</span>. 
We have <span class="math-container">$C=29\implies 6\le m \le 15$</span> and we find <em>only</em> <span class="math-container">$f(15,14)=(29,420,421)$</span></p> <p>If we continue the process begun with <span class="math-container">$(3,4,5)$</span> we can find <span class="math-container">$(3,4,5)\rightarrow(5,12,13)\rightarrow(13,84,85)\rightarrow(85,132,157)\rightarrow(157,12324,12325)\text{ and so on.}$</span></p> <p>So far, we have <span class="math-container">$3^2+4^2+12^2+84^2+132^2+12324^2=12325^2$</span></p>
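The search described above is easy to mechanize. This is my own sketch (the helper name pyth_from_C is made up), reproducing $f(3,2)=(5,12,13)$, the "only" claim for $C=29$, and the final sum-of-squares identity:

```python
import math

def pyth_from_C(C):
    """All triples (C, 2mn, m^2 + n^2) with m^2 - n^2 = C, scanning the m-range above."""
    sols = []
    for m in range(math.isqrt(C) + 1, C // 2 + 2):
        n2 = m * m - C
        n = math.isqrt(n2)
        if n > 0 and n * n == n2:
            sols.append((C, 2 * m * n, m * m + n * n))
    return sols

print(pyth_from_C(5))   # [(5, 12, 13)]
print(pyth_from_C(29))  # [(29, 420, 421)] -- the 'only' solution claimed above
legs = [3, 4, 12, 84, 132, 12324]
print(sum(x * x for x in legs) == 12325 ** 2)  # True
```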
4,063,337
<p>In an exercise I'm asked the following:</p> <blockquote> <p>a) Find a formula for <span class="math-container">$\int (1-x^2)^n dx$</span>, for any <span class="math-container">$n \in \mathbb N$</span>.</p> <p>b) Prove that, for all <span class="math-container">$n \in \mathbb N$</span>: <span class="math-container">$$\int_0^1(1-x^2)^n dx = \frac{2^{2n}(n!)^2}{(2n + 1)!}$$</span></p> </blockquote> <p>I used the binomial theorem in <span class="math-container">$a$</span> and got:</p> <p><span class="math-container">$$\int (1-x^2)^n dx = \sum_{k=0}^n \left( \begin{matrix} n \\ k \end{matrix} \right) (-1)^k \ \frac{x^{2k + 1}}{2k+1} \ \ \ + \ \ C$$</span></p> <p>and so in part (b) I got:</p> <p><span class="math-container">$$\int_0^1 (1-x^2)^n dx = \sum_{k=0}^n \left( \begin{matrix} n \\ k \end{matrix} \right) \ \frac{(-1)^k}{2k+1}$$</span></p> <p>I have no clue how to arrive at the expression that I'm supposed to reach. How can I solve this?</p>
J.G.
56,861
<p>Evaluate <span class="math-container">$f_n(x):=\int_0^x(1-t^2)^ndt$</span> with integration by parts viz. <span class="math-container">$u=(1-t^2)^n,\,v=t$</span> so<span class="math-container">$$f_n=x(1-x^2)^n+2n\int_0^xt^2(1-t^2)^{n-1}dt=x(1-x^2)^n+2n(f_{n-1}-f_n),$$</span>using <span class="math-container">$t^2(1-t^2)^{n-1}=(1-t^2)^{n-1}-(1-t^2)^n$</span>. Definite integration on <span class="math-container">$[0,\,1]$</span> kills the boundary term, so writing <span class="math-container">$I_n:=f_n(1)$</span> this simplifies to <span class="math-container">$I_n=\frac{2n}{2n+1}I_{n-1}$</span>, making b) tractable by induction.</p>
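One can sanity-check the closed form in b) with exact rational arithmetic (my own sketch, iterating the reduction $I_n=\frac{2n}{2n+1}I_{n-1}$ that integration by parts yields on $[0,1]$):

```python
from fractions import Fraction
from math import factorial

def I_closed(n):
    """The claimed closed form 2^(2n) (n!)^2 / (2n+1)!."""
    return Fraction(2 ** (2 * n) * factorial(n) ** 2, factorial(2 * n + 1))

I = Fraction(1)  # I_0 = integral of 1 over [0, 1]
for n in range(1, 11):
    I = Fraction(2 * n, 2 * n + 1) * I  # the reduction from integration by parts
    assert I == I_closed(n)
print(I_closed(1), I_closed(2))  # 2/3 8/15
```

The base case $I_0=1$ starts the induction, and the exact match for $n=1,\dots,10$ mirrors the inductive step.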
2,468,155
<p>This problem is from Challenge and Thrill of Pre-College Mathematics: Prove that $$ (a^3+b^3)^2\le (a^2+b^2)(a^4+b^4)$$</p> <p>It would be really great if somebody could come up with a solution to this problem.</p>
Donald Splutterwit
404,247
<p>\begin{eqnarray*} \color{blue}{a^2b^2(a-b)^2} \geq 0 \\ \color{red}{a^6} +\color{blue}{a^4b^2+a^2 b^4} +\color{red}{b^6} \geq \color{red}{a^6} +\color{blue}{2 a^3b^3} +\color{red}{b^6} \\ (a^2+b^2)(a^4+b^4) \geq (a^3+b^3)^2. \end{eqnarray*}</p>
626,095
<blockquote> <p>$R = \mathbb{F}_3[x]/\langle X^3+X^2+1\rangle$ and $\alpha=[X]$ in $R$. How do you prove that the group $R^*$ is not cyclic?</p> </blockquote> <p>We have shown that $\alpha$ is a unit in $R$ with order $8$ and that $\alpha^4$ and $-\alpha^4$ are two different elements in $R^*$ both with order 2.</p>
rschwieb
29,335
<p>Hint:</p> <p>$x^3+x^2+1$ factors into the irreducibles $(x-1)$ and $(x^2+2x+2)$ over $\Bbb F_3$.</p> <p>By the Chinese Remainder Theorem, $R\cong \frac{\Bbb F_3[x]}{(x-1)}\times \frac{\Bbb F_3[x]}{(x^2+2x+2)}$.</p> <p>Can you see what the two pieces look like, and how you can use this to see the units of $R$?</p>
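A brute-force check of the hint (my own sketch, not part of the answer): enumerate $\Bbb F_3[x]/(x^3+x^2+1)$ directly as coefficient triples, find its units, and observe that $|R^*|=16$ while the maximal element order is $8$, so $R^*$ cannot be cyclic.

```python
from itertools import product

MOD = 3  # work in F_3[x]/(x^3 + x^2 + 1), i.e. x^3 = 2x^2 + 2 (mod 3)

def mul(a, b):
    """Multiply coefficient triples (c0, c1, c2) modulo 3 and modulo x^3 + x^2 + 1."""
    full = [0] * 5
    for i in range(3):
        for j in range(3):
            full[i + j] = (full[i + j] + a[i] * b[j]) % MOD
    for deg in (4, 3):  # x^deg = 2x^(deg-1) + 2x^(deg-3)
        c, full[deg] = full[deg], 0
        full[deg - 1] = (full[deg - 1] + 2 * c) % MOD
        full[deg - 3] = (full[deg - 3] + 2 * c) % MOD
    return tuple(full[:3])

one = (1, 0, 0)
elems = [p for p in product(range(MOD), repeat=3) if p != (0, 0, 0)]
units = [u for u in elems if any(mul(u, v) == one for v in elems)]

def order(u):
    p, k = u, 1
    while p != one:
        p, k = mul(p, u), k + 1
    return k

orders = [order(u) for u in units]
print(len(units), max(orders))  # 16 units, maximal order 8 < 16: R* is not cyclic
```

This matches the CRT picture $R\cong\Bbb F_3\times\Bbb F_9$: the unit group is $\Bbb Z_2\times\Bbb Z_8$, of order $16$ and exponent $8$.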
4,646,650
<p>I believe there will be values of <span class="math-container">$x$</span> for which the inequality <span class="math-container">$x^3 - 2x + 2 \ge 3 - x^2$</span> is true and values for which it is not true, because:</p> <ul> <li><em>LHS asymptotically increases but RHS decreases for increasingly positive values of <span class="math-container">$x$</span></em></li> <li><em>LHS asymptotically decreases faster than RHS for increasingly negative values of <span class="math-container">$x$</span></em></li> </ul> <p>But I don't know how to reason much further about <span class="math-container">$x^3 - 2x + 2 \ge 3 - x^2$</span>.<br></p> <p>My idea is that if I get the inequality in a form I can reason about, I can determine what values of <span class="math-container">$x$</span> will satisfy the condition. I can reason about</p> <p><span class="math-container">$$\frac{a}{b} &gt; 0$$</span></p> <p>since either <span class="math-container">$a,b &gt; 0$</span> or <span class="math-container">$a,b &lt; 0$</span> for <span class="math-container">$\frac{a}{b} &gt; 0$</span>.</p> <p>I can also reason about the signs of <span class="math-container">$a,b$</span> if they are expressed as a product of factors, since any even multiple of negative factors gives a positive (or zero) product. I think this will put me in a good spot to reason about what conditions must be met for the signs to satisfy the inequality <span class="math-container">$\frac{a}{b} &gt; 0$</span> (although in my case, I am including zero).</p> <p>Thus, my plan is to transform the inequality into the form LHS = <code>a quotient of factorised terms</code> and RHS = 0. But I'm not sure how.</p>
Rehman
1,136,062
<p><span class="math-container">$$x^3+x^2-2x-1 \ge 0$$</span></p> <hr /> <p>First, let's solve the equation: <span class="math-container">$$x^3+x^2-2x-1=0$$</span> Let's say <span class="math-container">$x=y-\frac{1}{3}$</span>: <span class="math-container">$$y^3-\frac{7}{3}y-\frac{7}{27}=0$$</span> We know that <span class="math-container">$(m+n)^3-3mn(m+n)-(m^3+n^3)=0$</span>. Based on this, let's say <span class="math-container">$y=m+n$</span>. Now let's solve the following system of equations: <span class="math-container">$$\begin{cases} 3mn=\frac{7}{3} \\ m^3+n^3=\frac{7}{27} \end{cases}$$</span> To solve this system of equations, it is necessary to solve the equation below: <span class="math-container">$$m^3+\left(\frac{7}{9m}\right)^3=\frac{7}{27}$$</span> <span class="math-container">$$\frac{729m^6-189m^3+343}{729m^3}=0$$</span> Substituting <span class="math-container">$t$</span> for <span class="math-container">$m^3$</span>: <span class="math-container">$$729t^2-189t+343=0$$</span> From here: <span class="math-container">$$t=\frac{7\sqrt{3} i}{18}+\frac{7}{54}$$</span> <span class="math-container">$$t=-\frac{7\sqrt{3} i}{18}+\frac{7}{54}$$</span> Since <span class="math-container">$m^3=t$</span>, the solutions are obtained by taking the cube roots of each <span class="math-container">$t$</span> (with <span class="math-container">$m \neq 0$</span>):<br /> <span class="math-container">$$m=\frac{{\sqrt{7}e}^\frac{-\arctan(3\sqrt{3})i+2πi}{3}}{3}$$</span> <span class="math-container">$$m=\frac{{\sqrt{7}e}^\frac{-\arctan(3\sqrt{3})i+4πi}{3}}{3}$$</span> <span class="math-container">$$m=\frac{{\sqrt{7}e}^\frac{\arctan(3\sqrt{3})i}{3}}{3}$$</span> <span class="math-container">$$m=\frac{{\sqrt{7}e}^\frac{-\arctan(3\sqrt{3})i}{3}}{3}$$</span> <span class="math-container">$$m=\frac{{\sqrt{7}e}^\frac{\arctan(3\sqrt{3})i+2πi}{3}}{3}$$</span> <span class="math-container">$$m=\frac{{\sqrt{7}e}^\frac{\arctan(3\sqrt{3})i+4πi}{3}}{3}$$</span> We can find 
<span class="math-container">$n$</span> using <span class="math-container">$m$</span>, <span class="math-container">$y$</span> using <span class="math-container">$m$</span> and <span class="math-container">$n$</span>, and <span class="math-container">$x$</span> using <span class="math-container">$y$</span>:</p> <p><span class="math-container">$$x_1=\frac{ -\sqrt{7}\cos\left( \frac{\arccos\left( \frac{\sqrt{7}}{14} \right) }{3} \right) -\sqrt{21}\sin\left( \frac{\arccos\left( \frac{\sqrt{7}}{14} \right) }{3} \right)-1}{3} \approx -1.801937736$$</span></p> <p><span class="math-container">$$x_2=\frac{ 2\sqrt{7}\cos\left( \frac{\arccos\left( \frac{\sqrt{7}}{14} \right) }{3} \right) -1}{3} \approx 1.246979604$$</span></p> <p><span class="math-container">$$x_3=\frac{ \sqrt{21}\sin\left( \frac{\arccos\left( \frac{\sqrt{7}}{14} \right) }{3} \right) -\sqrt{7}\cos\left( \frac{\arccos\left( \frac{\sqrt{7}}{14} \right) }{3} \right)-1}{3} \approx -0.445041868$$</span> So we have to solve the following inequality: <span class="math-container">$$(x-x_1)(x-x_2)(x-x_3) \ge 0$$</span></p> <hr /> <p>Approximating the roots to two decimal places, <span class="math-container">$$(x+1.80)(x-1.25)(x+0.45)\ge 0$$</span> so the solution set is <span class="math-container">$$x \in \left[x_1,x_3\right] \cup \left[x_2,+\infty\right) \approx \left[-1.80,-0.45\right] \cup \left[1.25,+\infty\right)$$</span></p>
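A quick floating-point verification of the three trigonometric root formulas above (my own check, not part of the answer):

```python
from math import acos, cos, sin, sqrt, isclose

f = lambda x: x**3 + x**2 - 2*x - 1
th = acos(sqrt(7) / 14) / 3  # the shared angle arccos(sqrt(7)/14)/3

x1 = (-sqrt(7) * cos(th) - sqrt(21) * sin(th) - 1) / 3
x2 = (2 * sqrt(7) * cos(th) - 1) / 3
x3 = (sqrt(21) * sin(th) - sqrt(7) * cos(th) - 1) / 3

for r in (x1, x2, x3):
    assert isclose(f(r), 0, abs_tol=1e-9)  # each formula really is a root
print(x1, x3, x2)
```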
151,076
<p>If A and B are partially ordered sets such that there are injective order-preserving maps from A to B and from B to A, is there necessarily an order-preserving bijection between A and B?</p>
Asaf Karagila
622
<p>One does not have to go to partially ordered sets for a counterexample. Linear orders are enough: Consider the rationals in the open interval $(0,1)$ and the rationals in the closed interval $[0,1]$. </p> <p>The two obviously embed into one another but are not isomorphic due to minimum/maximum considerations. </p> <hr> <p>My example shows that linear orders need not have this property; and Brian's example shows that well-founded partially ordered sets need not have this property.</p> <p>However if we consider the intersection, well-founded linear orders (i.e. well orders) then indeed this is true. Namely, if $(A,&lt;)$ and $(B,\prec)$ are two well ordered sets and $(A,&lt;)$ embeds into $(B,\prec)$ and vice versa then there exists an isomorphism.</p>
151,076
<p>If A and B are partially ordered sets such that there are injective order-preserving maps from A to B and from B to A, is there necessarily an order-preserving bijection between A and B?</p>
Community
-1
<p>Here's an example with <em>scattered</em> countable linear orderings: a set of order type $\omega^*\omega$ and a set of order type $1+\omega^*\omega$.</p>
1,844,894
<p>To explain my question, here is an example.</p> <p>Below is an AP:</p> <p>2, 6, 10, 14....n</p> <p>Calculating the nth term in this sequence is easy because we have a formula. The common difference (d = 4) in an AP is constant, and that's why the formula is applicable, I think.</p> <p>But what about this sequence:</p> <p>5, 12, 21, 32....n</p> <p>Here, the difference between two consecutive elements is not constant, but it too has a pattern which all of you may have guessed. Taking the differences between its consecutive elements and forming a sequence results in an AP. For the above example, the AP looks like this:</p> <p>7, 9, 11, 13.....n</p> <p>So given a sequence with a "uniformly varying common difference", is there any formula to calculate the nth term of this sequence?</p>
Archis Welankar
275,884
<p>Note that we can develop a combinatorial argument which gives the total number of ways as ${n\choose r}$, where $r$ is the total number of summations.</p>
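The question itself — the $n$th term of a sequence whose differences form an AP — has a standard answer via the forward-difference formula $a_n = a_1 + (n-1)\Delta + \frac{(n-1)(n-2)}{2}\Delta^2$, where $\Delta$ is the first difference and $\Delta^2$ the constant second difference. A small sketch (my own addition, not drawn from the posted answer):

```python
def nth_term(seq, n):
    """n-th term (1-indexed) of a sequence with constant second differences."""
    a1 = seq[0]
    d1 = seq[1] - seq[0]           # first difference
    d2 = (seq[2] - seq[1]) - d1    # constant second difference
    return a1 + (n - 1) * d1 + (n - 1) * (n - 2) * d2 // 2

seq = [5, 12, 21, 32]
print([nth_term(seq, n) for n in range(1, 7)])  # [5, 12, 21, 32, 45, 60]
```

For the question's example this simplifies to $a_n=n^2+4n$.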
1,511,733
<p>B = the matrix given below; I is the identity matrix.</p> <pre><code> [1 2 3 4] [3 2 4 3] [1 3 2 4] [5 4 3 7] </code></pre> <p>So what will be the relation between the matrices A and C if AB = I and BC = I?</p> <p>I think that A = C because both AB and BC have B in common and both products equal the identity matrix, but I'm not sure. </p>
Yiorgos S. Smyrlis
57,021
<p>If $AB=I$, then $B$ is invertible and possesses a unique inverse, and clearly, this can only be the matrix $A$. Hence $$ AB=I\qquad\Longrightarrow\qquad AB=BA=I. $$ Likewise, if $BC=I$, then $C$ is the inverse of $B$, the <em>unique inverse</em>, and therefore $A=C$.</p>
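To see this concretely for the matrix $B$ in the question (my own check with exact rational arithmetic; $B$ turns out to be invertible, so the unique inverse exists):

```python
from fractions import Fraction

B = [[1, 2, 3, 4], [3, 2, 4, 3], [1, 3, 2, 4], [5, 4, 3, 7]]

def inverse(M):
    """Exact inverse by Gauss-Jordan elimination on the augmented matrix [M | I]."""
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)  # partial pivot
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

Binv = inverse(B)
I = [[Fraction(int(i == j)) for j in range(4)] for i in range(4)]
print(matmul(Binv, B) == I and matmul(B, Binv) == I)  # True: A = C = B^(-1)
```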
335,483
<p>Let $N$ be the set of non-negative integers. Of course we know that $a+b=0$ implies that $a=b=0$ for $a, b \in N$.</p> <p>How do (or can) we prove this fact if we don't know subtraction or order?</p> <p>In other words, we can only use addition and multiplication.</p> <p>Please give me advice.</p> <p>EDIT</p> <p>The addition law means that for $a, b \in N$, there is an element $a+b$ in $N$ and this operation is associative. The multiplication law means that for $a, b \in N$, there is an element $ab$ in $N$ and this operation is associative. Also the distributive laws hold.</p> <hr> <p>EDIT2</p> <p>Let me rephrase the question since I don't want arguments about orders.</p> <p>Let $N$ be a set with operations $+$ and $\times$.</p> <p>$N$ is a monoid under $+$ and under $\times$. There is a unit element $0\in N$ for addition.</p> <p>The distributive laws hold as in the case of the set of integers.</p> <p>Can we prove the fact above with these assumptions?</p>
Doug Spoonwood
11,300
<p>You don't need $+$ and $\times$ to form monoids on the natural numbers, nor the distributive property. You can get by with less, as follows:</p> <p>Addition has the property that $(0+b)\leq(a+b)$ and $(a+0)\leq(a+b)$. This is NOT presupposing the order of the natural numbers, or that no natural number has 0 as a successor, but only a monotonicity property of addition.</p> <p>We'll also assume the existence of an additive identity.</p> <p>Consequently, if $(a+b)=0$, then $b=(0+b)\leq(a+b)=0.$ So, $b\leq 0.$ Also, $0=(0+0)\leq (0+b)=b$ by substitution of 0 for a in $(a+0)\leq (a+b)$ and the identity rule. Thus, $0\leq b.$ So, $0 \leq b\leq 0.$ Consequently, $b=0.$</p> <p>Similarly, $a=(a+0)\leq(a+b)=0.$ Thus, $a\leq 0$. Also, $0=(0+0)\leq(a+0)=a.$ So, $0\leq a.$ Thus, $0 \leq a\leq 0.$ Consequently, $a=0.$</p> <p>Therefore, for an algebraic structure with an identity element "$0$", binary operation "$+$" and where $(0+b)\leq (a+b),$ and $(a+0)\leq (a+b),$ and $a \leq x \leq a \implies x=a,$ it holds that "if $(a+b)=0,$ then $a=b=0$".</p> <p>As an example of an algebraic structure where this holds, and "$+$" is not natural number addition, let "$+$" denote the maximum of two numbers, and consider $(\{0, 1\}, +)$. Both suppositions used in the proof above can be verified.</p>
380,530
<p>It's easy to show that there's a function such that $\int_1^\infty f $ diverges, but $\int_1^\infty |f|$ converges, such as $f = 1/(-1+x)$. </p> <p>But is there a function such that $\int_1^\infty f $ converges, but $\int_1^\infty |f| $ diverges?</p>
Clement C.
75,808
<p>What about $x\mapsto\frac{\sin(x-1)}{x-1}$? (By definition of what you ask, it cannot be Lebesgue integrable, but $\int_{0}^{\uparrow\infty} \frac{\sin x}{x} dx$ does converge to $\frac{\pi}{2}$; and I'm almost sure the integral of its absolute value diverges.)</p>
380,530
<p>It's easy to show that there's a function such that $\int_1^\infty f $ diverges, but $\int_1^\infty |f|$ converges, such as $f = 1/(-1+x)$. </p> <p>But is there a function such that $\int_1^\infty f $ converges, but $\int_1^\infty |f| $ diverges?</p>
Mark McClure
21,361
<p>Sami is correct. Here are some more details.</p> <p>$$\left|\int_a^{\infty} f(x) \, dx\right| \leq \int_a^{\infty} \left|f(x)\right| \, dx.$$</p> <p>Thus, if the second integral converges then certainly the first does as well.</p> <p>Next, it's not too hard to show that </p> <p>$$\int_0^{\infty} \frac{\sin(x)}{x} \, dx = \sum_{n=0}^{\infty} \int_{n\pi}^{(n+1)\pi} \frac{\sin(x)}{x} \, dx$$</p> <p>and that this last series converges by the alternating series test.</p> <p>Finally, note that</p> <p>$$\int_{\frac{2n+1}{2}\pi-\frac{\pi}{4}}^{\frac{2n+1}{2}\pi+\frac{\pi}{4}} \left|\frac{\sin(x)}{x}\right|\,dx \geq \frac{\pi}{2}\cdot\frac{\sqrt{2}}{2}\cdot\frac{1}{\frac{2n+1}{2}\pi+\frac{\pi}{4}} &gt; \frac{1}{5n}.$$</p> <p>This is simply a lower bound for the integrand on the interval (using the right endpoint, where $1/x$ is smallest) times the length of the interval. It follows that</p> <p>$$\int_0^{\infty} \left|\frac{\sin(x)}{x}\right| \, dx$$</p> <p>dominates</p> <p>$$\sum_{n=1}^{\infty} \frac{1}{5n},$$</p> <p>which is a divergent series.</p>
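Both conclusions can be checked numerically (my own sketch using a composite Simpson rule; the thresholds below are loose sanity margins, not sharp estimates):

```python
import math

def simpson(f, a, b, steps):
    """Composite Simpson rule; steps must be even."""
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

g = lambda x: math.sin(x) / x if x else 1.0  # continuous extension at 0

T = 200 * math.pi
signed = simpson(g, 0, T, 200000)                    # settles near pi/2
absval = simpson(lambda x: abs(g(x)), 0, T, 200000)  # keeps growing like log T
quarter = simpson(lambda x: abs(g(x)), 0, T / 4, 50000)

print("signed integral:", signed, "vs pi/2 =", math.pi / 2)
print("absolute integral grew by", absval - quarter, "between T/4 and T")
```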
3,155,463
<blockquote> <p><span class="math-container">$$ \lim _{n \rightarrow \infty}\left[n-\frac{n}{e}\left(1+\frac{1}{n}\right)^{n}\right] \text { equals }\_\_\_\_ $$</span></p> </blockquote> <p>I tried to expand the term in power using binomial theorem but still could not obtain the limit. </p>
TheSilverDoe
594,484
<p>One has <span class="math-container">$$n - \frac{n}{e} \left( 1 + \frac{1}{n}\right)^n = n - \frac{n}{e} \exp \left( n \ln \left( 1 + \frac{1}{n}\right)\right) =n - \frac{n}{e} \exp \left( n \left(\frac{1}{n}- \frac{1}{2n^2} + o \left( \frac{1}{n^2}\right)\right)\right) $$</span></p> <p>so <span class="math-container">$$n - \frac{n}{e} \left( 1 + \frac{1}{n}\right)^n = n - \frac{n}{e} \exp \left( 1 -\frac{1}{2n} + o \left( \frac{1}{n}\right)\right) = n - n \left( 1 - \frac{1}{2n} + o \left( \frac{1}{n}\right)\right) $$</span></p> <p>so <span class="math-container">$$n - \frac{n}{e} \left( 1 + \frac{1}{n}\right)^n = \frac{1}{2} + o(1).$$</span></p> <p>Therefore the limit is <span class="math-container">$$\frac{1}{2}$$</span></p>
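A numerical check of the limit (my own addition; log1p keeps the computation accurate for large $n$, avoiding the cancellation in $(1+1/n)^n$):

```python
import math

def term(n):
    """n - (n/e) * (1 + 1/n)^n, computed via log1p to stay accurate for large n."""
    return n - n * math.exp(n * math.log1p(1.0 / n) - 1.0)

for n in (10, 1000, 10 ** 6):
    print(n, term(n))  # approaches 1/2, with an O(1/n) correction
```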
1,485,327
<p>Show that $f(x,y)$ defined by:</p> <p>$$f(x,y) = \begin{cases}\dfrac{x^2y^2}{\sqrt{x^2+y^2}}&amp;\text{ if }(x,y)\not =(0,0)\\0 &amp;\text{ if }(x,y)=(0,0)\end{cases}$$</p> <p>is differentiable at $(x,y) = (0,0)$.</p> <p>I tried to solve this problem by applying the theorem that if the partial derivatives are continuous then the function is differentiable. Therefore, I calculated the partial derivatives, but now I am stuck on showing they are indeed continuous. Help me!</p>
mathcounterexamples.net
187,663
<p>You have $$x^2+y^2-2 \vert xy \vert=(\vert x \vert - \vert y \vert)^2 \ge 0$$ Hence $$\vert xy \vert \le \frac{x^2+y^2}{2}$$ and $$0 \le \frac{\vert f(x,y) \vert}{\sqrt{x^2+y^2}} = \frac{x^2y^2}{x^2+y^2} \le \frac{1}{4}(x^2+y^2)$$ As $\lim_{(x,y) \to (0,0)} x^2+y^2 = 0$, this proves that $f$ is differentiable at $(0,0)$ and that its Fréchet derivative is equal to $0$, which means that $f_x(0,0)=f_y(0,0)=0$.</p>
1,485,327
<p>Show that $f(x,y)$ defined by:</p> <p>$$f(x,y) = \begin{cases}\dfrac{x^2y^2}{\sqrt{x^2+y^2}}&amp;\text{ if }(x,y)\not =(0,0)\\0 &amp;\text{ if }(x,y)=(0,0)\end{cases}$$</p> <p>is differentiable at $(x,y) = (0,0)$.</p> <p>I tried to solve this problem by applying the theorem that if the partial derivatives are continuous then the function is differentiable. Therefore, I calculated the partial derivatives, but now I am stuck on showing they are indeed continuous. Help me!</p>
Mercy King
23,304
<p>For every non-zero $h=(h_1,h_2)\in \mathbb{R}^2$ we have: $$ |f(h)-f(0)|=|f(h)|=\frac{h_1^2h_2^2}{\|h\|_2}\le \frac{\|h\|_2^4}{\|h\|_2}=\|h\|_2^3, $$ and therefore $$ \lim_{\|h\|_2\to0}\frac{|f(h)-f(0)|}{\|h\|_2}=0. $$ This shows that $f$ is differentiable at $(0,0)$, and $Df(0)\equiv 0$.</p>
2,784,697
<p>Find the solutions to: $\displaystyle\frac{d^2y}{dx^2}=\left(\frac{dy}{dx}\right)^2$. </p> <p>I got the following solutions:</p> <p>$\left(\frac{dy}{dx}\right)=0\Rightarrow y=c_1$ is a solution</p> <p>$\left(\frac{dy}{dx}\right)=1\Rightarrow y=x+c_2$ is another solution </p> <p>Are there any other solutions?</p> <p>I don't have any idea about how to solve a $2^{nd}$ order non-linear DE. As far as I know, a $2^{nd}$ order linear DE can be solved with the help of auxiliary equations. Is there any similar method applicable to this problem?</p>
Dr. Sonnhard Graubner
175,066
<p>Substitute $$\frac{dy(x)}{dx}=v(x)$$ and then you will get $$\frac{\frac{dv(x)}{dx}}{v(x)^2}=1$$</p>
2,784,697
<p>Find the solutions to: $\displaystyle\frac{d^2y}{dx^2}=\left(\frac{dy}{dx}\right)^2$. </p> <p>I got the following solutions:</p> <p>$\left(\frac{dy}{dx}\right)=0\Rightarrow y=c_1$ is a solution</p> <p>$\left(\frac{dy}{dx}\right)=1\Rightarrow y=x+c_2$ is another solution </p> <p>Are there any other solutions?</p> <p>I don't have any idea about how to solve a $2^{nd}$ order non-linear DE. As far as I know, a $2^{nd}$ order linear DE can be solved with the help of auxiliary equations. Is there any similar method applicable to this problem?</p>
Dylan
135,643
<p>Notice there is no 0th order derivative here. Hence, this is actually just a first-order equation in disguise. Substitute $v = \frac{dy}{dx}$ $$ \frac{dv}{dx} = v^2 $$</p> <p>Separate this and solve</p> <p>$$ v(x) = \frac{1}{c_1-x} $$</p> <p>Then integrate back</p> <p>$$ y(x) = c_2 -\ln|c_1-x| $$</p>
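A finite-difference check that the family $y(x)=c_2-\ln|c_1-x|$ really satisfies $y''=(y')^2$ (my own sketch, with arbitrarily chosen constants $c_1=5$, $c_2=2$):

```python
import math

c1, c2, h = 5.0, 2.0, 1e-5
y = lambda x: c2 - math.log(abs(c1 - x))  # the general solution found above

for x in (0.0, 1.0, 3.0):
    d1 = (y(x + h) - y(x - h)) / (2 * h)             # central first derivative
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2   # central second derivative
    assert abs(d2 - d1 ** 2) < 1e-4                  # y'' = (y')^2 holds
print("ODE satisfied at the sample points")
```

Analytically this is immediate: $y'=1/(c_1-x)$ and $y''=1/(c_1-x)^2=(y')^2$.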
1,170,602
<p>How to evaluate the integral </p> <p>$$\int \sqrt{\sec x} \, dx$$</p> <p>I read that it's not defined.<br> But why is that so? Does it contradict some basic rule? Please clarify.</p>
Tryss
216,059
<p>$$\int_{a}^b \frac{1}{\sqrt{\cos(x)}}dx$$</p> <p>is defined if $]a,b[\subset ]-\frac{\pi}{2}+2k\pi, \frac{\pi}{2}+2k\pi[$. But you can't calculate it with the usual functions; you'll need "special" functions:</p> <p><a href="http://en.wikipedia.org/wiki/Elliptic_integral#Incomplete_elliptic_integral_of_the_first_kind" rel="nofollow">http://en.wikipedia.org/wiki/Elliptic_integral#Incomplete_elliptic_integral_of_the_first_kind</a></p>
1,502,309
<p>The initial notation is:</p> <p>$$\sum_{n=5}^\infty \frac{8}{n^2 -1}$$</p> <p>I get to about here then I get confused.</p> <p>$$\left(1-\frac{2}{3}\right)+\left(\frac{4}{5}-\frac{4}{7}\right)+...+\left(\frac{4}{n-3}-\frac{4}{n-1}\right)+...$$</p> <p>How do you figure out how to get the $\frac{4}{n-3}-\frac{4}{n-1}$ and so on? Like, where does the $n-3$ come from, or the $n-1$?</p>
jameselmore
86,570
<p>Looking a little closer at the question, he is asking about partial fraction decomposition, as opposed to the value of the sum itself. For this particular example, it's fairly straightforward.</p> <p>When given a fraction with a polynomial denominator, you can factor the denominator and break the fraction into a sum of fractions whose denominators have lower degree.</p> <p>For example, we begin by identifying that: $$\frac{8}{n^2 - 1} = \frac{8}{(n+1)(n-1)}$$ We are then interested in finding constants $A,\ B$ such that $$\frac{8}{n^2 - 1} = \frac A{n+1} + \frac B{n-1}$$ All we need to do now is combine the right-hand side and solve for the $A,\ B$ that make it equal the original fraction.</p> <p>$$\frac A{n+1} + \frac B{n-1} = \frac{A(n-1) + B(n+1)}{(n+1)(n-1)} = \frac{(A+B)n + B - A}{(n+1)(n-1)}$$</p> <p>Since your original fraction has $8$ in the numerator, we make the following comparison: $$(A+B)n + B - A = 8$$ to conclude that $$A+B = 0;\ \ \ B - A = 8\implies -A = B = 4$$</p> <p>Hopefully this gets you past where you are stuck, and the supplementary answers can take you the rest of the way!</p> <p>EDIT:<br> The $(n-1)^{-1}$ and $(n-3)^{-1}$ terms arise from reindexing the series. As the series pushes on to infinity, looking at $(n-3)^{-1}$ and $(n-1)^{-1}$ is no different from looking at $(n-1)^{-1}$ and $(n+1)^{-1}$. What is important is the difference between them ($2$ indices).</p>
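With the decomposition in hand, the telescoping claim can be verified exactly (my own sketch using rational arithmetic): the partial sum from $n=5$ to $N$ collapses to $\frac{9}{5}-\frac{4}{N}-\frac{4}{N+1}$, since $\frac{8}{n^2-1}=\frac{4}{n-1}-\frac{4}{n+1}$ leaves only the edge terms.

```python
from fractions import Fraction

def partial_sum(N):
    """Exact partial sum of 8/(n^2 - 1) for n = 5 .. N."""
    return sum(Fraction(8, n * n - 1) for n in range(5, N + 1))

# telescoping with 8/(n^2-1) = 4/(n-1) - 4/(n+1) leaves only the edge terms:
closed = lambda N: Fraction(9, 5) - Fraction(4, N) - Fraction(4, N + 1)

for N in (5, 10, 100, 1000):
    assert partial_sum(N) == closed(N)
print(float(closed(10 ** 6)))  # tends to 9/5 as N grows
```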
3,501,052
<p>I want to find the number of real roots of the polynomial <span class="math-container">$x^3+7x^2+6x+5$</span>. Using Descartes' rule, this polynomial has either only one real root or 3 real roots (all are negative). How will we conclude one answer without doing some long process?</p>
2'5 9'2
11,123
<p>The first derivative is <span class="math-container">$f'(x)=3x^2+14x+6$</span>. We observe that this is negative when <span class="math-container">$x=-1$</span>: <span class="math-container">$f'(-1)=-5$</span>. From this we consider the tangent line <span class="math-container">$y=-5(x+1)+5$</span>.</p> <p>There is another tangent line where the graph crosses the <span class="math-container">$y$</span>-axis: <span class="math-container">$y=6x+5$</span>.</p> <p>We can look at the second derivative to see that the curve's inflection point happens when <span class="math-container">$x=-\frac{7}{3}$</span>. The value doesn't matter, just note this is to the left of <span class="math-container">$-1$</span>.</p> <p>After solving the system of equations from the two lines, we can find they cross at a point whose <span class="math-container">$y$</span>-value is <span class="math-container">$\frac{25}{11}$</span>, which is positive. It follows that the entirety of the curve to the right of the inflection point is positive. And it follows from <em>that</em> that there can only be one real root.</p> <p><a href="https://i.stack.imgur.com/YWJ18.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YWJ18.png" alt="enter image description here"></a></p>
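A quick SymPy check of both claims here (the root count and the positive crossing height); this is my own verification, not part of the original answer:

```python
import sympy as sp

x = sp.symbols('x')
p = x**3 + 7*x**2 + 6*x + 5

# Count the real roots of the cubic directly
roots = sp.real_roots(p)
print(len(roots))  # 1

# The two tangent lines meet at a positive height
y1 = -5*(x + 1) + 5          # tangent at x = -1
y2 = 6*x + 5                 # tangent at x = 0 (the y-intercept)
crossing = sp.solve(sp.Eq(y1, y2), x)[0]
print(y2.subs(x, crossing))  # 25/11
```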
646,010
<p>So i kinda think i have figured this out, i'm not very good at math, and need a formula to figure out some stats for a game i'm playing.</p> <p>I have a Weapon with a reload speed of X sec.. however, i also have a modifier attached, that will make the weapon reload faster by +Y%</p> <p>i made this formula, mostly by guessing, as i got no clue what i am doing.</p> <pre><code>100/(100+Y)*X </code></pre> <p>the results i am getting looks right to me, but is the formula ok?</p>
Mhenni Benghorbal
35,472
<p>To see it (convergence), make the change of variables <span class="math-container">$t=\ln(x)$</span> which gives</p> <blockquote> <p><span class="math-container">$$ \int _{-\infty }^{\infty }\!{t}^{2}\sin \left( {{\rm e}^{2\,t}} \right) {{\rm e}^{t}}{dt}.$$</span></p> </blockquote>
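As a numerical sanity check of this substitution on a finite piece (my own check; the bounds $[1,2]$ are an arbitrary choice): with $t=\ln x$, $\int_1^2 \ln^2(x)\sin(x^2)\,dx$ must equal $\int_0^{\ln 2} t^2\sin(e^{2t})\,e^t\,dt$.

```python
from mpmath import mp, quad, log, sin, exp

mp.dps = 25

# Original variable x on [1, 2]
lhs = quad(lambda x: log(x)**2 * sin(x**2), [1, 2])

# After t = ln x (so x = e^t and dx = e^t dt), the range becomes [0, ln 2]
rhs = quad(lambda t: t**2 * sin(exp(2*t)) * exp(t), [0, log(2)])

print(lhs, rhs)
```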
646,010
Felix Marin
85,343
<p>$\newcommand{\+}{^{\dagger}}% \newcommand{\angles}[1]{\left\langle #1 \right\rangle}% \newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}% \newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}% \newcommand{\ceil}[1]{\,\left\lceil #1 \right\rceil\,}% \newcommand{\dd}{{\rm d}}% \newcommand{\down}{\downarrow}% \newcommand{\ds}[1]{\displaystyle{#1}}% \newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}% \newcommand{\expo}[1]{\,{\rm e}^{#1}\,}% \newcommand{\fermi}{\,{\rm f}}% \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}% \newcommand{\half}{{1 \over 2}}% \newcommand{\ic}{{\rm i}}% \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow}% \newcommand{\isdiv}{\,\left.\right\vert\,}% \newcommand{\ket}[1]{\left\vert #1\right\rangle}% \newcommand{\ol}[1]{\overline{#1}}% \newcommand{\pars}[1]{\left( #1 \right)}% \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}}% \newcommand{\root}[2][]{\,\sqrt[#1]{\,#2\,}\,}% \newcommand{\sech}{\,{\rm sech}}% \newcommand{\sgn}{\,{\rm sgn}}% \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}}% \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ \begin{align} &amp;\color{#00f}{\large\int_{0}^{\infty}\ln^{2}\pars{x}\sin\pars{x^{2}}\,\dd x} =\int_{0}^{\infty}\ln^{2}\pars{x^{1/2}}\sin\pars{x}\,\half\,x^{-1/2}\dd x \\[3mm]&amp;={1 \over 8}\int_{0}^{\infty}x^{-1/2}\ln^{2}\pars{x}\sin\pars{x}\,\dd x \\[3mm]&amp;={1 \over 8}\lim_{\mu \to -1/2}\partiald[2]{}{\mu} \Im\int_{0}^{\infty}x^{\mu}\expo{\ic x}\,\dd x = {1 \over 8}\lim_{\mu \to -1/2}\partiald[2]{}{\mu} \Im\int_{0}^{\infty}\pars{\ic x}^{\mu}\expo{-x}\ic\,\dd x \\[3mm]&amp;= {1 \over 8}\lim_{\mu \to -1/2}\partiald[2]{}{\mu} \Re\bracks{\expo{\ic\pi\mu/2}\int_{0}^{\infty}x^{\mu}\expo{-x}\,\dd x} ={1 \over 8}\lim_{\mu \to -1/2}\partiald[2]{}{\mu} \Re\bracks{\expo{\ic\pi\mu/2}\Gamma\pars{\mu + 1}} \\[3mm]&amp;=\color{#00f}{\large{\root{2\pi} \over 
64}\bracks{\pi + 2\Psi\pars{1/2}}^{2}} \approx 0.0241614 \end{align} Notice that $\Psi\pars{1/2} = -\gamma - 2\ln\pars{2}$ such that an equivalent result is $$ \color{#00f}{\large{1 \over 32}\,\root{\pi \over 2}\bracks{2\gamma - \pi + \ln\pars{16}}^{2}} $$ which is the usual result of symbolic software.</p>
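The two closed forms above, and the quoted numerical value, can be confirmed with mpmath (my own check, using that $\Psi(1/2)=-\gamma-2\ln 2$):

```python
from mpmath import mp, mpf, sqrt, pi, euler, log, digamma

mp.dps = 30

# sqrt(2*pi)/64 * (pi + 2*psi(1/2))^2
v1 = sqrt(2*pi) / 64 * (pi + 2*digamma(mpf(1)/2))**2

# (1/32) * sqrt(pi/2) * (2*gamma - pi + ln 16)^2
v2 = sqrt(pi/2) / 32 * (2*euler - pi + log(16))**2

print(v1, v2)  # both approximately 0.0241614
```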
646,010
André Nicolas
6,312
<p>There is no real difficulty at $0$, since near $0$ the function $\sin(x^2)$ behaves like $x^2$, so $\lim_{x\to 0^+}\ln^2 x\sin(x^2)=0$. So we examine $$\int_1^B (\ln^2 x)( \sin(x^2))\,dx.\tag{1}$$ Rewrite as $$\int_1^B \frac{\ln^2 x}{2x} 2x \sin(x^2)\,dx,$$ and use integration by parts, letting $u=\frac{\ln^2 x}{2x}$ and $dv=2x\sin(x^2)\,dx$. Then $du=\frac{2\ln x-\ln^2 x}{2x^2}\,dx$ and we can take $v=-\cos(x^2)$. Thus our integral (1) is $$\left.\left(-\frac{\ln^2 x}{2x}\cos(x^2)\right)\right|_1^B +\int_1^B \frac{2\ln x-\ln^2 x}{2x^2}\cos(x^2)\,dx.$$ The first part gives no problem, indeed it vanishes as $B\to\infty$. The remaining integral has a (finite) limit as $B\to\infty$, because $\cos(x^2)$ is bounded and the $2x^2$ in the denominator crushes the $\ln$ terms in the numerator. </p> <p>It follows that our original integral converges. </p>
307,529
<p>I am trying to prove that if $L/K$ is an algebraic extension and if $\alpha \in L$, then </p> <ul> <li><p>$\alpha$ is separable over $K$ if $\mathrm{char}(K)=0$. This is clear because $K$ is perfect which in turn implies that $L/K$ is seperable . </p></li> <li><p>Now if $\mathrm{char}(K)=p$ is prime, then the statement is: $\alpha$ is separable if and only if $K(\alpha) =K(\alpha^p)$. This problem is from a past examination and looks very interesting, but somehow I am not able to connect what's happening with the extension of $K$ with $\alpha$ and $\alpha^p$ .</p></li> </ul> <p>I need some help here. </p> <p>This somehow makes me think that the minimal polynomial of $\alpha$ and $\alpha^p$ have different roots and $\alpha$ is given by some root of minimal polynomial of $\alpha^p$ . </p> <p>Note that I am using the definition of separability as follows : </p> <blockquote> <p>An extension $L/K$ is called separable if for every $\alpha \in L$ the minimal polynomial of $\alpha$ has distinct roots in $L$ . </p> </blockquote> <p>Thanks for helping. </p>
Martin Brandenburg
1,650
<p>Hint: If $\alpha \in L$, then there is some $n \in \mathbb{N}$ such that $\alpha^{p^n}$ is separable.</p>
941,632
<p>Is the Set $$S=\{e^{2x},e^{3x}\}$$ linearly independent?? And answer says Linearly independent over any interval $(a,b)$,only when $0$ doesnot belong to $(a,b)$</p> <p>How do I proceed??</p> <p>Thanks for the help!!</p>
Timbuc
118,527
<p>Suppose $\;r,s\in\Bbb R\;$ are such that</p> <p>$$re^{2x}+se^{3x}=0\;\;,\;\;\forall\,x\in (a,b)\implies e^{2x}(r+se^x)=0$$</p> <p>Since $\;e^t\neq0\;\;\forall\,t\in\Bbb R\;$, we get that</p> <p>$$r+se^x=0\;\;\forall\,x\in (a,b)\subset\Bbb R$$</p> <p>If $\;s=0\;$ this at once forces $\;r=0\;$. If $\;s\neq0\;$, then $\;e^x=-\frac rs\;$ for all $\;x\in(a,b)\;$; but the exponential is not a constant function on any non-trivial interval $\;(a,b)\;$, a contradiction. Hence $\;r=s=0\;$, and the set is linearly independent on any interval.</p>
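Alternatively (my own addition), the Wronskian of the two functions never vanishes, which also gives independence on any interval:

```python
import sympy as sp

x = sp.symbols('x')
f, g = sp.exp(2*x), sp.exp(3*x)

# Wronskian f*g' - f'*g; it is nonzero for every real x
W = sp.simplify(sp.wronskian([f, g], x))
print(W)  # exp(5*x)
```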
3,492,856
<p><a href="https://i.stack.imgur.com/FHCP2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FHCP2.jpg" alt="This is the question"></a> My solution:-</p> <p>Since <span class="math-container">$OA=AB$</span>, let us find OA first. <span class="math-container">$OA=\sqrt{(18-0)^2+(3-0)^2}=3\sqrt{37}$</span>. So, <span class="math-container">$AB=3\sqrt{37}$</span>. So, <span class="math-container">$$\sqrt{(15-18)^2+(k-3)^2}=3\sqrt{37}$$</span> <span class="math-container">$$\implies(-3)^2+(k-3)^2=333$$</span> <span class="math-container">$$\implies k^2-6k+9=324$$</span> <span class="math-container">$$\implies k^2-6k-315=0$$</span> <span class="math-container">$Using\\calculator$</span> <span class="math-container">$$k=21$$</span> or, <span class="math-container">$$k=-15$$</span></p> <p>Is my solution correct?</p> <p>I'm asking this because my book only mentions k=21 but not k=-15. But I don't find k=-15 to be an extraneous root as the equation holds true using -15 as the solution as well. So will the answer be both 21 and -15 or only -15?</p> <p>If there's any problem in my question please let me know. Thanks in advance! </p>
Community
-1
<p>Yes, you are: for all linear maps <span class="math-container">$T_1:V\to W$</span> and <span class="math-container">$T_2:W\to H$</span>, it is always the case that <span class="math-container">$\ker (T_2\circ T_1)\supseteq \ker T_1$</span>. In fact, if <span class="math-container">$T_1(x)=0$</span>, then <span class="math-container">$T_2(T_1(x))=T_2(0)=0$</span>.</p>
467,301
<p>I'm reading Intro to Topology by Mendelson.</p> <p>The problem statement is in the title.</p> <p>My attempt at the proof is:</p> <p>Since $X$ is a compact metric space, for each $n\in\mathbb{N}$, there exists $\{x_1^n,\dots,x_p^n\}$ such that $X\subset\bigcup\limits_{i=1}^p B(x_i^n;\frac{1}{n})$. Let $K=\frac{2p}{n}$. Then for each $x,y\in X$, $x\in B(x_i^n;\frac{1}{n})$ and $y\in B(x_j^n;\frac{1}{n})$ for some $i,j=1,\dots,p$. Thus, $d(x,y)\leq\frac{2p}{n}$.</p> <p>The approach I was taking is taking $K$ to be the addition of the diameters of each open ball in the covering for $X$, that way, for any two elements in $X$, the distance between them must be less than the overall length of the covering. Did I say this mathematically or are there holes I need to fill in?</p> <p>Thanks for any help or feedback!</p>
Elchanan Solomon
647
<p>Suppose $X$ consists of two points a distance of $100$ apart. Take $n=1$ and $p=2$. Your proof implies the two points are a distance of $4$ apart.</p>
467,301
<p>I'm reading Intro to Topology by Mendelson.</p> <p>The problem statement is in the title.</p> <p>My attempt at the proof is:</p> <p>Since $X$ is a compact metric space, for each $n\in\mathbb{N}$, there exists $\{x_1^n,\dots,x_p^n\}$ such that $X\subset\bigcup\limits_{i=1}^p B(x_i^n;\frac{1}{n})$. Let $K=\frac{2p}{n}$. Then for each $x,y\in X$, $x\in B(x_i^n;\frac{1}{n})$ and $y\in B(x_j^n;\frac{1}{n})$ for some $i,j=1,\dots,p$. Thus, $d(x,y)\leq\frac{2p}{n}$.</p> <p>The approach I was taking is taking $K$ to be the addition of the diameters of each open ball in the covering for $X$, that way, for any two elements in $X$, the distance between them must be less than the overall length of the covering. Did I say this mathematically or are there holes I need to fill in?</p> <p>Thanks for any help or feedback!</p>
Alex Youcis
16,497
<p>To round off the solutions, you can notice that $X\times X$ is compact, and $d:X\times X\to\mathbb{R}$ is continuous, and so attains its maximum; that maximum serves as the bound $K$.</p>
2,069,001
<p>A team of seven netballers is to be chosen from a squad of twelve players A, B, C, D, E, F, G, H, I, J, K, L. In how many ways can they be chosen:<br> a) with no restriction This is fairly easy. 12C7 = 792</p> <p>b) if the captain C is to be included 11C6 = 462</p> <p>c) If J and K are both to be excluded 10C7 = 120</p> <p>d) If A is included but H is not 10C6 = 210</p> <p>e) if one of F and L is to be included and the other excluded.</p> <p>This one I'm having trouble with. I'm not 100% about the others either.</p>
hamam_Abdallah
369,188
<p><strong>Hint</strong></p> <p>It is F without L : $10C6 =210$</p> <p>or L without F : $10C6=210$</p> <p>the result is $210+210=420$.</p>
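All five counts can be confirmed by brute-force enumeration (my own check, not part of the hint):

```python
from itertools import combinations
from math import comb

squad = list("ABCDEFGHIJKL")  # twelve players including captain C
teams = list(combinations(squad, 7))

print(len(teams))                                         # a) 792
print(sum('C' in t for t in teams))                       # b) 462
print(sum('J' not in t and 'K' not in t for t in teams))  # c) 120
print(sum('A' in t and 'H' not in t for t in teams))      # d) 210
print(sum(('F' in t) != ('L' in t) for t in teams))       # e) 420
```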
1,204,864
<blockquote> <p>$$\text{Find }\,\dfrac{d}{dx}\Big(\cos^2(5x+1)\Big).$$</p> </blockquote> <p>I have tried using the rules outlined in my standard derivatives notes but I've failed to find the point of application.</p>
3d0
217,450
<p>\begin{align*} \dfrac{dy}{dx} &amp; = \dfrac{d \cos^2(5x+1)}{dx}\\ &amp; = 2\cos(5x+1) \dfrac{d \cos(5x+1)}{dx}\\ &amp; = 2\cos(5x+1)(-\sin(5x+1))\dfrac{d(5x+1)}{dx}\\ &amp; = -2\cos(5x+1)\sin(5x+1) \cdot 5\\ &amp; = -10\cos(5x+1)\sin(5x+1)\\ &amp; = -5\sin(10x+2) \end{align*}</p>
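A quick symbolic check of the chain-rule computation (mine, not part of the answer):

```python
import sympy as sp

x = sp.symbols('x')
deriv = sp.diff(sp.cos(5*x + 1)**2, x)

# The result should equal the double-angle form -5*sin(10x + 2)
print(sp.simplify(deriv + 5*sp.sin(10*x + 2)))  # 0
```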
982,938
<p>I was doing the integration $$ \int\frac{1}{(u^2+a^2)^2}du $$ and I had a look at the lecturer's steps where I got stuck in the following step: $$ \int\frac{1}{(u^2+a^2)^2}du=\frac{1}{2a^2}\left(\frac{u}{u^2+a^2}+\int\frac{1}{u^2+a^2}du\right). $$ I guess it is integrating this by parts, but I could't see the trick. Could somebody help me with this please?</p> <p>I had some work after I asked this question here. Here is my approach. </p> <p>First note $$ d\left(\frac{a^2}{u^2+a^2}\right)=-\frac{2a^2u}{(u^2+a^2)^2}du, $$ then we have $$ \int\frac{1}{(u^2+a^2)^2}du=\int\left(-\frac{1}{2a^2u}\right)\left(-\frac{2a^2u}{(u^2+a^2)^2}\right)du=\frac{1}{2a^2}\int\left(-\frac{1}{u}\right)d\left(\frac{a^2}{u^2+a^2}\right). $$ Then integrate by parts, we have $$ \int\left(-\frac{1}{u}\right)d\left(\frac{a^2}{u^2+a^2}\right)=-\frac{a^2}{u(u^2+a^2)}-\int\frac{a^2}{u^2(u^2+a^2)}du. $$ But notice $$ \frac{a^2}{u^2(u^2+a^2)}=\frac{1}{u^2}-\frac{1}{u^2+a^2}, $$ then we get $$ \int\frac{a^2}{u^2(u^2+a^2)}du=-\frac{1}{u}-\int\frac{1}{u^2+a^2}du, $$ then $$ \int\left(-\frac{1}{u}\right)d\left(\frac{a^2}{u^2+a^2}\right)=-\frac{a^2}{u(u^2+a^2)}+\frac{1}{u}+\int\frac{1}{u^2+a^2}du. $$ But $$ \frac{1}{u}-\frac{a^2}{u(u^2+a^2)}=\frac{u}{u^2+a^2}, $$ then $$ \int\frac{1}{(u^2+a^2)^2}du=\frac{1}{2a^2}\int\left(-\frac{1}{u}\right)d\left(\frac{a^2}{u^2+a^2}\right)=\frac{1}{2a^2}\left(\frac{u}{u^2+a^2}+\int\frac{1}{u^2+a^2}du\right). $$</p> <p>Is this approach okay? I am positive with this but not sure it is comprehensive. </p>
Shivang jindal
38,505
<p>Let $$I(a)= \int \frac{dx}{a+x^2} = \frac{1}{\sqrt{a}}\arctan\left(\frac{x}{\sqrt{a}}\right).$$ Differentiating under the integral sign with respect to the parameter $a$, $$I'(a) = \int \frac{-\,dx}{(a+x^2)^2} = \frac{d}{da}\left( \frac{1}{\sqrt{a}}\arctan\left(\frac{x}{\sqrt{a}}\right)\right). $$ Compute the derivative on the right-hand side, then replace the parameter $a$ by $a^2$ (and the variable $x$ by $u$). And you are done! :)</p>
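The differentiation-under-the-integral step can be checked symbolically; this is my own sketch: the mixed partial $\partial_a\partial_x$ of the antiderivative must give $-1/(a+x^2)^2$, since $\partial_x I = 1/(a+x^2)$ and the partial derivatives commute.

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

# Antiderivative I(a) of 1/(a + x^2) in x, with a as a parameter
I = sp.atan(x / sp.sqrt(a)) / sp.sqrt(a)

# d/da then d/dx should give -1/(a + x^2)^2
mixed = sp.diff(I, a, x)
print(sp.simplify(mixed + 1 / (a + x**2)**2))  # 0
```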
3,224,102
<p>For a given curve: <span class="math-container">$$C: \frac {ax^2+bx+c}{dx+e} $$</span> where <span class="math-container">$a,b,c,d,e$</span> are integers, let <strong><span class="math-container">$f(x)=ax^2+bx+c$</span></strong>.</p> <hr> <p>The oblique asymptote can be found by long division of the numerator by the denominator. Here the oblique asymptote is <strong>y=<span class="math-container">$\frac {a}{d}x$</span>+ <span class="math-container">$\frac {b}{d}$</span></strong>.</p> <hr> <p>Now if I were to multiply <strong><span class="math-container">$y$</span></strong> by <strong><span class="math-container">$dx+e$</span></strong>, I will get <strong><span class="math-container">$ax^2+bx+q$</span></strong>, where <strong>q<span class="math-container">$\not=$</span>c</strong>, which is close to my f(x) but doesn't get me back to my original <strong>f(x)</strong>: one constant value is different. So what needs to be done to recover the exact f(x) part of the curve? Please help!!</p>
Community
-1
<p>In order of appearance:</p> <ul> <li><p>"the oblique asymptote of <span class="math-container">$f(x)=\frac{ax^2+bx+c}{dx+e}$</span> can be found by computing the polynomial long division": <strong>TRUE</strong></p></li> <li><p>"the result of the aforementioned operation is <span class="math-container">$\frac adx+\frac bd$</span>": <strong>FALSE</strong>; check your calculations.</p></li> <li><p>"there is some <span class="math-container">$q$</span> such that <span class="math-container">$\left(\frac adx+\frac bd\right)\left(dx+e\right)=ax^2+bx+q$</span>": <strong>FALSE</strong>, for the same reason as before.</p></li> <li><p>"if <span class="math-container">$\mu x+\rho$</span> is the oblique asymptote of <span class="math-container">$f(x)$</span>, it may be the case that <span class="math-container">$(\mu x+\rho)(dx+e)\ne ax^2+bx+c$</span>" : <strong>TRUE</strong>; it is in fact consistent with the definition of oblique asymptote (at <span class="math-container">$+\infty$</span>) being the affine function <span class="math-container">$G_{\mu,\rho}(x)=\mu x+\rho$</span> such that <span class="math-container">$\lim_{x\to\infty} f(x)-G_{\mu,\rho}(x)=0$</span>. The possibility that the infinitesimal quantity <span class="math-container">$f(x)-G_{\mu,\rho}(x)$</span> multiplied by <span class="math-container">$(dx+e)$</span> may become a non-zero constant is completely within expectations.</p></li> </ul>
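A small SymPy sketch of the correct quotient (my own; `mu` and `rho` are names I chose for the asymptote's coefficients). The idea: $\mu x+\rho$ is the oblique asymptote exactly when $ax^2+bx+c-(\mu x+\rho)(dx+e)$ has degree less than $1$ in $x$.

```python
import sympy as sp

x, a, b, c, d, e, mu, rho = sp.symbols('x a b c d e mu rho')

# Kill the x^2 and x coefficients of numerator - (mu*x + rho)*(denominator)
diff_poly = sp.Poly(a*x**2 + b*x + c - (mu*x + rho)*(d*x + e), x)
sol = sp.solve(diff_poly.coeffs()[:2], [mu, rho])

print(sol[mu], sol[rho])  # mu = a/d, rho simplifies to (b*d - a*e)/d**2, not b/d
```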
1,087,080
<p>By definition, a closed set is a set that contains its limit points. However, by the time the closed set contains its limit points, those points are no longer limit points and become isolated points. For example:</p> <p>$\mathbf A = \{\frac{1}{n}: n \in \mathbb N \}$. The limit of this set (set $\mathbf A$) is clearly equal to $0$. This is because the $\epsilon$ -neighborhood $\mathbf V_{\epsilon}(0) \cap \mathbf A = \{\frac{1}{n} \}$, and $\frac{1}{n} \neq 0$. However, when $0$ is included, the $\epsilon$ -neighborhood $\mathbf V_{\epsilon}(0) \cap \mathbf A = \{0 \}$ for $\mathbf A=[0,\frac{1}{n} ]$. This would contradict the definition of a limit point of set A, and hence $0$ must be an isolated point.</p> <p>Another example: $\left(a,b\right)$ is an open interval with limit points $a$ and $b$. Then its closure $\bar A$ will be $\left[a,b\right]$. By definition, the $\epsilon$ -neighborhood of any point in $\left[a,b\right]$ intersects the closure $\bar A$ at that same point, and hence, no point in that closure is a limit point: a contradiction with the fact that closures are closed sets.</p> <p>Also, I am trying to prove the lemma: If x is a limit point of $A \subseteq A'$, then x is a limit point of $A'$. Proof: Suppose x is a limit point of $A$; then there exists a sequence $(a_n)$ $\subset A \subseteq A'$: lim($a_n$)=x with $a_n$ $\neq x \forall n \in \mathbb N$. Then since $(a_n)$ $\subset A'$, it follows that x must be a limit point of $A'$.</p> <p>So my questions are:</p> <p><strong>1. What is wrong with my contradiction in the 2 examples? Please explain them to me. 2. Is my proof for the lemma correct? I am going to use it for the proof that the closure is closed.</strong></p> <p>My background: I am studying elementary Real Analysis by starting with Abbott. 
I thank you very much for your help.</p> <p><strong>Extra question</strong>: We have this theorem: x is a limit point of set $A$ if and only if there exists a sequence $(a_n) \subset A$ such that $\lim (a_n)=x$ $\forall a_n \neq x$. So, for some finite $n \in \mathbb N$ such that $a_n = x$, x is still a limit point of set A . Is this correct? I thought that x would be an isolated points since we need $a_n \neq x \forall n \in \mathbb N$</p> <p>I thank you again for your answers.</p>
Mark Bennet
2,906
<p>In the first case your neighbourhood of zero includes $\frac 1{n+1}, \frac 1{n+2} \dots$</p> <p>In the second case I'm not sure what you mean. Any neighbourhood of a point in $[a,b]$ (we take $b\gt a$) intersects $[a,b]$ in (a set containing) an interval around the point. And any point within $[a,b]$ is a limit point.</p>
2,051,308
<p>I would be happy if someone would help me with something I'm trying to prove for my homework assignment.</p> <p><strong>The question:</strong> let V be a vector space and U a subspace of V that is not equal to V, with U$\neq${0}. Let v be a vector from V. Prove that it is not possible that every vector from V\U (from V but not from U) is a scalar multiple of v.</p> <p>I thought about splitting the proof into 2 parts:</p> <p>1) if v is from U then it is easy because every vector from U is still in U after multiplying it by a scalar.</p> <p>2) if v is from V\U.... I am stuck here</p> <p>thanks!</p>
user115350
334,306
<p>You have already proved (1). For (2), let's use contradiction: suppose $v \in V\setminus U$ and that every vector of $V\setminus U$ is a scalar multiple of $v$. Pick any nonzero $u \in U$ (possible since $U\neq\{0\}$). Then $v+u \in V\setminus U$ (otherwise $v=(v+u)-u \in U$), so $v+u=\lambda v$ for some scalar $\lambda$, i.e. $u=(\lambda - 1)v$. Since $u\neq 0$, we have $\lambda\neq 1$, hence $v=\frac{1}{\lambda-1}\,u \in U$, which contradicts $v \notin U$. So you can complete your proof. You are very close to it.</p>
2,051,308
egreg
62,967
<p>It's correct that $v$ cannot belong to $U$. But there's a different way. Suppose such a $v$ exists. Then $$ V=U\cup\langle v\rangle $$ (where $\langle v\rangle$ denotes the subspace spanned by $v$).</p> <p>It is well known that if $U_1$ and $U_2$ are subspaces of a vector space, then $U_1\cup U_2$ is a subspace if and only if $U_1\subseteq U_2$ or $U_2\subseteq U_1$.</p>
396,713
<p>I'm attempting to derive a formula for the sum of all elements of an arithmetic series, given the first term, the limiting term (the number that no number in the sequence is higher than), and the difference between each term; however, I am unable to find one that works. Here is what I have so far:</p> <p>Let $a_0$ be the first term, $a_n$ be the last term, and $x$ be the difference between each term.</p> <p>If we have the sequence $a_0, a_0 + x, a_0 + 2x ... a_n$, then we can add the first and last term, the second and second last term, etc., to quickly find the sum based on the number of terms. Thus, the sum of these terms is $n\frac{(a_n + a_0)}{2}$, where $n$ is the total number of terms.</p> <p>The number of terms must be the number of times the first term was increased by $x$ plus one (to account for the first term), and so $n = \frac{(a_n - a_0)}{x} + 1$.</p> <p>Thus, the sum is equal to $(\frac{(a_n - a_0)}{x} + 1)\frac{(a_n + a_0)}{2}$.</p> <p>However, I am unable to integrate the limiting term in the place of $a_n$; any ideas for how to make this work?</p> <p>In case my definition of a limiting term is ambiguous; an example would be if there was a set $3, 6, 9$; I'd like to be able to replace $a_n$ (which is $9$ in this case) with any number above $9$, and below $12$, and still get the same answer.</p>
Peter Košinár
77,812
<p>Essentially, you want to "round" $a_n$ down to the greatest number which can be expressed as $a_0 + kx$ for some integer $k$. One way to do so is to replace $a_n$ by $(a_0+x\lfloor\frac{a_n-a_0}{x}\rfloor)$ in your summation formula.</p>
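A minimal sketch of this in code (my own illustration; the function name and the integer-inputs assumption are mine):

```python
def arithmetic_sum_with_limit(a0, x, limit):
    """Sum a0 + (a0+x) + (a0+2x) + ... up to the greatest term not exceeding `limit`."""
    k = (limit - a0) // x    # floor((a_n - a0)/x): whole steps that fit under the limit
    an = a0 + k * x          # the limiting term "rounded down" onto the sequence
    n = k + 1                # number of terms
    return n * (a0 + an) // 2

print(arithmetic_sum_with_limit(3, 3, 9))   # 18 = 3 + 6 + 9
print(arithmetic_sum_with_limit(3, 3, 11))  # 18 again: 11 rounds down to 9
```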
29,155
<p>Do we have a pullback operation on singular simplicial chains? That is, if f:X-->Y is a continuous map between topological spaces X and Y, and C is a singular simplicial chain on Y, do we have a singular simplicial chain on X which is the pullback of C along f?</p>
Gregory Arone
6,668
<p>For a general map, there is no such pullback operation, but there are things you can do in special cases. For example, if <span class="math-container">$f\colon X\to Y$</span> is a finite cover, there is a chain homomorphism <span class="math-container">$C(Y)\to C(X)$</span> that sends a singular simplex in <span class="math-container">$Y$</span> to the sum of its lifts in <span class="math-container">$X$</span>. This induces the transfer homomorphism in homology.</p> <p>There are more general versions of the transfer that can be realized on chain level. See for example the paper by Hans Munkholm: <a href="http://dx.doi.org/10.1007/BF01214044" rel="nofollow noreferrer"><strong>A chain level transfer homomorphism for PL fibrations</strong></a>, Math. Z. 166, 183-186 (1979). <a href="https://zbmath.org/?q=an:0404.55006" rel="nofollow noreferrer">ZBL0404.55006</a>.</p>
1,746,782
<p>This is what I've done:</p> <p>Let $s &lt; t$ and $F_t$ be a filtration adapted to $W(t)$ $$E[e^{t/2}\cos(W(t))|F_s] = e^{t/2} E[\cos(W(t)) - \cos(W(s)) + \cos(W(s))|F_s]$$ $$= e^{t/2} [E[\cos(W(t)) - \cos(W(s))|F_s] + \cos(W(s))]$$ Because of the independence of the increments: $$= e^{t/2} [E[\cos(W(t)) - \cos(W(s))] + \cos(W(s))]$$ This is where I'm stuck. I don't know how to calculate $E[\cos(W(t)) - \cos(W(s))]$. </p> <p>Following <a href="https://math.stackexchange.com/q/1596521/154124">the suggestion from Did in the comment</a> in this question, I can do this: $$E[\cos(W(t)) - \cos(W(s))] = \frac{1}{\sqrt{2\pi (t-s)}}\int_{\mathbb{R}}(\cos(x) - \cos(y)).e^{-(x-y)^2/2(t-s)}dx$$</p> <p>But I don't know if it is correct.</p> <p>Edit: OMG! I made an horrible mistake. The increments $W(t) - W(s)$ are independent, not $\cos(W(t)) - \cos(W(s))$. With the hints from <a href="https://math.stackexchange.com/a/143190">this answer</a> and the one from Siron everything got clearer now.</p>
Community
-1
<p>The function $u(t,x)=e^{t/2}\,\cos(x)$ satisfies the heat equation ${d\over dt}u+{1\over 2}\Delta_x u=0,$ so that, by Ito's formula, $u(t,W_t)=e^{t/2}\,\cos(W_t)$ is a martingale. </p>
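The heat-equation claim behind this argument is easy to verify symbolically (my own check):

```python
import sympy as sp

t, x = sp.symbols('t x')
u = sp.exp(t / 2) * sp.cos(x)

# u solves du/dt + (1/2) d^2u/dx^2 = 0, the condition in the answer
print(sp.simplify(sp.diff(u, t) + sp.Rational(1, 2) * sp.diff(u, x, 2)))  # 0
```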
280,156
<p>I have code as below:</p> <pre><code>countpar = 10;
randomA = RandomReal[{1, 10}, {countpar, countpar}];
randomconst = RandomInteger[{0, 1}, {countpar, 1}];
For[i = 1, i &lt; countpar + 1, i++,
 If[randomconst[[i, 1]] != 0,
  randomA[[All, i]] = 0.;
  randomA[[i, All]] = 0.;
  randomA[[i, i]] = 1;
  ];
 ];
</code></pre> <p>The problem is that when I change countpar to, say, countpar=1000, the computational time of the For loop increases dramatically. Is there a way to decrease this time, from an expert eye?</p> <p>Best Regards,</p> <p>Ahmet</p>
Bob Hanlon
9,362
<pre><code>x = 2; n = 2; </code></pre> <p>The general solution is</p> <pre><code>Clear[y]; y[m_] = RSolveValue[{y[ m] == (y[m - 1]^(x - 1) + n)/(y[m - 1]^(x - 1) + y[m - 1]^(x - 2)), y[0] == 1}, y[m], m] // Simplify (* ((1 - Sqrt[2])^m (-2 + Sqrt[2]) + (1 + Sqrt[2])^ m (2 + Sqrt[2]))/(-(1 - Sqrt[2])^(1 + m) + (1 + Sqrt[2])^(1 + m)) *) y /@ Range[0, 8] // Simplify (* {1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408, 1393/985} *) ymax = 6; Show[ Plot[Re@y[m], {m, 0, ymax}, PlotRange -&gt; All], DiscretePlot[y[m], {m, 0, ymax}, Filling -&gt; None, PlotStyle -&gt; Red]] </code></pre> <p><a href="https://i.stack.imgur.com/kQGrF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kQGrF.png" alt="enter image description here" /></a></p> <p>The real function passing through the points is</p> <pre><code>y2[m_] = Re[y[m]] // ComplexExpand // Simplify (* ((-1 + Sqrt[2])^(2 m) (-4 + 3 Sqrt[2]) - (1 + Sqrt[2])^( 2 m) (4 + 3 Sqrt[2]))/((-1 + Sqrt[2])^( 2 m) (-3 + 2 Sqrt[2]) - (1 + Sqrt[2])^(2 m) (3 + 2 Sqrt[2]) - 2 Cos[m π]) *) And @@ Table[y[m] == y2[m] // Simplify, {m, 0, 20}] (* True *) </code></pre>
3,882,457
<p><a href="https://arxiv.org/pdf/quant-ph/0208163" rel="nofollow noreferrer">These notes</a> are a great introduction to deformation quantization but I failed to check the validity of the statement p.9, right before (5.18).</p> <p><strong>Context:</strong> let <span class="math-container">$(\mathcal{A},+,\mu)$</span> be an algebra, <span class="math-container">$\mu:\mathcal{A}\times \mathcal{A} \to \mathcal{A}$</span> standing for multiplication. Deformation consists in considering a family (parametrized by <span class="math-container">$\nu$</span> in a yet to be chosen space) of products on <span class="math-container">$\mathcal{A}[[\nu]]$</span> (formal power series with coefficients in <span class="math-container">$\mathcal{A}$</span>) generically given by <span class="math-container">$$ \forall\ f,g\in \mathcal{A},\quad \mu_{\nu}(f,g) := \mu(f,g) + \sum_{k=1}^{+\infty} \nu^k \mu_k (f,g) \label{1}\tag{1}$$</span> i.e. by a family of bilinear maps <span class="math-container">$\mu_k:\mathcal{A}\times \mathcal{A} \to \mathcal{A}$</span> satisfying some conditions and extended to elements <span class="math-container">$F, G\in \mathcal{A}[[\nu]]$</span> of the form <span class="math-container">$F=\sum_{k=1}^{+\infty} \nu^k f_{k},\ f_k \in \mathcal{A}$</span> by <span class="math-container">$\mathbb{K}[[\nu]]$</span>-bilinearity. (The idea behind formal power series, as far as I understand, is to ignore convergence issues but still have a structure where one can &quot;compare terms of the same degree in <span class="math-container">$\nu$</span>&quot;.)</p> <p>Two of these <strong>star-products</strong> are equivalent if there exists an invertible algebra isomorphism (transition map) <span class="math-container">$T:(\mathcal{A}[[\nu]],+,\mu_{\nu}) \longrightarrow (\mathcal{A}[[\nu]],+,\rho_{\nu})$</span>, i.e. 
a map such that <span class="math-container">$$ \forall\ F,G \in \mathcal{A}[[\nu]], \quad T\big(\mu_{\nu}(F,G)\big)= \rho_{\nu}(T(F),T(G))$$</span></p> <hr /> <p><strong>Question:</strong> Let <span class="math-container">$\mathcal{A}=\mathcal{C}^{\infty}(\mathbb{R}^2)$</span> and denote <span class="math-container">$(a, \overline{a})$</span> or <span class="math-container">$(b, \overline{b}$</span>) the variables of the functions. I want to check that the <strong>normal product</strong> ( (5.4) p.8; with the more usual notation for products) <span class="math-container">$$f \ast_N g := \sum_{k=0}^{+\infty} \frac{\hbar^k}{k!} \frac{\partial^k f}{\partial a^k} \frac{\partial^k g}{\partial \overline{a}^k} = f\, e^{\hbar \overleftarrow{\partial}_a \overrightarrow{\partial}_{\overline{a}}}\, g \label{2}\tag{2}$$</span> is equivalent to the <strong>Moyal product</strong> ((5.15) p.9, one can consider <span class="math-container">$\hbar$</span> as the deformation parameter... although there is usually the factor as in (\ref{5})) <span class="math-container">$$ f \ast_M g := \sum_{k=0}^{+\infty} \left(\frac{\hbar}{2} \right)^k \frac{1}{k!} \left. \left( \frac{\partial }{\partial a} \frac{\partial }{\partial \overline{b}} - \frac{\partial }{\partial \overline{a}} \frac{\partial }{\partial b}\right)^k f\big(a, \overline{a}\big) g\big(b, \overline{b}\big) \right|_{\genfrac{}{}{0pt}{1}{a=b}{\overline{a}=\overline{b}}} = f\, e^{\frac{\hbar}{2}\big( \overleftarrow{\partial}_a \overrightarrow{\partial}_{\overline{a}} - \overleftarrow{\partial}_{\overline{a}} \overrightarrow{\partial}_{a} \big)}\, g \label{3}\tag{3}$$</span> i.e. 
<span class="math-container">$$ T\big(f \ast_N g \big) = T(f) \ast_M T(g)\quad \text{with}\quad T = \genfrac{}{}{0pt}{0}{"}{}\!\!\exp\left(-\frac{\hbar}{2} \frac{\partial^2}{\partial a\, \partial \overline{a}} \right)\genfrac{}{}{0pt}{0}{"}{} \label{4}\tag{4}$$</span></p> <p><strong>Remarks:</strong></p> <ul> <li>In fact I already checked (\ref{4}) up to second order in <span class="math-container">$\hbar$</span> but it did not work at order 3 (although I'm not sure as the calculations were quite tedious...). It was not a priori clear that (\ref{4}) hold, one could have the other way round <span class="math-container">$ T\big(f \ast_M g \big) = T(f) \ast_N T(g)$</span> instead but this seems to fail at order 1. I only want to check the first few orders, but I would gladly take a proof for all order. I will soon write what I have done, but as I mentionned, it's tedious.</li> <li>The Moyal product is first defined in the text (3.5-3.6) p.5 by <span class="math-container">$$ f \ast_M g := \sum_{k=0}^{+\infty} \frac{\nu^k}{k!} \underbrace{\left(\frac{\partial }{\partial q_1} \frac{\partial }{\partial p_{2}} - \frac{\partial }{\partial p_{1}} \frac{\partial }{\partial q_2} \right)^k f(q_1,p_1)\, g(q_2,p_2)}_{\mu_k(f,g)}\left.\vphantom{\frac{T}{T}}\right|_{\genfrac{}{}{0pt}{1}{q_1=q_2}{p_{1}=p_{2}}}\quad \text{with}\quad \nu = \frac{i\hbar}{2} \label{5}\tag{5}$$</span> and it does coincide with (\ref{3}) via (these are the correct <span class="math-container">$\sqrt{2}$</span> factors...) <span class="math-container">$$ \left\lbrace \begin{aligned} a &amp; := \frac{1}{\sqrt{2}} \left(q + i\hspace{.5pt} p \right) \\ \overline{a} &amp; := \frac{1}{\sqrt{2}} \left( q - i\hspace{.5pt} p \right) \end{aligned} \right. 
\enspace \Longrightarrow\quad \left\lbrace \begin{aligned} \frac{\partial}{\partial\hspace{.7pt} q} &amp; = \frac{\partial\hspace{.7pt} a}{\partial\hspace{.7pt} q} \frac{\partial}{\partial\hspace{.7pt} a} + \frac{\partial\hspace{.7pt} \overline{a}}{\partial \hspace{.7pt} q} \frac{\partial}{\partial\hspace{.7pt} \overline{a}} =\frac{1}{\sqrt{2}} \left( \frac{\partial}{\partial\hspace{.7pt} a} + \frac{\partial}{\partial\hspace{.7pt} \overline{a}} \right) \\ \frac{\partial}{\partial\hspace{.7pt} p} &amp; = \frac{\partial\hspace{.7pt} a}{\partial\hspace{.7pt} p} \frac{\partial}{\partial\hspace{.7pt} a} + \frac{\partial\hspace{.7pt} \overline{a}}{\partial \hspace{.7pt} p} \frac{\partial}{\partial\hspace{.7pt} \overline{a}} = \frac{i}{\sqrt{2}} \left( \frac{\partial}{\partial\hspace{.7pt} a} - \frac{\partial}{\partial\hspace{.7pt} \overline{a}}\right) \end{aligned} \right. $$</span></li> <li>To make (\ref{3}) (same for (\ref{5})) more explicit, let me write the <span class="math-container">$k=2$</span> term: (notation <span class="math-container">$\displaystyle \partial_a=\frac{\partial}{\partial a},\ \partial_{ab}= \frac{\partial^2}{\partial a \partial b}$</span> etc.) <span class="math-container">$$\begin{split} \mu_2(f,g) &amp;= \Big(\partial_{aa\overline{b}\overline{b}} - 2 \partial_{a\overline{a}b\overline{b}} + \partial_{\overline{a}\overline{a}bb} \Big) f(a,\overline{a})g(b,\overline{b})\left.\vphantom{\frac{T}{T}}\right|_{\genfrac{}{}{0pt}{1}{a=b}{\overline{a}=\overline{b}}}\\ &amp;= (\partial_{aa}f)(\partial_{\overline{a}\overline{a}}g) - 2 (\partial_{a\overline{a}}f)(\partial_{a\overline{a}}g) + (\partial_{\overline{a}\overline{a}}f) (\partial_{aa}g) \end{split} \label{6}\tag{6}$$</span> One can also use the <span class="math-container">$\overleftarrow{\partial}$</span> or <span class="math-container">$\overrightarrow{\partial}$</span> notations or a tensorial notation.</li> </ul>
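The change of variables is easy to misremember; note that the second relation comes out as <span class="math-container">$\frac{i}{\sqrt2}\big(\partial_a - \partial_{\overline a}\big)$</span>, with no extra <span class="math-container">$i$</span> inside the bracket. A quick symbolic sanity check with sympy on an arbitrary test function (the symbols `A`, `Ab` are just stand-ins for <span class="math-container">$a$</span>, <span class="math-container">$\overline a$</span>):

```python
import sympy as sp

q, p = sp.symbols('q p')
A, Ab = sp.symbols('A Abar')          # stand-ins for a and abar

a = (q + sp.I * p) / sp.sqrt(2)
abar = (q - sp.I * p) / sp.sqrt(2)

G = A**3 * Ab**2 + sp.exp(A * Ab)     # arbitrary test function of (a, abar)
G_qp = G.subs({A: a, Ab: abar})

# d/dq = (1/sqrt2)(d/da + d/dabar)
lhs_q = sp.diff(G_qp, q)
rhs_q = (sp.diff(G, A) + sp.diff(G, Ab)).subs({A: a, Ab: abar}) / sp.sqrt(2)
assert sp.simplify(lhs_q - rhs_q) == 0

# d/dp = (i/sqrt2)(d/da - d/dabar)
lhs_p = sp.diff(G_qp, p)
rhs_p = sp.I * (sp.diff(G, A) - sp.diff(G, Ab)).subs({A: a, Ab: abar}) / sp.sqrt(2)
assert sp.simplify(lhs_p - rhs_p) == 0
```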
Noix07
92,038
<p>It is in fact possible to prove<br /> <span class="math-container">$$\begin{split} T\big(f \ast_N g \big) = T(f) &amp;\ast_M T(g)\quad \text{with}\quad T = \genfrac{}{}{0pt}{0}{"}{}\!\!\exp\left(-\frac{\hbar}{2} \frac{\partial^2}{\partial a\, \partial \overline{a}} \right)\genfrac{}{}{0pt}{0}{"}{} \\ \Longleftrightarrow \quad e^{-\frac{\hbar}{2} \partial_{ a\, \overline{a}}} \left(f\, e^{\hbar \overleftarrow{\partial}_a \overrightarrow{\partial}_{\overline{a}} }\, g \right) &amp; = \left(e^{-\frac{\hbar}{2} \partial_{ a\, \overline{a}}} f\right) e^{\frac{\hbar}{2}\big( \overleftarrow{\partial}_a \overrightarrow{\partial}_{\overline{a}} - \overleftarrow{\partial}_{\overline{a}} \overrightarrow{\partial}_{a} \big)} \left(e^{-\frac{\hbar}{2} \partial_{ a\, \overline{a}}} g\right) \end{split}\tag{4}$$</span> by brute force. The idea is to recognize the simplification that occurs for the exponents of exponential functions suggested by Cosmas, but in terms of series. On the test functions <span class="math-container">$f = e^{ma + n\overline{a}}$</span>, <span class="math-container">$g = e^{ka + s\overline{a}}$</span>: <span class="math-container">$$\begin{split} \text{L.h.s.} &amp;= e^{-\frac{\hbar}{2} \partial_{ a\, \overline{a}}} \left(f\, e^{\hbar m s }\, g \right) = e^{-\frac{\hbar}{2} \partial_{ a\, \overline{a}}} \left( e^{\big(ma + n\overline{a} \big) + \hbar m s + \big(ka + s\overline{a} \big)} \right)\\ &amp; = e^{-\frac{\hbar}{2} (m+k)(n+s)} \left(f\, e^{\hbar m s }\, g \right)\\ &amp;= \left( e^{-\frac{\hbar}{2} m(n+s)} f\right) e^{\hbar m s } \left(\,e^{-\frac{\hbar}{2} k(s+n)} g \right) \end{split} \label{4L}\tag{4L}$$</span></p> <p><span class="math-container">$$ \begin{split} \text{R.h.s.} &amp;= \left(e^{-\frac{\hbar}{2} mn\ + \big(ma + n\overline{a}\big)} \right) e^{\frac{\hbar}{2}\big( \overleftarrow{\partial}_a \overrightarrow{\partial}_{\overline{a}} - \overleftarrow{\partial}_{\overline{a}} \overrightarrow{\partial}_{a} \big)} \left(e^{-\frac{\hbar}{2} ks\ + \big(ka + s\overline{a}\big)}\right) \\ &amp;= \left(e^{-\frac{\hbar}{2} mn\ + \big(ma + n\overline{a}\big)} \right) 
e^{\frac{\hbar}{2}\big( ms - nk \big)} \left(e^{-\frac{\hbar}{2} ks\ + \big(ka + s\overline{a}\big)}\right) \end{split} \label{4R}\tag{4R}$$</span> The simplification for the exponents to go from L.h.s. to R.h.s. is <span class="math-container">$-\frac{\hbar}{2} ms + \hbar m s -\frac{\hbar}{2} kn = \frac{\hbar}{2} (ms-nk) $</span>.</p> <p><em>Remark: One could probably exploit this argument by decomposing a certain class of functions in terms of exponentials, probably with a Laplace transform in analogy with the case of complex exponentials: a tempered distribution can be written <span class="math-container">$u =\mathcal{F}^{-1}(\hat{u})=\genfrac{}{}{0pt}{0}{"}{}\!\!\frac{1}{(2\pi)^2}\int_{\mathbb{R}^2}\hat{u}(k,\overline{k})\, e^{ika +i\overline{k}\overline{a}}\, dk\, d\overline{k}\, \genfrac{}{}{0pt}{0}{"}{}$</span>.</em></p> <p>Notice that Leibniz' rule can be written: <span class="math-container">$\partial^j_{a}(fg) =f \left(\overleftarrow{\partial}_a +\overrightarrow{\partial}_a \right)^j g$</span> and similarly with two variables: <span class="math-container">$\partial^j_{a\overline{a}}(fg)=f \left(\overleftarrow{\partial}_a +\overrightarrow{\partial}_a \right)^j \left(\overleftarrow{\partial}_{\overline{a}} +\overrightarrow{\partial}_{\overline{a}} \right)^j g$</span>. 
Hence <span class="math-container">$$\begin{split} \text{L.h.s.} &amp;=\sum_{j=0}^{+\infty} \frac{1}{j!} \left(-\frac{\hbar}{2} \partial_{ a\, \overline{a}}\right)^j \left(\sum_{k=0}^{+\infty} \frac{\hbar^k}{k!} \partial^k_a f\, \partial^k_{\overline{a}} g\right)\\ &amp;= \sum_{j=0}^{+\infty} \left(-\frac{\hbar}{2}\right)^j \frac{1}{j!} f\left(\overleftarrow{\partial}_a +\overrightarrow{\partial}_a \right)^j \left(\overleftarrow{\partial}_{\overline{a}} +\overrightarrow{\partial}_{\overline{a}} \right)^j \left(\sum_{k=0}^{+\infty} \frac{\hbar^k}{k!} \overleftarrow{\partial}^k_a \, \overrightarrow{\partial}_{\overline{a}} ^k \right) g \\ &amp;= f \left( \sum_{j=0}^{+\infty} \left(-\frac{\hbar}{2}\right)^j \frac{1}{j!} \left(\sum_{0\leq i\leq j} {j\choose i} \overleftarrow{\partial}_a^{j-i} \overrightarrow{\partial}_a^i \right) \left(\sum_{k=0}^{+\infty} \frac{\hbar^k}{k!} \overleftarrow{\partial}^k_a \, \overrightarrow{\partial}_{\overline{a}} ^k \right) \left(\overleftarrow{\partial}_{\overline{a}} +\overrightarrow{\partial}_{\overline{a}} \right)^j \right) g \\ &amp;= f \left( \sum_{\genfrac{}{}{0pt}{1}{j,k=0}{i+l=j}}^{+\infty} \left(-\frac{\hbar}{2}\right)^i \!\! \frac{1}{i!} \overleftarrow{\partial}_a^{i}\ \left(-\frac{\hbar}{2}\right)^l\!\! \frac{1}{l!} \overrightarrow{\partial}_a^l \left( \frac{\hbar^k}{k!} \overleftarrow{\partial}^k_a \, \overrightarrow{\partial}_{\overline{a}} ^k \right) \left(\overleftarrow{\partial}_{\overline{a}} +\overrightarrow{\partial}_{\overline{a}} \right)^{i+l} \right) g \\ &amp;=f \underbrace{\left( \sum_{i=0}^{+\infty} \left(-\frac{\hbar}{2}\right)^i \!\! 
\frac{1}{i!} \overleftarrow{\partial}_a^{i} \left(\overleftarrow{\partial}_{\overline{a}} +\overrightarrow{\partial}_{\overline{a}} \right)^i \right)}_{"e^{-\frac{\hbar}{2} m(n+s)}"} \underbrace{\left(\sum_{k=0}^{+\infty} \frac{\hbar^k}{k!} \overleftarrow{\partial}^k_a \, \overrightarrow{\partial}_{\overline{a}} ^k \right)}_{"e^{\hbar ms}"} \underbrace{\left( \sum_{l=0}^{+\infty} \left(-\frac{\hbar}{2}\right)^l\!\! \frac{1}{l!} \overrightarrow{\partial}_a^l \left(\overleftarrow{\partial}_{\overline{a}} +\overrightarrow{\partial}_{\overline{a}} \right)^{l} \right)}_{"e^{-\frac{\hbar}{2} k(s+n)}"} g \end{split}$$</span> Now it suffices to check the equivalent of <span class="math-container">$e^{-\frac{\hbar}{2} m(n+s)} = e^{-\frac{\hbar}{2} mn} e^{-\frac{\hbar}{2} ms},\ e^{-\frac{\hbar}{2} ms} e^{\hbar ms}= e^{\frac{\hbar}{2} ms}$</span> and <span class="math-container">$e^{\frac{\hbar}{2} ms} e^{-\frac{\hbar}{2} nk}= e^{\frac{\hbar}{2} (ms-nk)}$</span>. Since they are all similar, let us just write the first one: <span class="math-container">\begin{equation} \begin{split} &amp; f \left( \sum_{i=0}^{+\infty} \left(-\frac{\hbar}{2}\right)^i \!\! \frac{1}{i!} \overleftarrow{\partial}_a^{i} \left(\overleftarrow{\partial}_{\overline{a}} +\overrightarrow{\partial}_{\overline{a}} \right)^i \right) g\\ &amp;= f \left( \sum_{\genfrac{}{}{0pt}{1}{i=0}{0\leq h\leq i}}^{+\infty} \left(-\frac{\hbar}{2}\right)^{i-h+h} \! \overleftarrow{\partial}_a^{i-h+h} \, \frac{1}{(i-h)!} \overleftarrow{\partial}^{i-h}_{\overline{a}}\, \frac{1}{h!} \overrightarrow{\partial}_{\overline{a}}^h \right) g\\ &amp;= f \left( \sum_{\genfrac{}{}{0pt}{1}{i=0}{h +l =i}}^{+\infty} \left(-\frac{\hbar}{2}\right)^{l} \! \frac{1}{l!} \overleftarrow{\partial}^{l}_{a\overline{a}}\ \left(-\frac{\hbar}{2}\right)^{h} \! \frac{1}{h!} \overleftrightarrow{\partial}^h_{\overline{a}} \right) g\\ &amp;= f \left( \sum_{l=0}^{+\infty} \left(-\frac{\hbar}{2}\right)^{l} \! 
\frac{1}{l!} \overleftarrow{\partial}^{l}_{a\overline{a}}\right) \left(\sum_{h=0}^{+\infty} \left(-\frac{\hbar}{2}\right)^{h} \! \frac{1}{h!} \overleftarrow{\partial}^h_a \overrightarrow{\partial}_{\overline{a}}^h \right) g \end{split} \end{equation}</span></p> <hr /> <p>I also thought about the fact that a linear map from certain classes of functions of <span class="math-container">$\mathbb{R}^n$</span> (different versions, e.g. <span class="math-container">$\mathcal{C}^{\infty}_c(\mathbb{R}^n)$</span>) which commutes with translations is a convolution operator (there is also a continuity condition, so one has to specify the topology on the space of functions, and this will constrain the convolution operator: convolution with what kind of function or distribution). Unfortunately, a quick computation shows that it would be a convolution with the Fourier transform of <span class="math-container">$e^{\frac{\hbar}{2}k^2}$</span> which does not make sense... <span class="math-container">$$ \begin{split} T(f) &amp;= e^{-\frac{\hbar}{2} \partial_{ a\, \overline{a}}} \frac{1}{(2\pi)^2} \int_ {\mathbb{R}^2} \hat{f}(k,\overline{k})\, e^{ika +i\overline{k}\overline{a}}\, dk\, d\overline{k} = \frac{1}{(2\pi)^2} \int_ {\mathbb{R}^2} \hat{f}(k,\overline{k})\,e^{\frac{\hbar}{2} k\overline{k}}\, e^{ika +i\overline{k}\overline{a}}\, dk\, d\overline{k} \\ &amp;= \frac{1}{(2\pi)^2} \int_ {\mathbb{R}^2} \left(\int_ {\mathbb{R}^2} f(b,\overline{b})\,e^{-ikb -i\overline{k}\overline{b}} db\, d\overline{b}\right) e^{\frac{\hbar}{2} k\overline{k}}\, e^{ika +i\overline{k}\overline{a}}\, dk\, d\overline{k}\\ &amp;= \frac{1}{(2\pi)^2} \int_ {\mathbb{R}^2} f(b,\overline{b}) \left(\int_ {\mathbb{R}^2} e^{\frac{\hbar}{2} k\overline{k}}\, e^{ik(a-b) +i\overline{k}(\overline{a}-\overline{b})}\, dk\, d\overline{k}\right) db\, d\overline{b} \end{split}$$</span> Writing things differently also leads to the same result: <span class="math-container">$$\begin{split} T(f) &amp;= 
\mathcal{F}^{-1}\circ\mathcal{F}\left(\sum_{j=0}^{+\infty} \frac{1}{j!} \left(-\frac{\hbar}{2} \partial_{ a\, \overline{a}}\right)^j f\right) \\ &amp;=\sum_{j=0}^{+\infty} \frac{1}{j!} \left(-\frac{\hbar}{2}\right)^j (ik)^j (i\overline{k})^j f \end{split}$$</span></p> <p><strong>Way out:</strong> <span class="math-container">$T$</span> seems to be unbounded but one can always imagine that its &quot;inverse&quot; (we didn't define its domain and target space...) could be convolution by a Gaussian. Another possibility would be to take a modified Fourier transform, <span class="math-container">$ \hat{f}(k,\overline{k}) := \int_ {\mathbb{R}^2} f(a,\overline{a})\, e^{- ika +i\overline{k}(\overline{a}-\overline{b})}\, da\, d\overline{a}$</span> or as remarked, take a Laplace transform. cf. also <a href="https://en.wikipedia.org/wiki/Weierstrass_transform#The_inverse_transform" rel="nofollow noreferrer">Inverse Weierstrass transform</a></p>
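The exponent bookkeeping in (4L)–(4R) can be verified mechanically. A small sympy sketch: on the exponential test functions <span class="math-container">$f=e^{ma+n\overline a}$</span>, <span class="math-container">$g=e^{ka+s\overline a}$</span>, each operator just multiplies in a number, and the two sides must collect the same total exponent:

```python
import sympy as sp

hbar, m, n, k, s = sp.symbols('hbar m n k s')

# Extra exponent collected on the left: T applied to f *_N g, as in (4L).
lhs = -hbar/2 * (m + k) * (n + s) + hbar * m * s

# Extra exponent on the right: T(f) and T(g) each contribute -hbar/2 times
# the product of their own coefficients, the Moyal kernel gives hbar/2 (ms - nk).
rhs = -hbar/2 * m * n + hbar/2 * (m*s - n*k) - hbar/2 * k * s

assert sp.expand(lhs - rhs) == 0
```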
1,384,947
<p>I'm trying to figure out when numbers reach "periodicity" given known values. I've included an example below with image:</p> <p>I have known sizes (<em>100, 75, and 50</em>) that I would like to know how many times I would need to repeat each item for all the sizes to line up or be periodic. Does anyone know of a formula for this or how I can go about figuring this out?</p> <pre><code>As you can see to reach periodicity: I need to repeat 100 3 times I need to repeat 75 4 times I need to repeat 50 6 times </code></pre> <p><a href="https://i.stack.imgur.com/1WeEp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1WeEp.jpg" alt="image"></a></p> <p><strong>PS: This is just a simple example the numbers could be decimals like 1.29. and include several more numbers. I will also be converting the formula to octave which is like matlab.</strong></p>
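What the picture shows is the least common multiple of the sizes: everything lines up at lcm(100, 75, 50) = 300, and each repeat count is 300 divided by the size. A Python sketch (Octave has an `lcm` function that works similarly; decimal sizes are scaled to integers first via exact fractions):

```python
from fractions import Fraction
from math import lcm

def period_and_repeats(sizes):
    """Common lining-up length and the repeat count of each size.

    Decimal sizes (e.g. 1.29) are handled exactly via fractions.
    """
    fracs = [Fraction(str(s)) for s in sizes]
    scale = lcm(*(f.denominator for f in fracs))          # clear denominators
    period = Fraction(lcm(*(int(f * scale) for f in fracs)), scale)
    return period, [int(period / f) for f in fracs]

period, repeats = period_and_repeats([100, 75, 50])
assert period == 300 and repeats == [3, 4, 6]   # matches the example above
```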
CopperKettle
126,401
<p>Using Anurag A's <a href="https://math.stackexchange.com/questions/1384945/a-puzzling-step-in-a-solution-for-find-sinx-and-cosx-if-a-sinxb#comment2820024_1384945">hint</a>, </p> <p>$$\sin(x-\phi)=\pm\sqrt{1-\cos^2(x-\phi)}$$ Since $$\left(\frac{a}{\sqrt{a^2+b^2}}\right)^2+\left(\frac{b}{\sqrt{a^2+b^2}}\right)^2=1,$$ $$\sin(x-\phi)=\pm\sqrt{\left(\frac{a}{\sqrt{a^2+b^2}}\right)^2+\left(\frac{b}{\sqrt{a^2+b^2}}\right)^2-\left(\frac{c}{\sqrt{a^2+b^2}}\right)^2};$$ $$\sin(x-\phi)=\pm\frac{\sqrt{a^2+b^2-c^2}}{\sqrt{\left(\sqrt{a^2+b^2}\right)^2}}=\pm\frac{\sqrt{a^2+b^2-c^2}}{\sqrt{a^2+b^2}}$$</p>
1,384,947
<p>I'm trying to figure out when numbers reach "periodicity" given known values. I've included an example below with image:</p> <p>I have known sizes (<em>100, 75, and 50</em>) that I would like to know how many times I would need to repeat each item for all the sizes to line up or be periodic. Does anyone know of a formula for this or how I can go about figuring this out?</p> <pre><code>As you can see to reach periodicity: I need to repeat 100 3 times I need to repeat 75 4 times I need to repeat 50 6 times </code></pre> <p><a href="https://i.stack.imgur.com/1WeEp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1WeEp.jpg" alt="image"></a></p> <p><strong>PS: This is just a simple example the numbers could be decimals like 1.29. and include several more numbers. I will also be converting the formula to octave which is like matlab.</strong></p>
Obinoscopy
255,445
<p>$$\sin^2(x-\phi)+\cos^2(x-\phi)=1$$ $$\sin(x-\phi)=\pm\sqrt{1-\cos^2(x-\phi)}$$ But $\cos(x-\phi)=\frac{c}{\sqrt{a^2+b^2}}$</p> <p>Therefore $\sin(x-\phi)=\pm\sqrt{1-\cos^2(x-\phi)}=\pm\sqrt{1-(\frac{c}{\sqrt{a^2+b^2}})^2}=\pm\sqrt{1-\frac{c^2}{a^2+b^2}}$</p> <p>This gives: $\pm\sqrt{\frac{a^2+b^2}{a^2+b^2}-\frac{c^2}{a^2+b^2}}=\pm\sqrt{\frac{a^2+b^2-c^2}{a^2+b^2}}$</p> <p>I believe this is explanatory enough</p>
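A quick numerical check of the final formula. Here <span class="math-container">$\phi$</span> is taken with <span class="math-container">$\cos\phi = b/\sqrt{a^2+b^2}$</span>, <span class="math-container">$\sin\phi = a/\sqrt{a^2+b^2}$</span>, so that <span class="math-container">$a\sin x + b\cos x = \sqrt{a^2+b^2}\,\cos(x-\phi) = c$</span>; because of the <span class="math-container">$\pm$</span> only <span class="math-container">$|\sin(x-\phi)|$</span> can be compared (the test values are arbitrary):

```python
import math

a, b, x = 3.0, 4.0, 0.7                # arbitrary test values
r = math.hypot(a, b)
phi = math.atan2(a, b)                 # a*sin(x) + b*cos(x) = r*cos(x - phi)
c = a * math.sin(x) + b * math.cos(x)

assert abs(r * math.cos(x - phi) - c) < 1e-12
assert abs(abs(math.sin(x - phi)) - math.sqrt(a*a + b*b - c*c) / r) < 1e-12
```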
2,563,966
<p>I'm reading a proof in a linear algebra book. It mentions $$p(x) -p(c)= (x - c) h(x),$$ where $c$ is a constant, and $p(x)$ and $h(x)$ are polynomials.</p> <p>Can we always factor $p(x) - p(c)$ in this way? </p> <p>Please give a proof.</p>
egreg
62,967
<p>If $R$ is a commutative ring, $f(x)$ is a polynomial in $R[x]$ and $c\in R$, then</p> <blockquote> <p>$f(x)=(x-c)g(x)$ for some $g(x)\in R[x]$ if and only if $f(c)=0$</p> </blockquote> <p>Indeed, long division of $f(x)$ by $x-c$ is possible because $x-c$ is monic; so $f(x)=(x-c)g(x)+r$, where $r\in R$. The conclusion is now easy.</p> <p>What can you say about $f(x)=p(x)-p(c)$?</p>
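The division argument can be checked mechanically for any concrete polynomial: long division of <span class="math-container">$p(x)-p(c)$</span> by the monic <span class="math-container">$x-c$</span> leaves remainder zero. A sympy sketch (the polynomial is an arbitrary example):

```python
import sympy as sp

x, c = sp.symbols('x c')
p = 3*x**4 - 2*x**2 + 7*x - 1      # an arbitrary test polynomial

# p(x) - p(c) vanishes at x = c, so division by the monic x - c is exact:
h, r = sp.div(p - p.subs(x, c), x - c, x)
assert sp.expand(r) == 0
assert sp.expand((x - c) * h) == sp.expand(p - p.subs(x, c))
```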
639,028
<blockquote> <p>Calculate the partial derivatives $f'_x, f'_y, f'_z$ where $f(x, y, z) = x^{\frac{y}{z}}.$</p> </blockquote> <p>I know I need to use the chain rule but I'm confused here for some reason.</p> <p>By <a href="http://tutorial.math.lamar.edu/Classes/CalcIII/ChainRule.aspx" rel="nofollow">this page</a>, the chain rule for $z = f(x, y), x = g(t), y = h(t)$ is:</p> <blockquote> <p>$\frac{\partial z}{\partial t} = \frac{\partial f}{\partial x} \frac{\partial x}{\partial t} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial t}$ </p> </blockquote> <p>I tried to find the partial derivatives using this formula but got confused.</p> <p>Can you please help me find $f'_x, f'_y, f'_z$? Exam tomorrow... :\</p>
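For the record, the three partials follow from writing <span class="math-container">$f = e^{\frac{y}{z}\ln x}$</span> and differentiating the exponent: <span class="math-container">$f'_x = \frac{y}{z}x^{y/z-1}$</span>, <span class="math-container">$f'_y = \frac{\ln x}{z}\,x^{y/z}$</span>, <span class="math-container">$f'_z = -\frac{y\ln x}{z^2}\,x^{y/z}$</span>. A sympy check of these expressions:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
f = x**(y/z)

fx = sp.diff(f, x)
fy = sp.diff(f, y)
fz = sp.diff(f, z)

# f = exp((y/z) * log x) turns all three into one-variable chain rules:
assert sp.simplify(fx - (y/z) * x**(y/z - 1)) == 0
assert sp.simplify(fy - f * sp.log(x) / z) == 0
assert sp.simplify(fz + f * y * sp.log(x) / z**2) == 0
```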
heropup
118,193
<p>It might be instructive to determine, for a given base $y &gt; 1$, those values $x &gt; 0$ such that $f_y(x) = \lceil \log_y \lceil x \rceil \rceil \ne g_y(x) = \lceil \log_y x \rceil$. First, it is easy to see from the fact that $\log$ is monotone increasing that $f_y(x) \ge g_y(x)$ for any base $y &gt; 1$. Furthermore, we note that if $f_y(x) = g_y(x) = n$, then both $\log_y \lceil x \rceil$ and $\log_y x$ must lie in the same interval $(n-1, n]$; that is to say, both $\lceil x \rceil$ and $x$ must lie in the interval $(y^{n-1}, y^n]$. Is this always true? Of course not: if $y$ is not an integer, then it is easy to see that $x \in (\lfloor y^n \rfloor, y^n]$ implies $\lceil x \rceil &gt; y^n$.</p> <p>But what if $y$ <em>is</em> an integer? Then we almost get away, but remember there is the interval $x \in (0,1)$. In such a case, $\lceil x \rceil = 1$ and $f_y(x) = 0$, but $g_y(x)$ is not zero unless $x &gt; y^{-1}$. So we always have a counterexample to the asserted identity for $x \in (0, 1/y]$, regardless of whether $y$ is an integer.</p>
639,028
<blockquote> <p>Calculate the partial derivatives $f'_x, f'_y, f'_z$ where $f(x, y, z) = x^{\frac{y}{z}}.$</p> </blockquote> <p>I know I need to use the chain rule but I'm confused here for some reason.</p> <p>By <a href="http://tutorial.math.lamar.edu/Classes/CalcIII/ChainRule.aspx" rel="nofollow">this page</a>, the chain rule for $z = f(x, y), x = g(t), y = h(t)$ is:</p> <blockquote> <p>$\frac{\partial z}{\partial t} = \frac{\partial f}{\partial x} \frac{\partial x}{\partial t} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial t}$ </p> </blockquote> <p>I tried to find the partial derivatives using this formula but got confused.</p> <p>Can you please help me find $f'_x, f'_y, f'_z$? Exam tomorrow... :\</p>
Christian Blatter
1,303
<p>The statement only makes sense when $y&gt;1$ and $x&gt;0$.</p> <p>When $0&lt;x\leq{1\over y}$ the statement is <strong>wrong</strong>: The left side is $=0$ and the right side $\leq-1$.</p> <p>When ${1\over y}&lt;x\leq1$ then both sides are $=0$. So from now on we assume $x&gt;1$.</p> <p>Let $y&gt;1$ and $t&gt;1$ be real, and let $n$ be integer. Then $$\lceil\log_y(t)\rceil=n\quad\Leftrightarrow\quad n-1&lt;\log_y(t)\leq n \quad\Leftrightarrow\quad y^{n-1}&lt; t\leq y^n\ ;$$ furthermore the third statement implies $n\geq1$. Therefore we have to prove the following:</p> <p>When $y\in{\mathbb N}_{\geq2}$ and $x&gt;1$ then $$y^{n-1}&lt; \lceil x\rceil\leq y^n\quad\Leftrightarrow\quad y^{n-1}&lt;x\leq y^n\ .\tag{1}$$ <em>Proof</em> of $\Rightarrow:\ $ Since $n\geq1$ the number $y^{n-1}$ is an integer. It follows that $\lceil x\rceil\geq y^{n-1}+1$ and therefore $x&gt;\lceil x\rceil-1\geq y^{n-1}$. On the other hand $x\leq\lceil x\rceil\leq y^n$.</p> <p>The converse is similar.</p> <p>Summing it all up we can say that the statement is <strong>true</strong> when $y\in{\mathbb N}_{\geq2}$ and $x&gt;{1\over y}$.</p>
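Both the positive result (integer base, <span class="math-container">$x&gt;1/y$</span>) and the two families of counterexamples are easy to confirm numerically:

```python
import math

def f(x, y):  # left side: ceil(log_y(ceil x))
    return math.ceil(math.log(math.ceil(x), y))

def g(x, y):  # right side: ceil(log_y x)
    return math.ceil(math.log(x, y))

# Integer base y = 2 and x > 1/y: the identity holds.
for x in (1.3, 2.5, 5.7, 100.1):
    assert f(x, 2) == g(x, 2)

# x <= 1/y breaks it even for integer y: the left side is 0, the right <= -1.
assert f(0.3, 2) == 0 and g(0.3, 2) == -1

# Non-integer base breaks it on (floor(y^n), y^n]: here n = 1, y = 2.5, x = 2.2.
assert f(2.2, 2.5) == 2 and g(2.2, 2.5) == 1
```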
167,848
<p>I am trying to solve numerically an equation and generate some results. I use the following code </p> <pre><code>u[c_] := (c^(1 - σ) - 1)/(1 - σ) f[s_] := g s (1 - s/sbar1) h[s_] := (2 hbar)/(1 + Exp[η (s/sbar - 1)]) co[a_] := ϕ (a^2)/2 ψ[k_] := wbar (ω + (1 - ω) Exp[-γ k]) </code></pre> <p>The equation I try to solve is the following</p> <pre><code>adap[k_, s_] := (ρ + δ) u'[f[s] - priceadap δ k] co'[δ k] + ψ'[k] h[s] </code></pre> <p>I have the following constant parameter set </p> <pre><code>paramFinal2 = {σ -&gt; 1.7, ρ -&gt; 0.025, g -&gt; 0.05, sbar -&gt; 10, η -&gt; 11, hbar -&gt; 0.5, priceadap -&gt; 0.0006, γ -&gt; 0.6, χ -&gt; 1000, ϕ -&gt; 0.05, ω -&gt; 0.35, β -&gt; 0.8, δ -&gt; 0.065, sbar1 -&gt; 10, wbar -&gt; 1000}; </code></pre> <p>So, for different values of $s$, I try to generate the corresponding values of $k$.</p> <p>For this, I use the following code </p> <pre><code>tmax1 = 10; solK[i_] := Solve[adap[k, i] == 0 /. paramFinal2, k]; Table[solK[i], {i, 1, tmax1}]; </code></pre> <p>Unfortunately, this does not give any result. Mathematica is always on mode "Running...".</p> <p>P.S I am using Mathematica 9.0</p>
m_goldberg
3,066
<p>Neither <code>Solve</code> nor <code>NSolve</code> will handle your equation. As Hugh has shown, you can use <code>FindRoot</code>. You could also use <code>FindInstance</code> as follows.</p> <pre><code>solK[i_] := FindInstance[adap[k, i] == 0 /. paramFinal2, k, Reals] With[{tmax1 = 10}, Flatten @ Table[solK[i], {i, tmax1}]][[All, 2]] </code></pre> <blockquote> <pre><code>{10.7363, 12.1674, 12.8497, 13.185, 13.2841, 13.169, 12.797, 12.0139, 10.3238} </code></pre> </blockquote>
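The root can also be reproduced outside Mathematica. A plain-Python sketch using bisection, with the parameter values transcribed by hand from `paramFinal2`; for <span class="math-container">$s=1$</span> it lands on the first FindInstance value above:

```python
import math

# Parameters from paramFinal2 (chi and beta are unused in adap).
sig, rho, g, sbar, eta = 1.7, 0.025, 0.05, 10.0, 11.0
hb, priceadap, gam = 0.5, 0.0006, 0.6
phi, om, delta, sbar1, wbar = 0.05, 0.35, 0.065, 10.0, 1000.0

def adap(k, s):
    f = g * s * (1 - s / sbar1)                          # f[s]
    h = 2 * hb / (1 + math.exp(eta * (s / sbar - 1)))    # h[s]
    up = (f - priceadap * delta * k) ** (-sig)           # u'[c] = c^(-sigma)
    cop = phi * delta * k                                # co'[a] = phi a
    psip = -wbar * (1 - om) * gam * math.exp(-gam * k)   # psi'[k]
    return (rho + delta) * up * cop + psip * h

def bisect(fun, lo, hi, iters=200):
    flo = fun(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fun(mid) * flo > 0:      # same sign as at lo: move lo up
            lo, flo = mid, fun(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

k1 = bisect(lambda k: adap(k, 1), 1.0, 100.0)
assert abs(k1 - 10.7363) < 0.05     # matches FindInstance for s = 1
```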
767,686
<p>Let $f:A\rightarrow B$ be a function and let $C_1,C_2\subset A$. Prove that</p> <p>$f(C_1\cap C_2)=f(C_1)\cap f(C_2) \leftrightarrow$ $f$ is injective</p> <p>Attempt:</p> <p>$(\leftarrow)$ Let $f(x)\in f(C_1\cap C_2)$. Then there exists $x\in C_1\cap C_2$ because $f$ is injective. So $x\in C_1$ and $x\in C_2$. So $f(x)\in f(C_1)$ and $f(x)\in f(C_2)$. So $f(x)\in f(C_1)\cap f(C_2)$. Thus $f(C_1\cap C_2)=f(C_1)\cap f(C_2)$. I got this far but don't have an idea how to prove the other direction.</p>
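Not a proof, but the equivalence can be confirmed exhaustively on a small set, which is a useful sanity check before attacking the other direction (contrapositive hint: if <span class="math-container">$f(x_1)=f(x_2)$</span> with <span class="math-container">$x_1\ne x_2$</span>, try <span class="math-container">$C_1=\{x_1\}$</span>, <span class="math-container">$C_2=\{x_2\}$</span>, whose intersection is empty):

```python
from itertools import product

A = (0, 1, 2)
subsets = [frozenset(s) for s in
           [[], [0], [1], [2], [0, 1], [0, 2], [1, 2], [0, 1, 2]]]

def image(f, C):
    return frozenset(f[v] for v in C)

# f injective  <=>  f(C1 ∩ C2) = f(C1) ∩ f(C2) for all C1, C2 ⊆ A
for f in product(A, repeat=3):          # f[v] is the value of f at v
    injective = len(set(f)) == len(A)
    preserves = all(image(f, C1 & C2) == image(f, C1) & image(f, C2)
                    for C1 in subsets for C2 in subsets)
    assert injective == preserves
```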
user2566092
87,313
<p>Hint: The tricky part is showing that $h(1) \neq 1$. Argue that if $h(1) = 1$ then you must have $h'(x) = 1$ for all $0 \leq x &lt; 1$ and $h'(x) = -1$ for all $1 &lt; x \leq 2$. But then how can $h$ be differentiable at $x = 1$?</p>
64,395
<p>Let G be a directed graph on N vertices chosen at random, conditional on the requirement that the out-degree of each vertex is 1 and the in-degree of each vertex is either 0 or 2. The "periodic" points of G are those contained in a cycle. What do we know about the statistics of G? For instance, what is the mean number of periodic points, and how do the cycle lengths look?</p> <p>By comparison, a random directed graph with all out-degrees 1 (which is to say, the graph of a random function from vertices to vertices) has on order of sqrt(N) periodic points on average.</p> <p>(Motivation: the graph of a quadratic rational function f acting on P^1(F_q) looks like this, and I'm wondering what the "expected" dynamics are.) </p>
James Cranch
14,901
<p>To calculate expected numbers of cycles of each length, you should be able to just wade in and use Stirling's approximation to deal with the results.</p> <p>Let's write $N=2t$. Then there are $t$ vertices with indegree 0 and $t$ with indegree 2. The number of such graphs is $(2t)!/2^t$: each indegree-2 vertex receives exactly two of the $2t$ out-edges.</p> <p>Every cycle runs through indegree-2 vertices only, and there are $\binom{t}{m}(m-1)! = \frac{t!}{m\,(t-m)!}$ possible directed $m$-cycles. A given $m$-cycle is contained in $$\frac{(2t-m)!}{2^{t-m}}$$ of the graphs: the $2t-m$ vertices off the cycle send their out-edges so that each cycle vertex receives exactly one extra input and each remaining indegree-2 vertex receives two, a multinomial count of $(2t-m)!/(1!^m\,2!^{t-m})$.</p> <p>Then the expected number of $m$-cycles is $$\frac{t!}{m\,(t-m)!}\cdot\frac{(2t-m)!}{2^{t-m}}\cdot\frac{2^t}{(2t)!}\ .$$ So you can get an exact formula, and then deploy Stirling on it.</p>
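For small <span class="math-container">$t$</span> the expectation can be checked by exhaustive enumeration. The sketch below counts each directed <span class="math-container">$m$</span>-cycle once — there are <span class="math-container">$t!/(m\,(t-m)!)$</span> of them, each contained in <span class="math-container">$(2t-m)!/2^{t-m}$</span> graphs; note that a count of ordered tuples would hit each cycle <span class="math-container">$m$</span> times and needs an extra <span class="math-container">$m!$</span> for the choice of which outside vertex feeds which cycle vertex, effects that only cancel for <span class="math-container">$m\le 2$</span>:

```python
from collections import Counter
from itertools import permutations
from math import factorial

t = 3
N = 2 * t   # WLOG vertices 0..t-1 have in-degree 2, vertices t..2t-1 in-degree 0

def cycle_lengths(f):
    lens, on_cycle = [], set()
    for v in range(N):
        w = v
        for _ in range(N):          # N steps from anywhere lands on a cycle
            w = f[w]
        if w in on_cycle:
            continue
        on_cycle.add(w)
        length, u = 1, f[w]
        while u != w:               # walk the newly found cycle once
            on_cycle.add(u)
            u = f[u]
            length += 1
        lens.append(length)
    return lens

graphs = set(permutations([0, 0, 1, 1, 2, 2]))    # out-edge target lists
assert len(graphs) == factorial(N) // 2**t        # (2t)!/2^t graphs

tally = Counter()
for f in graphs:
    tally.update(cycle_lengths(f))

for m in range(1, t + 1):
    n_cycles = factorial(t) // (factorial(t - m) * m)   # directed m-cycles
    per_cycle = factorial(N - m) // 2**(t - m)          # graphs containing one
    assert tally[m] == n_cycles * per_cycle             # so E = tally/|graphs|
```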
248,182
<p>Some textbooks I've seen declare inequalities such as $-2&gt;x&gt;2$ to have no solution, or to be ill-defined, which I disagree with. I'm curious to know if anyone else thinks the same.</p> <p>Inequalities can always be written two ways. For example, $x&gt;2$ is the same as $2&lt;x$. So far as I understand, the same applies to compound inequalities; for example, everyone would regard $-3&lt;x&lt;3$ to be well-defined, and it can be written "backwards" as $3&gt;x&gt;-3$.</p> <p>When someone interprets $-3&lt;x&lt;3$, upon reflection, it is understood that there is an implicit intersection behind the scenes, as it can be read out-loud as "$-3&lt;x$ and $x&lt;3$." And when they interpret $3&gt;x&gt;-3$, it is the "backwards" version of $-3&lt;x&lt;3$. Both are two different, compact ways of expressing {$ x&lt;3 $} $\cap$ {$ x&gt;-3 $}.</p> <p>So when I look at an inequality such as $-2&gt;x&gt;2$, I take it to mean there is an implicit union behind the scenes. In other words, $-2&gt;x&gt;2$ and $2&lt;x&lt;-2$ both refer to the same thing, namely {$ x&lt;-2 $} $\cup$ {$ x&gt;2 $}. Were I to read $-2&gt;x&gt;2$ out-loud, I would read it as "$-2&gt;x$ or $x&gt;2$."</p> <p>Am I crazy, or is there something wrong with this interpretation?</p> <p>It seems to offer some advantages. For example, it makes the solution of certain absolute value inequalities very easy and natural.</p>
Alex R.
22,064
<p>You can write inequalities any way you want if you think of the elements satisfying the inequality as members of a set combined with a truth table. There isn't any ambiguity in writing $-2&gt;x&gt;2$ or $-2&gt;x&lt;5$ as long as you read it left to right, or right to left, in a PAIRWISE fashion: $-2&gt;x$ and $x&gt;2$, or $-2&gt;x$ and $x&lt;5$, respectively. If you then write $\{x&lt;-2\}\cap \{x&gt;2\}$ you will realize that this intersection is the empty set, that there are no $x$ which satisfy the inequality. On the other hand, if you write $-2&gt;2$, this can be interpreted as a true or false statement, in this case being false. </p> <p>If you have something complicated like:</p> <p>$-2&gt;-3&lt;5&gt;2$, again it will be unambiguous if you read it left to right or right to left, in a pairwise fashion. In other words, $-2&gt;-3$, $-3&lt;5$, $5&gt;2$. It WILL be ambiguous otherwise, because, for example, are you also saying $-2&gt;2$? </p>
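Incidentally, Python's chained comparisons implement exactly this pairwise-with-"and" convention, which illustrates why the intersection reading is the standard one:

```python
# -2 > x > 2 is read pairwise: (-2 > x) and (x > 2), satisfied by no real x.
assert not any(-2 > x > 2 for x in (-10, -2.5, 0, 2.5, 10))

# The union reading {x < -2} ∪ {x > 2} has to be spelled out with `or`:
assert all(x < -2 or x > 2 for x in (-10, -2.5, 2.5, 10))
assert not any(x < -2 or x > 2 for x in (-2, 0, 2))
```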
467,268
<p>Any body knows the meaning of this expectation ($E[g(x)]$) form?</p> <p>$E[g(x)]=Pr(g(x) &gt;\varepsilon)E[g(x)|g(x) &gt; g(\varepsilon)]+Pr(g(x) \leq\varepsilon)E[g(x)|g(x)\leq g(\varepsilon)]$</p>
Community
-1
<p><strong>Hint</strong>: Find the point from where the given line passes and use the slope point form to find the required equation of straight line.</p> <p><strong>Solution:-</strong></p> <p>We have,</p> <blockquote> <ul> <li>Gradient of line <span class="math-container">$(m) = -2$</span></li> <li>The line passes through the mid-point of the line joining <span class="math-container">$(5,-2), (-3,4)$</span></li> </ul> </blockquote> <p>The mid-point is given by,</p> <p><span class="math-container">$\longrightarrow (x_1, y_1) = \left(\dfrac{5-3}{2}, \dfrac{-2+4}{2}\right)$</span></p> <p><span class="math-container">$\longrightarrow (x_1, y_1) = (1,1) $</span></p> <p>Now, using point slope form of straight line:-</p> <p><span class="math-container">$\longrightarrow y-y_1 = m(x-x_1)$</span></p> <p><span class="math-container">$\longrightarrow y-1 = (-2)(x - 1)$</span></p> <p><span class="math-container">$\longrightarrow y-1 = -2x +2$</span></p> <p><span class="math-container">$ \longrightarrow y+2x =3$</span></p> <p>This is the required equation of straight line.</p>
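A quick arithmetic check of the result, using exact fractions:

```python
from fractions import Fraction

p1, p2 = (5, -2), (-3, 4)
mid = (Fraction(p1[0] + p2[0], 2), Fraction(p1[1] + p2[1], 2))
assert mid == (1, 1)                  # midpoint of the given segment

# The derived line y + 2x = 3 passes through the midpoint with gradient -2:
x0, y0 = mid
assert y0 + 2 * x0 == 3
x1 = x0 + 1                           # step one unit in x along the line
y1 = 3 - 2 * x1
assert (y1 - y0) / (x1 - x0) == -2
```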
3,399,586
<p>Suppose that <span class="math-container">$f$$:$$\mathbb{R}\to\mathbb{R}$</span> is differentiable at every point and that </p> <p><span class="math-container">$$f’(x) = x^2$$</span></p> <p>for all <span class="math-container">$x$</span>. Prove that </p> <p><span class="math-container">$$f(x) = \frac{x^3}{3} + C$$</span></p> <p>where <span class="math-container">$C$</span> is a constant. </p> <p>This has to be done without integrating, I have only been taught differential calculus and this question only assumes knowledge of that.</p> <p>I tried applying mean value theorem and taylor’s approximation but could not come up with the proof. Can someone please provide the solution?</p>
amsmath
487,169
<p>Let <span class="math-container">$g:\mathbb R\to\mathbb R$</span> be differentiable such that <span class="math-container">$g'(x) = 0$</span> for all <span class="math-container">$x\in\mathbb R$</span>. Then the mean value theorem says that for all <span class="math-container">$x,y\in\mathbb R$</span> there exists some <span class="math-container">$\xi$</span> between them such that <span class="math-container">$g(x)-g(y) = g'(\xi)(x-y) = 0$</span>. Hence, <span class="math-container">$g(x)=g(y)$</span> for all <span class="math-container">$x,y\in\mathbb R$</span> and it follows that <span class="math-container">$g$</span> is constant.</p> <p>Now, consider the function <span class="math-container">$\phi(x) = \frac 13 x^3$</span>. Its derivative is <span class="math-container">$\phi'(x) = x^2$</span>. But also your function <span class="math-container">$f : \mathbb R\to\mathbb R$</span> has that derivative. Consider <span class="math-container">$g := f-\phi$</span>. We have <span class="math-container">$g'(x) = f'(x)-\phi'(x) = x^2-x^2 = 0$</span> for all <span class="math-container">$x$</span>, so <span class="math-container">$g$</span> is a constant, <span class="math-container">$g(x) = c$</span>, and thus <span class="math-container">$f(x) = \phi(x) + g(x) = \frac 13x^3 + c$</span>.</p>
1,530,057
<p>In my multivariable calculus class we gave this definition for the limit of a function:</p> <blockquote> <p><em>Definition:</em></p> <p>Let <span class="math-container">$ \mathbb{R}^n \supset A $</span> be an open set, let <span class="math-container">$f:A \to\mathbb{R}^m $</span> be a function, let <span class="math-container">${\bf x_0}$</span> be a point of <span class="math-container">$A$</span> and <span class="math-container">${\bf P}$</span> a point of <span class="math-container">$\mathbb{R}^m$</span>.</p> <p>To say that <span class="math-container">$f$</span> has limit <span class="math-container">$\bf{P}$</span> at <span class="math-container">$ {\bf x_0} \in A$</span>,</p> <p>    is defined to mean</p> <p><span class="math-container">$\forall \, \varepsilon&gt;0$</span>, <span class="math-container">$\exists \, \delta(\varepsilon)=\delta &gt;0 : ( \forall \, {\bf x} \in A: \left\lVert {\bf x} - {\bf x_0} \right\rVert_{\mathbb{R}^n}&lt; \delta \Rightarrow \left\lVert f({\bf x}) - {\bf P} \right\rVert_{\mathbb{R}^m}&lt; \varepsilon )$</span></p> </blockquote> <p>So I have a question: why does the set <span class="math-container">$A$</span> have to be open? It seems that the issue is the way in which <span class="math-container">$ {\bf x}$</span> is allowed to approach <span class="math-container">${\bf x_0}$</span>.</p> <p>What happens when the domain of <span class="math-container">$f$</span> is a closed set or neither open nor closed?</p> <p>Is there another more 'general' definition?</p>
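For what it's worth, the usual way to drop openness (this is the standard textbook generalization, not taken from the class notes quoted above) is sketched here:

```latex
Let $A\subseteq\mathbb{R}^n$ be arbitrary and let $\mathbf{x_0}$ be a limit
point of $A$, so that points $\mathbf{x}\in A$ with
$0<\lVert\mathbf{x}-\mathbf{x_0}\rVert<\delta$ exist for every $\delta>0$.
Then $\lim_{\mathbf{x}\to\mathbf{x_0}} f(\mathbf{x})=\mathbf{P}$ means
$$\forall\,\varepsilon>0\ \exists\,\delta>0:\ \forall\,\mathbf{x}\in A,\quad
0<\lVert\mathbf{x}-\mathbf{x_0}\rVert_{\mathbb{R}^n}<\delta\ \Rightarrow\
\lVert f(\mathbf{x})-\mathbf{P}\rVert_{\mathbb{R}^m}<\varepsilon.$$
Openness of $A$ merely guarantees that every $\mathbf{x_0}\in A$ is a limit
point and that a whole ball around $\mathbf{x_0}$ lies inside $A$.
```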
ckoe
290,263
<p>Suppose a random variable $W$ is uniform on $[0,z]$. Then its mean would be $\frac12z$. Now, as you just stated, $Y|X$ is uniform on $[0,x]$. So then the mean is $\frac12x$.</p>
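A Monte Carlo sanity check of the conditional mean (the conditioning value <span class="math-container">$x$</span> is arbitrary):

```python
import random

random.seed(0)                     # deterministic run
x = 1.8                            # an arbitrary conditioning value
n = 200_000
mean = sum(random.uniform(0, x) for _ in range(n)) / n
assert abs(mean - x / 2) < 0.01    # E[Y | X = x] = x/2
```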
3,429,623
<p>Is the union of <span class="math-container">$\emptyset$</span> with another set, <span class="math-container">$A$</span> say, disjoint? Even though <span class="math-container">$\emptyset \subseteq A$</span>?</p> <p>I would say, yes - vacuously. But some confirmation would be great.</p>
GhostAmarth
721,316
<p>Two sets <span class="math-container">$A, B$</span> are disjoint iff <span class="math-container">$A \cap B = \emptyset$</span>. You have <span class="math-container">$A \cap \emptyset = \emptyset$</span>. Therefore <span class="math-container">$A$</span> and <span class="math-container">$\emptyset$</span> are disjoint.</p>
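The same statement spelled out with Python's set operations — nothing deep, just the definition, and it shows that disjointness and <span class="math-container">$\emptyset \subseteq A$</span> coexist without tension:

```python
A = {1, 2, 3}
empty = set()

assert A & empty == empty      # A ∩ ∅ = ∅, so the two sets are disjoint
assert A.isdisjoint(empty)
assert empty <= A              # and at the same time ∅ ⊆ A holds vacuously
```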
2,064,380
<p><a href="https://i.stack.imgur.com/lJcu2.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lJcu2.gif" alt="enter image description here"></a></p> <p>In the above question, could anyone please explain to me what they have done?</p>
Dominik
259,493
<p>Consider formulas (1) and (2) in the <a href="https://en.wikipedia.org/wiki/Woodbury_matrix_identity" rel="nofollow noreferrer">Woodbury matrix identity</a>. Applied to the block matrix $M = \begin{pmatrix} 1 &amp; u^t V^t \\ V u &amp; VV^t\end{pmatrix}$ they give us two possible ways to calculate the $(1, 1)$-entry of $M^{-1}$:</p> <p>$$(M^{-1})_{1, 1} = (1 - u^t V^t(VV^t)^{-1} Vu)^{-1}$$</p> <p>$$(M^{-1})_{1, 1} = 1 + u^tV^t[VV^t - Vuu^tV^t]^{-1}Vu = 1 + u^tV^t[V(I - uu^t)V^t]^{-1}Vu$$</p> <p>Now this yields your equation:</p> <p>$$\begin{align*} \frac{u^t V^t(VV^t)^{-1} Vu}{1 - u^t V^t(VV^t)^{-1} Vu} &amp;= [1 - u^t V^t(VV^t)^{-1} Vu]^{-1} - 1 = (M^{-1})_{1, 1} - 1 \\ &amp; = u^tV^t[V(I - uu^t)V^t]^{-1}Vu \end{align*}$$</p> <p>(Assuming that all occurring terms are well-defined).</p>
2,064,380
<p><a href="https://i.stack.imgur.com/lJcu2.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lJcu2.gif" alt="enter image description here"></a></p> <p>In the above question, could anyone please explain to me what they have done?</p>
Arin Chaudhuri
404
<p>This is a simple consequence of the Sherman-Morrison-Woodbury formula.</p> <p>By a direct application of the Sherman-Morrison-Woodbury formula we have $$ \begin{align} {\begin{pmatrix} V(1 - uu^T) V^T \end{pmatrix}}^{-1} &amp;= (VV^T-(Vu)(Vu)^T)^{-1}\\ &amp;= (VV^T)^{-1} + \dfrac{(VV^T)^{-1}(Vu)(Vu)^T(VV^T)^{-1}}{1 - u^TV^T(VV^T)^{-1}Vu} \end{align} $$</p> <p>Multiplying the LHS and RHS above first on the left by $(Vu)^T$ and then on the right by $Vu$ we get $u^TV^T{\begin{pmatrix} V(1 - uu^T) V^T \end{pmatrix}}^{-1}Vu = c + \dfrac{c^2}{1-c}$ where $c = u^TV^T(VV^T)^{-1}Vu.$ </p> <p>This simplifies to $\dfrac{c}{1-c}$ and the answer follows.</p>
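The identity is easy to sanity-check numerically. The sketch below takes the special case where $V$ is a single row vector $v$, so every matrix involved reduces to a scalar and no linear-algebra library is needed; the particular vectors are arbitrary choices of mine:

```python
# Special case V = v (a 1×n row): VV^t = v·v, Vu = v·u, and
# V(I - uu^t)V^t = v·v - (v·u)^2, so both sides of the identity are scalars.
v = [1.0, 2.0, 3.0]
u = [0.5, 0.1, 0.2]

dot = lambda a, b: sum(x * y for x, y in zip(a, b))

c = dot(v, u) ** 2 / dot(v, v)                       # c = u^t V^t (VV^t)^{-1} V u
lhs = c / (1 - c)
rhs = dot(v, u) ** 2 / (dot(v, v) - dot(v, u) ** 2)  # u^t V^t [V(I-uu^t)V^t]^{-1} V u

assert abs(lhs - rhs) < 1e-12
```

The general matrix case follows the same pattern, with the scalar divisions replaced by matrix inverses.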
139,417
<p>I have a polygon defined by a list of nodes (x,y). I want to cut the polygon by a horizontal line at position y = a and get the new polygon above the position y = a. I am using the RegionIntersection function, but it seems very slow when I combine it with the Manipulate function. Is there any way to improve my code to get better speed?</p> <pre><code>R2 = Polygon[{{0, 0}, {300, 0}, {300, 500}, {0, 750}}] ; Manipulate[ R1 = ImplicitRegion[{0 &lt;= x &lt;= 300, a &lt;= y &lt;= 700}, {x, y} ]; R3 = RegionIntersection[R1, R2]; RegionPlot[R3], {a, 1, 499}] </code></pre>
matrix42
5,379
<pre><code>R1 = ImplicitRegion[{0&lt;=x&lt;=300,a&lt;=y&lt;=700},{x,y}]; R2 = Polygon[{{0,0},{300,0},{300,500},{0,750}}]; ineq = RegionMember[RegionIntersection[R1,R2],{x,y}]//Simplify//Rest With[{ineq = ineq}, Manipulate[RegionPlot[ineq,{x,0,300},{y,0,700},PerformanceGoal-&gt;"Quality"],{a,1,499}]] </code></pre>
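The geometry itself is cheap: clipping a polygon against the half-plane $y \ge a$ is a single Sutherland–Hodgman pass. As a language-neutral illustration (not a Mathematica speed fix), here is a minimal Python version of the same cut, using the polygon from the question:

```python
def clip_above(poly, a):
    """Sutherland-Hodgman clip of polygon `poly` against the half-plane y >= a."""
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if y1 >= a:
            out.append((x1, y1))
        if (y1 >= a) != (y2 >= a):            # edge crosses the cut line
            t = (a - y1) / (y2 - y1)
            out.append((x1 + t * (x2 - x1), a))
    return out

def area(poly):
    """Shoelace formula."""
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1])))

R2 = [(0, 0), (300, 0), (300, 500), (0, 750)]
assert area(R2) == 187500.0                    # full polygon
assert area(clip_above(R2, 500)) == 37500.0    # triangle (0,500)-(300,500)-(0,750)
```

Precomputing the clipped polygon directly like this avoids re-solving a symbolic region intersection on every slider move.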
482,102
<p>The problem comes from Alan Pollack's Differential Topology, pg. 5. Suppose that X is a k-dimensional manifold. Show that every point in X has a neighborhood diffeomorphic to all of $\Bbb{R}^k$.</p> <p>I have already shown that $\Bbb{R}^k$ is diffeomorphic to $B_a$ (part (a) of the question) the open ball of radius $a$, though have little to no understanding of how to proceed.</p> <p>Thank you</p> <p>The author defines a k-manifold as a set such that each point possesses a neighborhood diffeomorphic to an open set of $\Bbb{R}^k$</p>
rfauffar
12,158
<p>You already did all the work. If $x\in X$, then there is a neighborhood $U$ of $x$ that is diffeomorphic to an open set of $\mathbb{R}^k$. Take $\phi: U\to V$ to be the chart. Since $V$ is open, there is an open ball $B_a$ centered at $\phi(x)$ contained in $V$. Take $W=\phi^{-1}(B_a)$. Then $W$ is diffeomorphic to $B_a$, and as you already proved, this is diffeomorphic to $\mathbb{R}^k$.</p>
843,634
<p>I am wondering whether for any two lines $\mathfrak{L}, \mathfrak{L'}$ and any point $\mathfrak{P}$ in $\mathbf{P}^3$ there is a line having nonempty intersection with all of $\mathfrak{L}, \mathfrak{L'}$, $\mathfrak{P}$. I don't really know how to approach this, because I was never taught thinking about such a problem, not even related ones. I think the answer should be no, but have no means to justify it. Perhaps use the Klein representation of lines in 3 space? Could someone also recommend a book where similar problems are solved or at least posed as exercises? Also feel free to give an algebraic geometry perspective, but I don't really know how to approach this with algebraic geometry.</p>
mercio
17,445
<p>You have to make a distinction between (more concrete) functions from $\Bbb Z / p \Bbb Z$ to $\Bbb Z/p \Bbb Z$ and (more abstract) polynomials with coefficients in $\Bbb Z/ p \Bbb Z$. </p> <p>For each polynomial, there is an associated polynomial function. And as you have just discovered, it is NOT true that if the polynomial function is zero, then the polynomial is also zero.</p> <p>On any finite ring $R$, you can make the polynomial $\prod_{x \in R} (X - x)$, whose polynomial function is obviously zero. But this polynomial is not zero!</p>
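A concrete check for $R=\Bbb Z/5\Bbb Z$: by Fermat's little theorem, the polynomial $X^5-X=\prod_{x\in R}(X-x)$ induces the zero function, yet its coefficient list is nonzero. A quick Python sketch (representation choices are mine):

```python
p = 5

# The polynomial X^p - X, as a coefficient list indexed by degree.
poly = [0, -1] + [0] * (p - 2) + [1]    # -X + X^p

def evaluate(coeffs, x, p):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

# The associated polynomial function is identically zero on Z/pZ...
assert all(evaluate(poly, a, p) == 0 for a in range(p))
# ...but the polynomial itself is not the zero polynomial.
assert any(c % p != 0 for c in poly)
```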
2,073,230
<p>I thought I might use induction, but that seems too hard; then I tried to take the derivative and show that it's positive for all $n$, but I can't figure out how to do that either. I've tried induction there too.</p>
florence
343,842
<p>Let $\varepsilon = 0.06$. Now, let $$f(x) = \left(1+\frac{\varepsilon}{x}\right)^x$$ so that your sequence is given by $f(n)$. Then $$\log(f) = x\log\left(1+\frac{\varepsilon}{x}\right)$$ Differentiating both sides, $$\frac{f'}{f} = \log\left(1+\frac{\varepsilon}{x}\right)-x\frac{\varepsilon/x^2}{1+\varepsilon/x} = \log\left(1+\frac{\varepsilon}{x}\right)-\frac{\varepsilon/x}{1+\varepsilon/x}$$ For $t&gt;1$ we have $\log(t) &gt; 1-\frac{1}{t}$; applying this with $t = 1+\varepsilon/x$, the above is $$&gt;\left(1-\frac{1}{1+\varepsilon/x}\right)-\frac{\varepsilon/x}{1+\varepsilon/x} = \frac{\varepsilon/x}{1+\varepsilon/x}-\frac{\varepsilon/x}{1+\varepsilon/x}=0$$ Since $f&gt;0$, this establishes that $f' &gt; 0$, so $f$ is increasing for all $x&gt;0$; in particular the sequence $f(n)$ is increasing. </p>
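For a numerical sanity check of the monotonicity (same $\varepsilon=0.06$; the range checked is my arbitrary choice):

```python
import math

eps = 0.06
f = lambda n: (1 + eps / n) ** n

values = [f(n) for n in range(1, 1001)]

# The sequence increases at every step checked...
assert all(a < b for a, b in zip(values, values[1:]))
# ...approaching e^eps from below, since (1 + eps/n)^n < e^eps.
assert values[-1] < math.exp(eps)
```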
3,088,620
<p>Let <span class="math-container">$M$</span> be a second countable smooth manifold. When I learned about differential geometry, a side note was made about how if <span class="math-container">$E$</span> is a vector bundle, <span class="math-container">$\Gamma(E)$</span> is a <span class="math-container">$C^\infty(M)$</span>-Module that is not free, but projective. I now realized that I have no idea how to prove that!</p> <p>My first attempt was to look for torsion elements, but <span class="math-container">$F_R(X)$</span> (the free <span class="math-container">$\operatorname{\underline{R-Mod}}$</span> over the set <span class="math-container">$X$</span>) having no torsion elements is only satisfied if <span class="math-container">$R$</span> is a domain – which is clearly not the case for <span class="math-container">$C^\infty(M)$</span> (take bump functions with different support).</p> <p>So:</p> <blockquote> <ol> <li>How do you show that <span class="math-container">$\Gamma(E)$</span> is not free as a <span class="math-container">$C^\infty(M)$</span>-Module?</li> <li>What are good tactics to show that a (non-finitely generated) module is not free if the ring <span class="math-container">$R$</span> is not even a domain?</li> </ol> </blockquote>
Eric Wofsey
86,856
<p>First of all, it's not true that <span class="math-container">$\Gamma(E)$</span> is not free. What is true is that <span class="math-container">$\Gamma(E)$</span> <em>might</em> not be free, depending on what <span class="math-container">$E$</span> is. Specifically, <span class="math-container">$\Gamma(E)$</span> is free iff the vector bundle <span class="math-container">$E$</span> is trivial.</p> <p>This more specific statement gives us much more of a clue of how to prove it: we can expect that a basis for <span class="math-container">$\Gamma(E)$</span> corresponds to a trivialization of <span class="math-container">$E$</span>. Indeed, if <span class="math-container">$E\cong M\times\mathbb{R}^n$</span> is trivial of rank <span class="math-container">$n$</span>, then <span class="math-container">$\Gamma(E)\cong C^\infty(M)^n$</span>: a section of <span class="math-container">$E$</span> can be identified with a smooth map <span class="math-container">$M\to\mathbb{R}^n$</span>, or <span class="math-container">$n$</span> smooth maps <span class="math-container">$M\to\mathbb{R}$</span>.</p> <p>Conversely, suppose <span class="math-container">$B$</span> is a basis of <span class="math-container">$\Gamma(E)$</span> over <span class="math-container">$C^\infty(M)$</span>, and let <span class="math-container">$m$</span> be the rank of <span class="math-container">$E$</span>. Note that if we evaluate the elements of <span class="math-container">$B$</span> at any point <span class="math-container">$p$</span>, they must span <span class="math-container">$E_p$</span> (otherwise <span class="math-container">$B$</span> could not generate all of <span class="math-container">$\Gamma(E)$</span>), so <span class="math-container">$|B|\geq m$</span>. I claim that <span class="math-container">$|B|=m$</span>. To prove this, fix any point <span class="math-container">$p\in M$</span>. 
We can find <span class="math-container">$s_1,\dots s_m\in B$</span> such that <span class="math-container">$s_1(p),\dots,s_m(p)$</span> is a basis for <span class="math-container">$E_p$</span>. Then <span class="math-container">$s_1,\dots,s_m$</span> are linearly independent at every point in some neighborhood <span class="math-container">$U$</span> of <span class="math-container">$p$</span>, and so give a local trivialization of <span class="math-container">$E$</span> on <span class="math-container">$U$</span>. In particular, if <span class="math-container">$s\in B\setminus\{s_1,\dots s_m\}$</span> we can write <span class="math-container">$s=\sum_{i=1}^m c_is_i$</span> on <span class="math-container">$U$</span>, for some smooth functions <span class="math-container">$c_i$</span> on <span class="math-container">$U$</span>. Letting <span class="math-container">$\varphi\in C^\infty(M)$</span> be a bump function supported on <span class="math-container">$U$</span>, we then have <span class="math-container">$$\varphi s=\sum_{i=1}^mc_i\varphi s_i$$</span> on all of <span class="math-container">$M$</span>, where <span class="math-container">$c_i\varphi\in C^\infty(M)$</span>. This is a nontrivial relation between elements of <span class="math-container">$B$</span>, contradicting the assumption that they were a basis.</p> <p>Thus no such <span class="math-container">$s$</span> can exist, and <span class="math-container">$B=\{s_1,\dots,s_m\}$</span> has <span class="math-container">$m$</span> elements. It now follows that <span class="math-container">$s_1(q),\dots,s_m(q)$</span> are a basis for <span class="math-container">$E_q$</span> for all <span class="math-container">$q\in M$</span>, so <span class="math-container">$s_1,\dots,s_m$</span> give a trivialization of <span class="math-container">$E$</span>.</p>
4,241
<p>I was preparing for an area exam in analysis and came across a problem in the book <em>Real Analysis</em> by Haaser &amp; Sullivan. From p.34 Q 2.4.3, If the field <em>F</em> is isomorphic to the subset <em>S'</em> of <em>F'</em>, show that <em>S'</em> is a subfield of <em>F'</em>. I would appreciate any hints on how to solve this problem as I'm stuck, but that's not my actual question.</p> <p>I understand that for finite fields this implies that two sets of the same cardinality must have the same field structure, if any exists. The classification of finite fields answers the above question in a constructive manner.</p> <p>What got me curious is the infinite case. Even in the finite case it's surprising to me that the field axioms are so "restrictive", in a sense, that alternate field structures are simply not possible on sets of equal cardinality. I then started looking for examples of fields with characteristic zero while thinking about this problem. I didn't find many. So far, I listed the rationals, algebraic numbers, real numbers, complex numbers and the p-adic fields. What are other examples? Is there an analogous classification for fields of characteristic zero?</p>
Yuval Filmus
1,277
<p>If $F$ is any field, the rational functions over it form a field $F(t)$ with the same characteristic (and cardinality, if $F$ is infinite). This field consists of all rational functions $P(t)/Q(t)$ (considered as equivalence classes, i.e. if $P_1(t) Q_2(t) = P_2(t) Q_1(t)$ then $P_1(t)/Q_1(t)$ and $P_2(t)/Q_2(t)$ are identified).</p> <p>You can also replace polynomials with formal power series to get a different field. And you can iterate the construction or just consider rational functions in several variables.</p>
3,858,962
<p>Given a rectangle <span class="math-container">$ABCD$</span>, how do I construct a triangle such that <span class="math-container">$\triangle X, \triangle Y$</span> and <span class="math-container">$\triangle Z$</span> have equal areas? I don't know where to start. I tried some algebra with the areas of the triangles and used the Pythagorean theorem to find the sides of the triangle. I just need a hint. Here is the picture:<a href="https://i.stack.imgur.com/B7iSk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B7iSk.png" alt="enter image description here" /></a></p>
egreg
62,967
<p>No need to use Pythagoras’ theorem. If you call <span class="math-container">$a$</span> and <span class="math-container">$b$</span> the segment parts on the left vertical side (top to bottom), <span class="math-container">$c$</span> and <span class="math-container">$d$</span> the segment parts on the bottom horizontal side (left to right), the conditions on areas are <span class="math-container">\begin{cases} a(c+d)=bc \\[6px] (a+b)d=bc \end{cases}</span> Subtracting the equations yields <span class="math-container">$ac-bd=0$</span>, so <span class="math-container">$d=ac/b$</span>. Substituting in either equation we get <span class="math-container">$$ c(b^2-ab-a^2)=0 $$</span> Since the unknowns are positive, we obtain <span class="math-container">$$ b=\dfrac{1+\sqrt{5}}{2}a=\varphi a $$</span> Therefore also <span class="math-container">$$ c=\varphi d $$</span> You can't say more, because two equations can determine only two unknowns.</p>
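A quick numerical check of this conclusion (the values of $a$ and $d$ are my arbitrary free choices; $b$ and $c$ are then forced by the result):

```python
import math

phi = (1 + math.sqrt(5)) / 2
a, d = 1.0, 2.3           # free choices
b, c = phi * a, phi * d   # forced by the equal-area conditions

# Both area conditions hold, using phi^2 = phi + 1:
assert math.isclose(a * (c + d), b * c)
assert math.isclose((a + b) * d, b * c)
```

Any positive pair `(a, d)` works, which matches the remark that two equations can only determine two of the four unknowns.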
1,569,411
<p>Express $log_3(a^2 + \sqrt{b})$ in terms of m and k where $m = log_{3}a$</p> <p>$k = log_{3}b$</p> <p>Given this information I made $a = 3^m$</p> <p>$b = 3^k$</p> <p>Therefore = $log_{3} ((3^m)^2 + (3^k)^{\frac{1}{2}})$</p> <p>= $log_{3} (3^{2m} + 3^{\frac{k}{2}})$</p> <p>I don't know if I'm done or whether there are still more things I can simplify. Can anyone help please? Thanks</p>
Ian
83,396
<p>Every Markov chain on a finite state space has an invariant distribution. As you said, this follows directly from the condition that the rows sum to <span class="math-container">$1$</span>.</p> <p>It is possible for a Markov chain on a finite state space to have multiple invariant distributions. However, the Perron-Frobenius theorem tells us that these can be decomposed into distributions which are concentrated on strongly connected components of the state space. Decomposing the process into strongly connected components results in one or more different Markov chains each of which has a unique invariant distribution.</p> <p>Any transient states of the original chain will not be states in any of these sub-chains, since they are not in any strongly connected component. Confusingly, this means that a chain that is not irreducible is generically not reducible into chains on disjoint subsets of the state space. That's because generically some transient states would have to be in more than one sub-chain, which is because generically one can reach more than one strongly connected component starting from a transient state.</p> <p>As an important special case, an irreducible finite state Markov chain has a unique invariant distribution which assigns positive probability to all states.</p> <p>However many invariant distributions the process has, it can happen that no invariant distribution is approached over time. When the state space is finite, it turns out that this only happens when the chain is &quot;periodic&quot; (meaning that there are states <span class="math-container">$i,j$</span> and an integer <span class="math-container">$n&gt;1$</span> such that all paths from <span class="math-container">$i$</span> to <span class="math-container">$j$</span> have a length which is a multiple of <span class="math-container">$n$</span>). 
In this case the transition matrix has an eigenvalue which is not <span class="math-container">$1$</span> and has modulus <span class="math-container">$1$</span>. If the corresponding eigenvector contributes to the initial condition, then its contribution does not decay, and no invariant distribution is approached. The classic example of a periodic chain is <span class="math-container">$P=\begin{bmatrix} 0 &amp; 1 \\ 1 &amp; 0 \end{bmatrix}$</span>, but in general it is not essential that the chain is deterministic for it to be periodic.</p> <p>It is possible for a Markov chain on an infinite state space to not have any invariant distributions. This is roughly because probability mass can escape to infinity. A simple example is a Markov chain on <span class="math-container">$\mathbb{Z}$</span> which deterministically moves one unit to the right at every step. Another example is the simple symmetric random walk: it has an invariant <em>measure</em> which is uniform on <span class="math-container">$\mathbb{Z}$</span>, but this measure cannot be normalized.</p>
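The classic periodic example above can be checked in a few lines (hand-rolled 2×2 arithmetic, so no libraries are needed):

```python
P = [[0, 1],
     [1, 0]]

def step(dist, P):
    """One step of the chain: row vector `dist` times matrix `P`."""
    return [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

# The uniform distribution is invariant...
pi = [0.5, 0.5]
assert step(pi, P) == pi

# ...but starting from a point mass, the chain oscillates forever and
# never approaches it.
x = [1.0, 0.0]
assert step(x, P) == [0.0, 1.0]
assert step(step(x, P), P) == [1.0, 0.0]

# The culprit: P has eigenvalue -1 (trace 0 and determinant -1 give eigenvalues ±1).
trace = P[0][0] + P[1][1]
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
assert trace == 0 and det == -1
```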
705,744
<p>Hello everyone. I have a couple questions this time, but I think if I understand how to do this one, I'll understand the others.</p> <p>A particular online banking system uses the following rules for its passwords:<br/> a. Passwords must be 6-8 characters in length<br/> b. Passwords must use only alphabetical and numeric characters, and must have at least one alpha and one numeric character.<br/> c. Letters are case sensitive.</p><p>Under these rules, how many different passwords are possible?</p>
Daslayer
914,615
<p>The permutation formula for ordered selection with repetition is n^r, where n is the number of things to choose from and r is how many we are choosing to form another set. The total number of possible permutations is 62^8. However, the rules state that one numeric and one alpha character must be used. The largest legal set considering all rules is 52^7 + 10^1, which is where the 7 characters chosen are alpha and 1 character is numeric. The general formula to consider for this problem takes into consideration the sets to be used and the positions and order used: n1^r1 + n2^r2,<br /> where n1 is the number of the first set of characters used and r1 is the number of positions those characters can be used in. The second term, and any subsequent terms used, are for how many items are in that set (n2) and how many positions it can occupy (r2).</p>
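One way to validate counting arguments like this is brute force on a scaled-down alphabet. The sketch below is my own cross-check, not necessarily the counting intended above: it uses 4 case-sensitive letters and 2 digits with lengths 2–3, and compares exhaustive enumeration against the complement count $\sum_r (n^r - L^r - D^r)$ (which for the real problem would read $\sum_{r=6}^{8}(62^r-52^r-10^r)$):

```python
from itertools import product

letters, digits = "abAB", "01"
alphabet = letters + digits
lengths = range(2, 4)      # scaled-down stand-in for lengths 6-8

# Formula: all strings, minus all-letter strings, minus all-digit strings.
formula = sum(len(alphabet) ** r - len(letters) ** r - len(digits) ** r
              for r in lengths)

# Brute force: enumerate every string and apply the rules directly.
brute = sum(1
            for r in lengths
            for s in product(alphabet, repeat=r)
            if any(ch in letters for ch in s) and any(ch in digits for ch in s))

assert formula == brute
```

The two counts agree, which is evidence that the complement formula correctly encodes "at least one alpha and at least one numeric character".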
113,446
<p>Suppose a simple equation in Cartesian coordinates: $$ (x^2+ y^2)^{3/2} = x y $$ In polar coordinates the equation becomes $r = \cos(\theta) \sin(\theta)$. When I plot both, the one in polar coordinates has two extra lobes (I plot the polar figure with $\theta \in [0.05 \pi, 1.25 \pi]$ so the "flow" of the curve is clearer).</p> <pre><code>figurePolar = PolarPlot[Sin[θ] Cos[θ], {θ, 0.05 π, 1.25 π}, PlotStyle -&gt; {Blue, Thick}]; figureCartesian = ContourPlot[(Sqrt[x^2 + y^2])^3 == x y, {x, -0.4, 0.4}, {y, -0.4, 0.4}, ContourStyle -&gt; {Green, Dashed}]; GraphicsGrid[{{figurePolar, figureCartesian}}] </code></pre> <p><a href="https://i.stack.imgur.com/ez5CK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ez5CK.png" alt="same function in polar and Cartesian coordinate"></a> The right one is in Cartesian coordinates; it is correct since $x y \geq 0$. The extra lobes in the polar (left) figure seem to be caused by Mathematica's use of negative $r$, which is against the mathematical definition. Any thoughts?</p>
Greg Hurst
4,346
<p>You can always impose this constraint with the option <a href="http://reference.wolfram.com/language/ref/RegionFunction.html" rel="nofollow noreferrer"><code>RegionFunction</code></a>:</p> <pre><code>PolarPlot[Sin[θ] Cos[θ], {θ, 0, 2π}] </code></pre> <p><a href="https://i.stack.imgur.com/I44HD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I44HD.png" alt="enter image description here"></a></p> <pre><code>PolarPlot[Sin[θ] Cos[θ], {θ, 0, 2π}, RegionFunction -&gt; Function[{x, y, θ, r}, r &gt; 0]] </code></pre> <p><a href="https://i.stack.imgur.com/myGot.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/myGot.png" alt="enter image description here"></a></p>
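What the default behaviour does is trace points with $r<0$ through the origin; the `RegionFunction` constraint masks exactly those points. A Python-style check of where $r=\sin\theta\cos\theta$ actually goes negative (these $\theta$-ranges produce the extra lobes; grid size is my choice):

```python
import math

def r(theta):
    return math.sin(theta) * math.cos(theta)   # = sin(2θ)/2

def in_negative_band(theta):
    # sin(2θ) < 0 exactly on (π/2, π) ∪ (3π/2, 2π)
    return math.pi / 2 < theta < math.pi or 3 * math.pi / 2 < theta < 2 * math.pi

# Sample θ over (0, 2π) and confirm the sign pattern, skipping the zeros
# at multiples of π/2 where floating-point signs are unreliable.
for k in range(1, 400):
    theta = 2 * math.pi * k / 400
    if abs(r(theta)) > 1e-9:
        assert (r(theta) < 0) == in_negative_band(theta)
```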
349,317
<p>It is well-known fact that integral Dehn surgeries on <span class="math-container">$3$</span>-sphere <span class="math-container">$S^3$</span> are viewed as the result on the boundary of attaching <span class="math-container">$2$</span>-handles <span class="math-container">$B^2 \times B^2$</span> to the <span class="math-container">$4$</span>-ball <span class="math-container">$B^4$</span>.</p> <p>Is there an analogue of rational surgeries relating handle attachment of rational framing? If not, which problem occurs?</p>
Lisa Piccirillo
113,696
<p>To attach a 4-dimensional 2-handle to the 4-ball, one requires an attaching region in <span class="math-container">$S^3=\partial B^4$</span> and a map from the attaching region of the handle (which has a natural parametrization as <span class="math-container">$S^1\times D^2\subset \partial(D^2\times D^2)$</span>) to the attaching region in <span class="math-container">$S^3$</span>. The attaching region in <span class="math-container">$S^3$</span> is determined by specifying a knot <span class="math-container">$K\subset S^3$</span> (and then convention dictates that the attaching region <span class="math-container">$\nu(K)\cong S^1\times D^2$</span> is parametrized by identifying the Seifert longitude <span class="math-container">$\lambda$</span> for <span class="math-container">$K$</span> with <span class="math-container">$S^1\times\{pt\}$</span>). Thus the handle may be attached via any orientation-reversing homeomorphism from the <span class="math-container">$S^1\times D^2$</span> in the boundary of the handle to the <span class="math-container">$S^1\times D^2$</span> neighborhood of <span class="math-container">$K$</span>. There are <em>only</em> an integer's worth of such maps up to isotopy (see e.g. Rolfsen <em>Knots and Links</em> 2D4 and 2E5); in particular <span class="math-container">$S^1\times \{pt\}$</span> has to be mapped to <span class="math-container">$\lambda+n\mu$</span> and <span class="math-container">$\{pt\}\times \partial D^2$</span> has to be mapped to <span class="math-container">$\mu$</span>, where <span class="math-container">$\mu$</span> denotes a meridian of <span class="math-container">$K$</span>. </p> <p>The resulting boundary after the handle attachment should be thought of as (the bits of the boundary of the handle that didn't get stuck to anything)<span class="math-container">$\cup$</span>(the bits of the boundary of <span class="math-container">$S^3$</span> that didn't get something stuck to them). 
That's <span class="math-container">$(D^2\times S^1) \cup (S^3\smallsetminus\mathring{\nu(K))}$</span>, so the boundary is some Dehn surgery on <span class="math-container">$K$</span>. And we can see which; we had to send <span class="math-container">$\partial D^2\times \{pt\}$</span> to <span class="math-container">$\lambda+n\mu$</span>, so the only surgeries we can obtain are integral. </p>
349,317
<p>It is well-known fact that integral Dehn surgeries on <span class="math-container">$3$</span>-sphere <span class="math-container">$S^3$</span> are viewed as the result on the boundary of attaching <span class="math-container">$2$</span>-handles <span class="math-container">$B^2 \times B^2$</span> to the <span class="math-container">$4$</span>-ball <span class="math-container">$B^4$</span>.</p> <p>Is there an analogue of rational surgeries relating handle attachment of rational framing? If not, which problem occurs?</p>
Marco Golla
13,119
<p>As Lisa points out, 2-handle attachments correspond exactly to integral surgeries. However, a general Dehn surgery corresponds to a <em>sequence</em> of integral surgeries, and hence to <em>multiple</em> 2-handle attachments.</p> <p>This is a bit hard to do without pictures, so I'll just refer you to Section 5.3 in Gompf and Stipsicz's <em>4-manifolds and Kirby calculus</em>, where they explain how to use slam dunks (Figure 5.30) to convert a rational surgery into a sequence of integral surgeries. (Well, technically they do it for lens spaces, in Exercise 5.39, but the idea is completely general.)</p>
1,171,911
<p><img src="https://i.stack.imgur.com/3q5iO.png" alt="Taken from khan academy "> Hi, so this question is taken straight from the Khan Academy help exercises. I know how to do it dynamically, meaning using the determinant and the adjugate; what I was trying to do was use the Gauss bla bla way with the help of RREF, but I somehow never managed to find the inverse. My second question would be: is there any way that I can find out whether or not the matrix is invertible without finding the determinant, I mean also using the Gauss bla bla way? I use the words bla bla because I don't know what it is actually called :p</p>
RE60K
67,609
<p>bla bla bla bla : $$|{\rm D}|=2$$ blaaa blabla blabla: $${\rm adj\; A}=\left[\begin{matrix}-1&amp;2&amp;1\\0&amp;0&amp;2\\1&amp;0&amp;-1\end{matrix}\right]$$ blah blehblaqa bla: $${\rm A}^{-1}=\left[\begin{matrix}-1/2&amp;1&amp;1/2\\0&amp;0&amp;1\\1/2&amp;0&amp;-1/2\end{matrix}\right]$$</p>
4,469,733
<p>When randomly selecting a kitten for adoption, there is a <span class="math-container">$23 \%$</span> chance of getting a black kitten, a <span class="math-container">$50 \%$</span> chance of getting a tabby kitten, a <span class="math-container">$7 \%$</span> chance of getting a calico kitten, and a <span class="math-container">$20 \%$</span> chance of getting a ginger kitten.</p> <p>Elisa asks the manager to randomly select two kittens. What is the probability that Elisa gets a black kitten or a tabby kitten?</p> <p>My try:</p> <p>The probability that she gets either black or tabby is one minus probability that she gets calico and ginger kitten, so the required answer is <span class="math-container">$1-0.07 \times 0.2=0.986$</span></p> <p>But the answer is <span class="math-container">$0.73$</span>?</p>
Clement C.
75,808
<p>Here is a quick-and-dirty approach:</p> <ol> <li><p>Show that <span class="math-container">$\sup_{t\geq 2} \frac{\log^2 t}{\sqrt{t}} = \frac{16}{e^2}$</span> (achieved at <span class="math-container">$t=e^4$</span>). That can easily be done by differentiating the function <span class="math-container">$t\mapsto \frac{\log^2 t}{\sqrt{t}}$</span>.</p> </li> <li><p>Compute <span class="math-container">$\int_2^\infty \frac{dt}{\sqrt{t}(t-1)} = 2\operatorname{arcsinh} 1$</span></p> </li> <li><p>Observe that <span class="math-container">$\frac{32}{e^2}\operatorname{arcsinh} 1 &lt; 4$</span>.</p> </li> </ol>
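Each of the three steps can be verified numerically with pure-stdlib Python (grid sizes are arbitrary choices of mine):

```python
import math

# 1. sup of log(t)^2 / sqrt(t) on [2, ∞) is 16/e^2, attained at t = e^4.
g = lambda t: math.log(t) ** 2 / math.sqrt(t)
assert math.isclose(g(math.e ** 4), 16 / math.e ** 2)
assert all(g(2 + 0.01 * k) <= 16 / math.e ** 2 + 1e-12 for k in range(100000))

# 2. Substituting u = √t in ∫_2^∞ dt/(√t (t-1)) gives 2∫_{√2}^∞ du/(u²-1)
#    = log((√2+1)/(√2-1)) = 2 log(1+√2) = 2 arcsinh(1).
assert math.isclose(math.log((math.sqrt(2) + 1) / (math.sqrt(2) - 1)),
                    2 * math.asinh(1))

# 3. The product of the two bounds is indeed below 4.
assert (32 / math.e ** 2) * math.asinh(1) < 4
```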
1,053,065
<p>I have a function called $P(t)$ that is the number of the population at time $t$, with $t$ in days.</p> <p>We know the growth rate is $P'(t) = 2t + 6$</p> <p>We also know that $P(0) = 100$. How many days till the population doubles?</p> <p>edit: $P(t) = t^2 + 6t + 100$ edit: $t^2 + 6t + 100 = 200$ edit: $t^2 + 6t - 100 = 0$</p>
lhf
589
<p>The divisors of $n$ occur in pairs $(x,n/x)$. This implies that there is a divisor at most $\sqrt n$. Therefore $d(n)\le 2\sqrt{n}$. Now, $2\sqrt{n}&lt; n/2$ if $n&gt;16$. The case $6 &lt; n\le 16$ is settled by inspection.</p>
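The bound is easy to confirm by brute force (the range checked is my arbitrary choice; $d(n)$ counts all positive divisors):

```python
import math

def d(n):
    return sum(1 for k in range(1, n + 1) if n % k == 0)

for n in range(1, 401):
    assert d(n) <= 2 * math.sqrt(n)    # divisors pair up as (x, n/x)
    if n > 16:
        assert d(n) < n / 2            # since 2√n < n/2 once n > 16
```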
1,210,285
<p>Let there be a given function $f \in C([0,1])$, $f(x)&gt;0$; $x\in [0,1]$. Prove </p> <p>$$\lim_{n\to\infty} \sqrt[n]{f\left({1\over n}\right)f\left({2\over n}\right)\cdots f\left({n\over n}\right)}=e^{\int_0^1 \log f(x) \, dx} $$</p> <p>All the questions before this required solving a definite integral without the Newton–Leibniz formula; then this came up. Can anyone provide help?</p>
kobe
190,421
<p>Your limit is the same as</p> <p>$$\exp\{\lim_{n\to \infty} \frac{1}{n}\left(\log f(1/n) +\log f(2/n) + \cdots + \log f(n/n)\right)\}$$</p> <p>and the limit inside converges to $\int_0^1 \log f(x)\, dx$, since the sum inside is a sequence of Riemann sums of the continuous function $\log f(x)$ over $[0,1]$.</p>
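A numeric illustration with the test function $f(x)=1+x$ (my own choice), where $\int_0^1\log(1+x)\,dx = 2\log 2 - 1$, so the limit is $e^{2\log 2-1}=4/e$:

```python
import math

n = 100_000
f = lambda x: 1 + x

# The n-th root of the product equals exp of the Riemann sum of log f.
riemann_sum = sum(math.log(f(k / n)) for k in range(1, n + 1)) / n
geometric_mean = math.exp(riemann_sum)

limit = math.exp(2 * math.log(2) - 1)    # = 4/e
assert math.isclose(geometric_mean, limit, rel_tol=1e-4)
```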
3,585,271
<p>Let <span class="math-container">$V$</span> be an affine variety in <span class="math-container">$K^n$</span> with ideal <span class="math-container">$I=I(V)$</span>, where <span class="math-container">$K$</span> is an algebraically closed field. Let <span class="math-container">$V'$</span> be the variety with defining ideal <span class="math-container">$Radical(I)$</span>. Usually <span class="math-container">$K[x_1,\ldots,x_n]/I$</span> and <span class="math-container">$K[x_1,\ldots,x_n]/Radical(I)$</span> have different Hilbert series. Does <span class="math-container">$V'$</span> consist of several components of <span class="math-container">$V$</span>? Which part of <span class="math-container">$V$</span> is not in <span class="math-container">$V'$</span>? Thank you very much.</p>
Ricardo Buring
23,180
<p>If <span class="math-container">$V$</span> is an affine variety then <span class="math-container">$I(V)$</span> is radical because <span class="math-container">$f(x)^n = 0$</span> implies <span class="math-container">$f(x) = 0$</span> over a field.</p> <p>In particular (which is probably what you intended to ask), <span class="math-container">$I(V)$</span> is not the ideal generated by <em>just any</em> defining polynomials for <span class="math-container">$V$</span>; in general you have to take the radical. For example, if <span class="math-container">$V$</span> is <span class="math-container">$x^2=0$</span> then <span class="math-container">$I(V) = \langle x \rangle$</span>.</p> <p>In this setting, <span class="math-container">$x^2=0$</span> and <span class="math-container">$x=0$</span> are the same as varieties. If you want to see a difference, consider them as schemes.</p>
2,514,418
<p>Let $p$ be a prime and define $A$ = sum of all $1 \leq a &lt; p$ such that $a$ is a quadratic residue modulo $p$, and define $B$ = sum of all $1 \leq b &lt; p$ such that $b$ is a non-residue modulo $p$.</p> <p>Compute $A \pmod{p}$ and $B \pmod{p}$.</p> <p>So I get $A = B \equiv 0 \pmod{p}$, how would I verify this for all primes $p$? I feel like I'm missing something.</p>
James Garrett
457,432
<p>Your proof is wrong; $A$ has to be <em>any</em> square matrix. Let $\lambda \neq 0$ be an eigenvalue of $A$; by definition $$Av=\lambda v,$$ where $v \neq \mathbf{0}$ is a vector. Multiplying both sides of the equation by $A^{-1}$ yields $$A^{-1}Av=A^{-1}\lambda v \iff v=A^{-1}\lambda v \iff \lambda^{-1}v=A^{-1}v.$$ Hence $\lambda^{-1}$ is an eigenvalue of $A^{-1}$. </p>
2,514,418
<p>Let $p$ be a prime and define $A$ = sum of all $1 \leq a &lt; p$ such that $a$ is a quadratic residue modulo $p$, and define $B$ = sum of all $1 \leq b &lt; p$ such that $b$ is a non-residue modulo $p$.</p> <p>Compute $A \pmod{p}$ and $B \pmod{p}$.</p> <p>So I get $A = B \equiv 0 \pmod{p}$, how would I verify this for all primes $p$? I feel like I'm missing something.</p>
Mathemagical
446,771
<p>Since $det(A) \neq 0$, you know all eigenvalues are nonzero since the determinant is the product of the eigenvalues. </p> <p>Now if $\lambda$ is an eigenvalue with eigenvector $v$, then $Av=\lambda v$. Left-multiplying by $A^{-1}$, you have $v=\lambda A^{-1} v$ or $\frac{1}{\lambda}v= A^{-1} v$ and you are done. </p>
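A concrete 2×2 check (the matrix is an arbitrary choice of mine; eigenvalues are computed via trace and determinant to stay library-free):

```python
import math

A = [[4, 1],
     [2, 3]]     # trace 7, det 10  =>  eigenvalues 5 and 2

detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]       # nonzero, so A is invertible
Ainv = [[A[1][1] / detA, -A[0][1] / detA],
        [-A[1][0] / detA, A[0][0] / detA]]

def eigenvalues(M):
    """Roots of λ² - tr(M)λ + det(M), sorted ascending."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

lam = eigenvalues(A)          # [2.0, 5.0]
lam_inv = eigenvalues(Ainv)   # ~[0.2, 0.5]

# Eigenvalues of A^{-1} are exactly the reciprocals of those of A.
assert all(math.isclose(1 / a, b)
           for a, b in zip(lam, sorted(lam_inv, reverse=True)))
```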
1,384,752
<p>I ran across a problem which has stumped me involving existential quantifiers. Let U, our universe, be the set of all people. Let S(x) be the predicate "x is a student" and I(x) be the predicate "x is intelligent". I want to write the statement "Some students are intelligent" in the correct logical form. I can see 2 possible ways to write it:</p> <p>1) There exists an x in U such that ( S(x) AND I(x) )</p> <p>2) There exists an x in U such that ( S(x) implies I(x) )</p> <p>If I draw a Venn diagram, it seems like option 1 must be true, but from this same diagram (where the sets where S(x) is true and I(x) is true intersect), it is also true that there is an x such that if x is in the set where S(x) is true, then x is in the set where I(x) is true. This makes me wonder whether these two statements are logically equivalent; I have a feeling they are not.</p> <p>Thanks, Matt</p>
Ashwin Ganesan
157,927
<p>The statement "some students are intelligent" can be rephrased as "there is at least one student who is intelligent". So your logical statement 1) is correct. However, 2) is not correct and 2) is not logically equivalent to 1). </p> <p>Recall that "p implies q" is false exactly when p is true but q is still false. But if p is false, then regardless of the truth value of q, "p implies q" is true. Thus, if S(x) is false, then "S(x) implies I(x)" is true. Thus, if there is an x in U such that S(x) is false, then "there is an x in U such that (S(x) implies I(x))" becomes true.</p>
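The non-equivalence is easy to exhibit on a tiny universe (the names and truth values are my own invention):

```python
# Each person: (name, is_student, is_intelligent)
U = [("bob", True, False),      # a student who is not intelligent
     ("carol", False, False)]   # not a student at all

exists_and = any(s and i for _, s, i in U)            # ∃x (S(x) ∧ I(x))
exists_implies = any((not s) or i for _, s, i in U)   # ∃x (S(x) → I(x))

# No intelligent student exists, yet (2) is true vacuously via carol:
assert exists_and is False
assert exists_implies is True
```

As soon as the universe contains a single non-student, version (2) becomes true regardless of whether any intelligent students exist, which is why it fails to capture "some students are intelligent".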
3,172,693
<p>Can anybody help me with this equation? I can't find a way to factorize for finding a value of <span class="math-container">$d$</span> as a function of <span class="math-container">$a$</span>:</p> <p><span class="math-container">$$d^3 - 2\cdot d^2\cdot a^2 + d\cdot a^4 - a^2 = 0$$</span></p> <p>Another form:</p> <p><span class="math-container">$$d=\frac{a^2}{(a^2-d)^2}$$</span></p> <p>Maybe this equation has no solution. I don't know. That equation is out of some calculus involving the golden number.</p> <p>Thx for your help.</p>
Julian Mejia
452,658
<p>If your denominator has a factor of the form <span class="math-container">$(as+b)^n$</span> then to write partial fractions you should write all the powers up to <span class="math-container">$n$</span>, i.e. <span class="math-container">$\frac{A}{as+b}+\frac{B}{(as+b)^2}+\cdots+\frac{Z}{(as+b)^n}$</span>. In the case you showed, you have that <span class="math-container">$s^2$</span> is a factor of the denominator and that's why in partial fractions you should write the terms <span class="math-container">$\frac{A}{s}+\frac{B}{s^2}$</span>.</p>
3,172,693
<p>Can anybody help me with this equation? I can't find a way to factorize for finding a value of <span class="math-container">$d$</span> as a function of <span class="math-container">$a$</span>:</p> <p><span class="math-container">$$d^3 - 2\cdot d^2\cdot a^2 + d\cdot a^4 - a^2 = 0$$</span></p> <p>Another form:</p> <p><span class="math-container">$$d=\frac{a^2}{(a^2-d)^2}$$</span></p> <p>Maybe this equation has no solution. I don't know. That equation is out of some calculus involving the golden number.</p> <p>Thx for your help.</p>
David
119,775
<p>The general result is the following.</p> <blockquote> <p>Suppose that the degree of <span class="math-container">$p(s)$</span> is less than the degree of <span class="math-container">$q(s)$</span>, and that <span class="math-container">$q(s)=q_1(s)q_2(s)$</span> where <span class="math-container">$q_1(s)$</span> and <span class="math-container">$q_2(s)$</span> have no common factor. Then there exist polynomials <span class="math-container">$r_1(s)$</span> and <span class="math-container">$r_2(s)$</span>, with degrees less than <span class="math-container">$q_1(s)$</span> and <span class="math-container">$q_2(s)$</span> respectively, such that <span class="math-container">$$\frac{p(s)}{q(s)}=\frac{r_1(s)}{q_1(s)}+\frac{r_2(s)}{q_2(s)}\ .$$</span></p> </blockquote> <p>In your case the denominator factorises as <span class="math-container">$s^2$</span> times <span class="math-container">$s+2$</span> so you have <span class="math-container">$$\frac1{s^2(s+2)}=\frac{As+B}{s^2}+\frac{C}{s+2}\ .$$</span> It is then usually more convenient (though not obligatory) to split up the first fraction, which gives your answer.</p> <p>Note that you cannot, for the purposes of the above result, regard the denominator as <span class="math-container">$s$</span> times <span class="math-container">$s(s+2)$</span>, because these polynomials do have a common factor.</p>
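<p>As a numerical sanity check for this example (the coefficients below are worked out here, not stated in the answer): solving for the constants gives $1/(s^2(s+2)) = -\tfrac1{4s}+\tfrac1{2s^2}+\tfrac1{4(s+2)}$, which can be verified at sample points:</p>

```python
# Verify 1/(s^2 (s+2)) == -1/(4s) + 1/(2s^2) + 1/(4(s+2)) pointwise.
def lhs(s):
    return 1.0 / (s * s * (s + 2.0))

def rhs(s):
    return -1.0 / (4.0 * s) + 1.0 / (2.0 * s * s) + 1.0 / (4.0 * (s + 2.0))

# Sample points avoiding the poles s = 0 and s = -2.
max_err = max(abs(lhs(s) - rhs(s)) for s in [0.5, 1.0, 3.0, -0.7, 10.0])
```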
232,672
<p>Yesterday, I posted a question that was received in a different way than I intended it. I would like to ask it again by adding some context. </p> <p>In ZF one can prove $\not\exists x (\forall y (y\in x)).$ This statement can be read in many ways, such as (1) "there is no set of all sets" (2) "the class of all sets is proper (i.e. is not a set)" etc. and I believe that there is a substantial philosophical difference between (1) and (2). The former suggests that the existential quantifier refers to the actual existence of something intended in a platonic way, while the latter interprets $\exists$ as meaning "it is a set". So, in the second case, I would say that the existential quantifier is a way of singling out things that are sets from things that are not sets, rather than a way to claim actual existence of something. </p> <p>I am a set theorist and I always intended the statement above as (2) because I don't think existential quantification in set theory refers to actual existence. I suspect that also Zermelo intended existential quantifications as a way of singling out sets from things that are not sets, because in its original formulation he introduced "urelements" i.e. objects that are not sets but could be elements of a set. But I am interested in what is the most common interpretation among contemporary set theorists and I have the impression that my colleagues in set theory use (1) more often. </p> <p>So my question is: from the point of view of someone who believes that existential quantifiers in set theory refer to actual existence, does the statement above mean "the class of all sets does not exist"? Does this interpretation appear anywhere in the literature? </p> <p>Thank you in advance. </p>
Thomas Klimpel
20,781
<p>Even if a model of ZFC is given together with a collection of classes, only the sets should count as real. So if one were to modify the collection of classes without modifying the sets, it would still be the same model. The formula $\forall y (y\in X)$ defines a class $X$ which cannot be modified at will, but a formula using impredicative quantification over classes will not define such a fixed class.</p> <p>I feel this is similar to how a manifold in differential geometry can be embedded into a simpler but larger space. The simplest way to describe a manifold like the $n$-sphere might be its obvious embedding into $\mathbb R^{n+1}$, but the embedding is still arbitrary and unimportant. The simplest way to describe a manifold (like a projective space) might also be as equivalence classes of a simpler but larger space. This sort of external description is just as arbitrary and unimportant as the external description provided by an embedding. The models of ZFC provided by the model existence theorem are given as equivalence classes of simple syntactic terms, so this analogy still works.</p>
58,631
<p>I am (partly as an exercise to understand <em>Mathematica</em>) trying to model the response of a damped simple harmonic oscillator to a sinusoidal driving force. I can solve the differential equation with some arbitrarily chosen boundary conditions, and get a nice graph;</p> <pre><code>params = {ν1 -&gt; 1.0, ω1 -&gt; 10.0, F -&gt; 4.0}; system = {D[x1[t], {t, 2}] == -ν1 D[x1[t], t] - ω1^2 x1[t] + F Cos[ω t], x1[0] == 1, x1'[0] == 0}; soln = DSolve[system /. params, x1[t], t][[1]][[1]]; Plot[x1[t] /. soln /. ω -&gt; 8, {t, 1, 20}, Frame -&gt; True, Axes -&gt; False] </code></pre> <p><img src="https://i.stack.imgur.com/tRS4H.png" alt="SHM transients"></p> <p>But I don't care about the transients - I just care about the steady state situation. I tried using Limit to extract this;</p> <pre><code>amp = Table[Max[Limit[x1[t] /. soln, t -&gt; ∞]], {ω, 1, 20, 1}] ListPlot[amp] </code></pre> <p><img src="https://i.stack.imgur.com/df9l9.png" alt="response"></p> <p>Looks a bit peculiar to me. Also, this is incredibly slow, and doesn't work symbolically.</p> <p>I thought I could do something along the lines of forcing it to take as a solution of the DE $a Sin(\omega t + \phi)$;</p> <pre><code>params = {ν1 -&gt; 40.0, ω1 -&gt; 10.0, F -&gt; 10.0}; x1 = a Sin[ω t + ϕ]; system = D[x1, {t, 2}] == -ν1 D[x1, t] - ω1^2 x1 + F Cos[ω t] amp = Solve[system /. params, a] phase = Solve[D[a /. amp, t] == 0, ϕ][[1]][[1]] </code></pre> <p>but this just turns into a mess and doesn't give the right result either.</p> <p>Is there a canonical way to tackle this sort of problem? I don't really know what I'm doing with Mathematica yet, so any explanations would be gratefully received.</p>
Dr. Wolfgang Hintze
16,361
<p>It is much easier and more general.</p> <p>Your equation</p> <pre><code>system = {D[x1[t], {t, 2}] == -ν1 D[x1[t], t] - ω1^2 x1[t] + F Cos[ω t]}; </code></pre> <p>without specifying either the initial conditions or the parameters (except of course <code>ν1 &gt; 0, ω1^2 &gt; 0</code>) is solved by</p> <pre><code>xx[t_] = x1[t] /. DSolve[system, x1[t], t][[1]]; </code></pre> <p>The behaviour of the solution at large <code>t</code> is obtained simply by letting all exponentially decaying terms die out.</p> <p>Hence:</p> <pre><code>y[t_] = xx[t] /. Exp[t_] -&gt; 0 </code></pre> <p>$-\frac{4 F \left(-\omega ^2 \cos(\omega t)+\omega_1^2 \cos(\omega t)+\nu_1 \omega \sin(\omega t)\right)}{\left(\nu_1^2+2 \omega ^2-2 \omega_1^2+\nu_1 \sqrt{\nu_1^2-4 \omega_1^2}\right) \left(-\nu_1^2-2 \omega ^2+2 \omega_1^2+\nu_1 \sqrt{\nu_1^2-4 \omega_1^2}\right)}$</p> <p>Hope this helps, Wolfgang</p>
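<p>A cross-check added here (my simplification, not in the original answer): the denominator above multiplies out to $-4\left[(\omega_1^2-\omega^2)^2+\nu_1^2\omega^2\right]$, so the expression reduces to the textbook steady-state response $x(t)=F\left[(\omega_1^2-\omega^2)\cos\omega t+\nu_1\omega\sin\omega t\right]/\left[(\omega_1^2-\omega^2)^2+\nu_1^2\omega^2\right]$, which satisfies the driven ODE exactly:</p>

```python
import math

# Steady-state solution of x'' + nu1 x' + w1^2 x = F cos(w t)
nu1, w1, F, w = 1.0, 10.0, 4.0, 8.0   # parameter values from the question
D = (w1**2 - w**2) ** 2 + (nu1 * w) ** 2
A = F * (w1**2 - w**2) / D   # cosine amplitude
B = F * nu1 * w / D          # sine amplitude

def x(t):
    return A * math.cos(w * t) + B * math.sin(w * t)

def xp(t):
    return -A * w * math.sin(w * t) + B * w * math.cos(w * t)

def xpp(t):
    return -w**2 * x(t)

# Residual of the ODE at several times; it should vanish identically.
residual = max(abs(xpp(t) + nu1 * xp(t) + w1**2 * x(t) - F * math.cos(w * t))
               for t in [0.0, 0.3, 1.7, 5.2])
```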
279,985
<p>How can I convert a Beta Distribution to a Gamma Distribution? Strictly speaking, I want to transform parameters of a Beta Distribution to parameters of the corresponding Gamma Distribution. I have mean value, alpha and beta parameters of a Beta Distribution and I want to transform them to those of a Gamma Distribution.</p>
Did
6,179
<p>Let $X_a$ and $X_b$ denote independent gamma random variables with respective parameters $(a,c)$ and $(b,c)$, for some nonzero $c$. Then $\dfrac{X_a}{X_a+X_b}$ is a beta random variable with parameter $(a,b)$.</p>
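<p>This relationship can be checked by simulation (an added illustration, using only the standard library; note <code>random.gammavariate(shape, scale)</code> takes the shape first): the ratio should have mean $a/(a+b)$, the mean of a Beta$(a,b)$ variable.</p>

```python
import random

random.seed(0)
a, b, c = 2.0, 3.0, 1.5   # shape parameters a, b and a common scale c
N = 20000
samples = []
for _ in range(N):
    xa = random.gammavariate(a, c)
    xb = random.gammavariate(b, c)
    samples.append(xa / (xa + xb))

mean = sum(samples) / N   # should be close to a/(a+b) = 0.4
```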
279,985
<p>How can I convert a Beta Distribution to a Gamma Distribution? Strictly speaking, I want to transform parameters of a Beta Distribution to parameters of the corresponding Gamma Distribution. I have mean value, alpha and beta parameters of a Beta Distribution and I want to transform them to those of a Gamma Distribution.</p>
LALIT SALUNKHE
644,309
<p>You can make a transformation U = X + Y and V = <code>X/(X+Y)</code>, where X and Y have gamma distributions with parameters <code>alpha</code> and <code>beta</code> respectively. Of these two, U will have a Gamma distribution with parameter <code>alpha + beta</code> and V will have a Beta distribution of the first kind with parameters <code>alpha, beta</code>.</p>
2,430,482
<p>I'm struggling to find the maximum of this function $f:\mathbb{R}^n\times\mathbb{R}^n \rightarrow \mathbb{R}$</p> <p>$$ f(x,y) = \frac{n+1}{2} \sum_{i=1}^n x_i\,y_i - \sum_{i=1}^n x_i \sum_{i=1}^n y_i,$$</p> <p>where $x_i,y_i\in[0,1]$ for $i=1,...,n$. It reminded me of Chebyshev's sum inequality, but it didn't help much since I <em>can't</em> sort the variables $x$ and $y$. Any help is welcome.</p>
zwim
399,263
<p>Let's have $\bar x=\frac 1n\sum\limits_{i=1}^n x_i$ and $\bar y=\frac 1n\sum\limits_{i=1}^n y_i$.</p> <p>$\sum\limits_{i=1}^n x_iy_i=\sum\limits_{i=1}^n (x_i-\bar x)(y_i-\bar y)+\overbrace{\sum\limits_{i=1}^n x_i\bar y}^{n\bar x\bar y}+\overbrace{\sum\limits_{i=1}^n \bar xy_i}^{n\bar x\bar y}-\overbrace{\sum\limits_{i=1}^n \bar x\bar y}^{n\bar x\bar y}=\sum\limits_{i=1}^n (x_i-\bar x)(y_i-\bar y)+n\bar x\bar y$</p> <p>$\sum\limits_{i=1}^n x_i\sum\limits_{i=1}^n y_i=n^2\bar x\bar y$</p> <p>So $f(x,y)=\frac{n+1}2\sum\limits_{i=1}^n (x_i-\bar x)(y_i-\bar y)+\bar x\bar y\overbrace{(\frac{n(n+1)}2-n^2)}^{\frac{-n(n-1)}2}$</p> <p>When fixing the averages, this is maximized when $\sum\limits_{i=1}^n (x_i-\bar x)(y_i-\bar y)$ is maximized. </p> <p>By Cauchy-Schwarz this sum attains its upper bound when $(y_i-\bar y)=\alpha(x_i-\bar x),\alpha&gt;0$, in which case it equals $\alpha\sum\limits_{i=1}^n (x_i-\bar x)^2$.</p> <p><em>Note: we can now apply the Chebyshev inequality, since we lose no generality by reordering the $x_i$, and the condition above reorders the $y_i$ in the same way.</em></p> <p>By convexity of $x^2$ this is maximized when $x_i=0$ or $x_i=1$ (both possible since some $x_i\le \bar x$ and some $x_i\ge \bar x$ by definition).</p> <p>So intuitively we fall back to kimchi lover's solution, where $\alpha=1$ and the $x_i,y_i$ take only the values $0,1$. But I'm not sure whether we missed something by fixing $\bar x,\bar y$ like this: if $n$ is large then constraining $\bar x$ to take rational values $\frac kn$ is quite lax, but for small $n$ this troubles me...</p>
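<p>One reassurance (an added check, not part of the original answer): since $f$ is linear in each coordinate separately, the maximum over the box is attained at a $0/1$ vertex, so for small $n$ the vertex maximum can be computed exhaustively and compared against random interior points:</p>

```python
import random
from itertools import product

n = 3

def f(x, y):
    return (n + 1) / 2 * sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y)

# Exhaustive maximum over all 0/1 vertices of [0,1]^n x [0,1]^n.
vertex_max = max(f(x, y)
                 for x in product([0, 1], repeat=n)
                 for y in product([0, 1], repeat=n))

# No random interior point should beat the best vertex.
random.seed(1)
interior_max = max(f([random.random() for _ in range(n)],
                     [random.random() for _ in range(n)])
                   for _ in range(5000))
```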
3,873,071
<p>This is my first post and I apologize in advance if I'm not using the right formatting/approach.</p> <p><strong>Problem</strong></p> <p>A coin, having probability <span class="math-container">$p$</span> of landing heads, is continually flipped until at least one head and one tail have been flipped.</p> <p>Find the expected number of flips needed.</p> <p>typical examples: “HT”, X = 2; “TTTTH”, X = 5.</p> <p><strong>Solution Begin</strong></p> <p>Denote X: # of flips needed. Y: outcome of 1st flip.</p> <p><span class="math-container">$$\operatorname E[X] = \operatorname E[X\mid Y = H]P(Y = H) + \operatorname E[X\mid Y = T]P(Y = T)$$</span></p> <p><span class="math-container">$$E[X\mid Y = H] = 1 + \operatorname E[\text{additional flips needed}] = 1 + \frac1{1-p}$$</span></p> <p><strong>Question</strong></p> <p>This is regarding <span class="math-container">$$1 + \frac1{1-p}$$</span></p> <p>I understand that <span class="math-container">$1$</span> is for the failed trial but why is the <span class="math-container">$1/(1-p)$</span> there? Given the conditional probability/expectation, I thought the denominator would be the <span class="math-container">$P(Y=H)$</span> which is <span class="math-container">$p.$</span> I just don't understand the overall reason for <span class="math-container">$1/(1-p).$</span> Could someone help me understand or point me in the right direction?</p>
Quanto
686,284
<p>Note</p> <p><span class="math-container">$$\frac2{x^8+1}=\frac1{x^4+1}\left( \frac1{x^4+\sqrt2x^2+1}+ \frac1{x^4-\sqrt2x^2+1}\right) $$</span> Then <span class="math-container">\begin{align} &amp;\int\frac{(x^4-1)\sqrt{x^4+1}}{x^8+1} \&gt; dx \\ =&amp; \frac12 \int\frac{\frac{x^4-1}{\sqrt{x^4+1}}dx}{x^4+\sqrt2x^2+1} +\frac12 \int\frac{\frac{x^4-1}{\sqrt{x^4+1}}dx}{x^4-\sqrt2x^2+1}\\ =&amp; \frac12\int\frac{d\sqrt{ x^2+\frac1{x^2}}}{x^2+\frac1{x^2}+\sqrt2} +\frac12 \int\frac{d\sqrt{ x^2+\frac1{x^2}}}{x^2+\frac1{x^2}-\sqrt2}\\ =&amp; \frac1{2\sqrt[4]2}\tan^{-1} \frac{\sqrt{ x^2+\frac1{x^2}} }{\sqrt[4]2 } - \frac1{2\sqrt[4]2}\coth^{-1} \frac{\sqrt{ x^2+\frac1{x^2}} }{\sqrt[4]2 }+C \end{align}</span></p>
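<p>A numerical check of the final antiderivative (added here, not part of the original answer): differentiating it at a sample point should recover the integrand. Here $\coth^{-1}z=\frac12\ln\frac{z+1}{z-1}$, which is valid since $\sqrt{x^2+1/x^2}\big/\sqrt[4]2\ge\sqrt[4]2&gt;1$ for all $x&gt;0$.</p>

```python
import math

def integrand(x):
    return (x**4 - 1) * math.sqrt(x**4 + 1) / (x**8 + 1)

def acoth(z):
    # inverse hyperbolic cotangent, valid for |z| > 1
    return 0.5 * math.log((z + 1) / (z - 1))

def antideriv(x):
    a = 2 ** 0.25
    u = math.sqrt(x**2 + 1 / x**2)
    return (math.atan(u / a) - acoth(u / a)) / (2 * a)

# Central-difference derivative of the antiderivative at x = 1.3.
x0, h = 1.3, 1e-6
deriv = (antideriv(x0 + h) - antideriv(x0 - h)) / (2 * h)
err = abs(deriv - integrand(x0))
```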
470,617
<ol> <li><p>Two competitors won $n$ votes each. How many ways are there to count the $2n$ votes, in a way that one competitor is always ahead of the other?</p></li> <li><p>One competitor won $a$ votes, and the other won $b$ votes. $a&gt;b$. How many ways are there to count the votes, in a way that the first competitor is always ahead of the other? (They can have the same amount of votes along the way)</p></li> </ol> <p>I know that the first question is the same as the number of different legal strings built of brackets, which is equal to the Catalan number; $\frac{1}{n+1}{2n\choose n}$, by the proof with the grid.</p> <p>I am unsure about how to go about solving the second problem.</p>
lab bhattacharjee
33,337
<p>From <a href="https://web.archive.org/web/20180211011715/http://mathforum.org/library/drmath/view/54058.html" rel="nofollow noreferrer">this</a> and <a href="https://mathworld.wolfram.com/TrigonometryAnglesPi7.html" rel="nofollow noreferrer">this</a>, <span class="math-container">$\sin7x=7t-56t^3+112t^5-64t^7$</span> where <span class="math-container">$t=\sin x$</span></p> <p>Now, if <span class="math-container">$\sin7x=0, 7x=n\pi, x=\frac{n\pi}7$</span> where <span class="math-container">$n=0,1,2,3,4,5,6$</span></p> <p>Clearly, <span class="math-container">$\sin\frac{r\pi}7$</span> are the roots of <span class="math-container">$7-56t^2+112t^4-64t^6=0$</span> where <span class="math-container">$r=1,2,3,4,5,6$</span></p> <p>As <span class="math-container">$\sin\frac{(7-r)\pi}7=\sin (\pi-\frac{r\pi}7)=\sin\frac{r\pi}7,$</span></p> <p><span class="math-container">$\sin^2\frac{r\pi}7$</span> are the roots of <span class="math-container">$7-56s+112s^2-64s^3=0$</span> where <span class="math-container">$r=1,2,4$</span></p> <p>Putting <span class="math-container">$y=\frac1s,$</span> <span class="math-container">$\displaystyle7-\frac{56}y+\frac{112}{y^2}-\frac{64}{y^3}=0$</span></p> <p><span class="math-container">$\displaystyle\implies 7y^3-56y^2+112y-64=0$</span></p> <p>Now, using <a href="https://en.wikipedia.org/wiki/Vieta%27s_formulas" rel="nofollow noreferrer">Vieta's Formula</a>, <span class="math-container">$\displaystyle \frac1{\sin^2\frac{\pi}7}+\frac1{\sin^2\frac{2\pi}7}+\frac1{\sin^2\frac{4\pi}7}=\frac{56}7=8$</span></p>
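<p>A quick numerical confirmation of the final identity (an added check, not part of the original answer):</p>

```python
import math

# By Vieta's formula the sum of 1/sin^2(r*pi/7) over r = 1, 2, 4
# equals 56/7 = 8 exactly.
total = sum(1 / math.sin(r * math.pi / 7) ** 2 for r in (1, 2, 4))
```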
42,957
<p>I am an "old" programmer used to <em>Fortran</em> and <em>Pascal</em>. I can't get rid of <code>For</code>, <code>Do</code> and <code>While</code> loops, but I know <em>Mathematica</em> can do things much faster!</p> <p>I am using the following code</p> <pre><code>SeedRandom[3] n = 10; v1 = Range[n]; v2 = RandomReal[250., n]; a = {}; Do[ Do[ AppendTo[a, (v2[[i]] - v2[[j]])/(v1[[i]] - v1[[j]]) ], {j, i - 1, 1, -1}], {i, n, 2, -1} ]; // Timing </code></pre> <p>If <code>n</code> is small, it runs fast enough, but for bigger <code>n</code> it slows down. I usually deal with <code>n &gt; 600</code>.</p> <p>How can the code be made faster?</p>
lalmei
9,831
<p>Here is an example using Reap and Sow.</p> <p>It is a bit overkill in this case, but if you need to append only some of the values it should go much faster with Reap and Sow, or if you need to perform other functions while appending. </p> <pre><code>Last@Reap[ Scan[Function[{x}, Sow[(v2[[x[[1]]]] - v2[[x[[2]]]])/(v1[[x[[1]]]] -v1[[x[[2]]]])]; ], Table[{i, j}, {i, n, 2, -1}, {j, i - 1, 1, -1}], {2}]]//Timing </code></pre> <blockquote> <p>{0.00062, {{124.857, 17.9221, 27.095, 23.4391, 36.1562, 31.8519, 19.882, 27.9697, 11.8103, -89.0129, -21.7861, -10.3669, 13.9809, 13.2509, 2.38619, 14.1286, -2.3205, 45.4407, 28.956, 48.3122, 38.8168, 20.666, 31.3189, 10.0641, 12.4714, 49.748, 36.6089, 14.4723, 28.4945, 4.16804, 87.0246, 48.6777, 15.1393, 32.5003, 2.50737, 10.3307, -20.8033, 14.3255, -18.6219, -51.9374, 16.3229, -28.2728, 84.5831, -16.4406, -117.464}}}</p> </blockquote>
42,957
<p>I am an "old" programmer used to <em>Fortran</em> and <em>Pascal</em>. I can't get rid of <code>For</code>, <code>Do</code> and <code>While</code> loops, but I know <em>Mathematica</em> can do things much faster!</p> <p>I am using the following code</p> <pre><code>SeedRandom[3] n = 10; v1 = Range[n]; v2 = RandomReal[250., n]; a = {}; Do[ Do[ AppendTo[a, (v2[[i]] - v2[[j]])/(v1[[i]] - v1[[j]]) ], {j, i - 1, 1, -1}], {i, n, 2, -1} ]; // Timing </code></pre> <p>If <code>n</code> is small, it runs fast enough, but for bigger <code>n</code> it slows down. I usually deal with <code>n &gt; 600</code>.</p> <p>How can the code be made faster?</p>
Jacob Akkerboom
4,330
<p>As mentioned, the main issue is using <code>AppendTo</code> in loops like this. In this answer, I want to show that using <code>Compile</code> can make procedural code very fast. Below is a comparison of timings of all the answers, as well as the OPs code.</p> <p>Here is a slight modification of the code by the OP. I have modified it because I wanted to focus on <code>AppendTo</code>, so I have cleaned it up a bit.</p> <pre><code>questionCode := ( qCRes = {}; Do[AppendTo[qCRes, (v[[i]] - v[[j]])/(i - j)], {i, n, 2, -1}, {j, i - 1, 1, -1}] ); </code></pre> <p><strong>Alternatives</strong></p> <p>My code uses a compiled function, where code that is similar to that of the OP is compiled to C. If you do not have a C compiler, simply remove <code>CompilationTarget-&gt;"C"</code>. My code also uses undocumented functions, notably Internal'Bag. This is basically an implementation of a linked list structure, which is especially useful inside <code>Compile</code>.</p> <pre><code>jacobCfu = Compile[ {{v, _Real, 1}}, Block[ {result, n}, result = Internal`Bag[]; n = Length@v; Do[Internal`StuffBag[result, (v[[i]] - v[[j]])/(i - j)], {i, n, 2, -1}, {j, i - 1, 1, -1}]; Internal`BagPart[result, All]], CompilationTarget -&gt; "C" ]; jacobCode := ( jacobRes = jacobCfu[v] ); </code></pre> <p>Other answerers' code. 
This includes a slightly modified version of Simon Woods (SW) code I made (<code>sWCodeE</code>)</p> <pre><code>rasherCode1 := ( rasherRes1 = Table[(v2[[i]] - v2[[j]])/(v1[[i]] - v1[[j]]), {i, n, 2, -1}, {j, i - 1, 1, -1}] // Flatten ); rasherCode2 := ( rasherRes2 = Block[ {s, i1, i2}, s = Subsets[Range[n, 1, -1], {2}]; {i1, i2} = {s[[All, 1]], s[[All, 2]]}; Divide[Subtract[v2[[i1]], v2[[i2]]], i1 - i2] ] ); wRCode := ( wRRes = Flatten[Table[(v2[[i]] - v2[[j]])/(v1[[i]] - v1[[j]]), {i, n, 2, -1}, {j, i - 1, 1, -1}]] ); sWCode1 := (sWRes1 = With[{f = Subtract @@@ Subsets[Reverse@#, {2}] &amp;}, f[v2]/f[v1]]); sWCode2 := ( sWRes2 = Block[ {ii, jj}, ii = Join @@ Table[ConstantArray[i, i - 1], {i, n, 2, -1}]; jj = Join @@ Table[Range[j, 1, -1], {j, n - 1, 1, -1}]; Divide[Subtract[v2[[ii]], v2[[jj]]], Subtract[v1[[ii]], v1[[jj]]]] ] ); sWCodeE := ( sWResE = Block[ {ii, jj}, ii = Join @@ Table[ConstantArray[i, i - 1], {i, n, 2, -1}]; jj = Join @@ Table[Range[j, 1, -1], {j, n - 1, 1, -1}]; Divide[Subtract[v2[[ii]], v2[[jj]]], Subtract[ii, jj]] ] ); lalmeiCode := ( (lalmeiRes = First@Last@ Reap[Scan[ Function[{x}, Sow[(v2[[x[[1]]]] - v2[[x[[2]]]])/(v1[[x[[1]]]] - v1[[x[[2]]]])];], Table[{i, j}, {i, n, 2, -1}, {j, i - 1, 1, -1}], {2}]]) ) </code></pre> <p><strong>Timing comparison functions</strong></p> <pre><code>timing = Function[Null, First@Timing@#, HoldAll]; timingAndName = Function[Null, {timing@#, ToString@Unevaluated@#}, HoldAll]; timingsAndNamesTable = Function[Null, TableForm[timingAndName /@ Unevaluated[{##}]], HoldAll]; formattedTimingsAndComparison = Function[Null, Block[{timingTable}, timingTable = timingsAndNamesTable@##; Column[ { StringForm["Comparison for n = ``", n] , timingTable , If[ SameQ @@ resultNames , "results are equal" , Row[{"results are ", Style["not ", Bold], "equal"}] ] } , Spacings -&gt; 2 ] ] , HoldAll ]; </code></pre> <p><strong>Initialisation</strong> </p> <pre><code>initialize[nn_] := ( SeedRandom[3]; n = nn; v = RandomReal[250., n]; v1 
= Range[n]; v2 = v; ) </code></pre> <p><strong>Timing comparison</strong></p> <pre><code>initialize[200]; resultNames = Hold[qCRes, jacobRes, sWResE, sWRes2, wRRes, rasherRes1, rasherRes1, rasherRes2, lalmeiRes]; formattedTimingsAndComparison[ questionCode, jacobCode, sWCodeE, sWCode2, rasherCode2, wRCode, rasherCode1, lalmeiCode ] </code></pre> <p>Gives</p> <blockquote> <p>Comparison for n = 200 <br></p> <pre><code> 1.168769 questionCode 0.000674 jacobCode 0.001093 sWCodeE 0.001200 sWCode2 0.004740 rasherCode2 0.070625 wRCode 0.069372 rasherCode1 0.176819 lalmeiCode </code></pre> <p><br> results are equal</p> </blockquote> <p>Let's look at a larger value of <code>n</code> as well</p> <pre><code>initialize[1000] resultNames = Hold[jacobRes, sWResE, sWRes2, rasherRes2]; formattedTimingsAndComparison[ jacobCode, sWCodeE, sWCode2, rasherCode2 ] </code></pre> <p>Gives</p> <blockquote> <p>Comparison for n = 1000 <br></p> <pre><code>0.020356 jacobCode 0.052107 sWCodeE 0.110584 sWCode2 0.526762 rasherCode2 </code></pre> <p><br> results are equal</p> </blockquote>
441,888
<p>I should clarify that I'm asking for intuition or informal explanations. I'm starting math and never took set theory so far, thence I'm not asking about formal set theory or an abstract hard answer. </p> <p>From Gary Chartrand page 216 Mathematical Proofs - </p> <p>$\begin{align} \text{ range of } f &amp; = \{f(x) : x \in domf\} = \{b : (a, b) \in f \} \\ &amp; = \{b ∈ B : b \text{ is an image under $f$ of some element of } A\} \end{align}$</p> <p><a href="http://en.wikipedia.org/wiki/Parity_%28mathematics%29" rel="nofollow noreferrer">Wikipedia</a> - $\begin{align}\quad \{\text{odd numbers}\} &amp; = \{n \in \mathbb{N} \; : \; \exists k \in \mathbb{N} \; : \; n = 2k+1 \} \\ &amp; = \{2n + 1 :n \in \mathbb{Z}\} \end{align}$</p> <p>But <a href="https://math.stackexchange.com/questions/266718/quotient-group-g-g-identity/266725#266725">Why $G/G = \{gG : g \in G \} \quad ? \quad$ And not $\{g \in G : gG\} ?$</a></p> <p><strong>EDIT @Hurkyl 10/5.</strong> Lots of detail please.</p> <p>Question 1. Hurkyl wrote $\{\text{odd numbers}\}$ in two ways.<br> But can you always rewrite $\color{green}{\{ \, x \in S: P(x) \,\}}$ with $x \in S$ on the right of the colon? How?<br> $ \{ \, x \in S: P(x) \,\} = \{ \, \color{red}{\text{ What has to go here}} : x \in S \, \} $? Is $ \color{red}{\text{ What has to go here}} $ unique?</p> <p>Qusetion 2. Axiom of replacement --- Why $\{ f(x) \mid x \in S \}$ ? NOT $\color{green}{\{ \; x \in S \mid f(x) \; \}}$ ?</p> <p><strong>@HTFB.</strong> Can you please simplify your answer? I don't know what are ZF, extensionality, Fraenkel's, many-one, class function, Cantor's arithmetic of infinities, and the like. </p>
Community
-1
<p>Consult the answer at <a href="https://math.stackexchange.com/a/149985/53259">https://math.stackexchange.com/a/149985/53259</a>. It may help you. In brief, according to that answer:</p> <p>$\{x \in S : P(x) \} $ can be interpreted as the "elementhood test." This is more convenient for testing whether some $x \in S$ passes or fails $P(x)$. However, this may not help with listing all the elements in this set, because $P(x)$ may not be easily solvable for $x$. </p> <p>$\{P(x) : x \in S \} $ is convenient for listing all the elements, but NOT for testing whether some $x \in S$ passes or fails $P(x)$. </p> <p>For example, $P(x)$ could be a messy polynomial that cannot be solved handily. </p>
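<p>The distinction mirrors the two comprehension styles in programming (my analogy, not from the linked answer): a filter form that tests candidate elements, and a map form that lists images directly. Both describe the same set:</p>

```python
S = range(20)

# "Elementhood test" style, like {x in S : P(x)}: easy to test membership,
# harder to enumerate without scanning all of S.
odds_filter = {x for x in S if x % 2 == 1}

# "Listing" style, like {f(x) : x in S}: easy to enumerate the elements,
# harder to test membership directly.
odds_map = {2 * n + 1 for n in range(10)}
```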
664,349
<blockquote> <p>If $G$ is a finite group where every non-identity element is generator of $G$, what is the order of $G$?</p> </blockquote> <p>I know that the order of $G$ must be prime, but I'm not sure how to go about proving this from the problem statement. </p> <p>Any hints on where to start?</p>
TheMobiusLoops
100,798
<p>Suppose the order of $G$ was not prime and let $n$ be the order of $G$. Then for all $k\in \mathbb{Z}$ which divide $n$, the subgroup generated by $g^k$ has only $n/k$ elements while $G$ has $n$ elements. </p> <p>Therefore, the subgroup generated by $g^k$ cannot equal $G$.</p> <p>Therefore, the order of $G$ must be prime.</p> <hr> <p>Should my proof go something like this?</p>
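<p>A computational sanity check of the statement (added here, not part of the post): such a $G$ is necessarily cyclic, being generated by any non-identity element, so it suffices to look at $\mathbb{Z}_n$, where $k$ generates the whole group exactly when $\gcd(k,n)=1$. The condition "every non-identity element generates" then holds precisely for prime $n$:</p>

```python
from math import gcd

def every_nonidentity_generates(n):
    # In Z_n, element k generates the whole group iff gcd(k, n) == 1.
    return all(gcd(k, n) == 1 for k in range(1, n))

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

agreement = all(every_nonidentity_generates(n) == is_prime(n)
                for n in range(2, 200))
```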
2,445,693
<p>I know that the derivative of $n^x$ is $n^x\times\ln n$ so i tried to show that with the definition of derivative:$$f'\left(x\right)=\dfrac{df}{dx}\left[n^x\right]\text{ for }n\in\mathbb{R}\\{=\lim_{h\rightarrow0}\dfrac{f\left(x+h\right)-f\left(x\right)}{h}}{=\lim_{h\rightarrow0}\frac{n^{x+h}-n^x}{h}}{=\lim_{h\rightarrow0}\frac{n^x\left(n^h-1\right)}{h}}{=n^x\lim_{h\rightarrow0}\frac{n^h-1}{h}}$$ now I can calculate the limit, lets:$$g\left(h\right)=\frac{n^h-1}{h}$$ $$g\left(0\right)=\frac{n^0-1}{0}=\frac{0}{0}$$$$\therefore g(0)=\frac{\dfrac{d}{dh}\left[n^h-1\right]}{\dfrac{d}{dh}\left[h\right]}=\frac{\dfrac{df\left(0\right)}{dh}\left[n^h\right]}{1}=\dfrac{df\left(0\right)}{dh}\left[n^h\right]$$ so in the end i get: $$\dfrac{df}{dx}\left[n^x\right]=n^x\dfrac{df\left(0\right)}{dx}\left[n^x\right]$$ so my question is how can i prove that $$\dfrac{df\left(0\right)}{dx}\left[n^x\right]=\ln n$$</p> <h1>edit:</h1> <p>i got 2 answers that show that using the fact that $\lim_{z \rightarrow 0}\dfrac{e^z-1}{z}=1$, so how can i prove that using the other definitions of e, i know it is definition but how can i show that this e is equal to the e of $\sum_{n=0}^\infty \frac{1}{n!}$?</p>
William Kurdahl
217,928
<p>It depends on what you feel you can assume about the function ln(x) and the number e.</p> <p>See the link below for an approach similar to yours: <a href="http://tutorial.math.lamar.edu/Classes/CalcI/DiffExpLogFcns.aspx" rel="nofollow noreferrer">http://tutorial.math.lamar.edu/Classes/CalcI/DiffExpLogFcns.aspx</a></p>
1,455,969
<p><a href="https://i.stack.imgur.com/5O0d8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5O0d8.png" alt="enter image description here"></a></p> <p>Hello! I'm having problems trying to figure out this. Here is what I did: I used implication relation and Demorgan's law to simplify this proposition. I then used associative and commutative laws because the operators were are disjuntions and conjunctions. </p> <p>The picture below is basically proof of my attempt at trying to solve this. There is no need to follow along and attempt to correct my work. I fear that that may be too time consuming considering how rough my work is. Hints/answer is much appreciated. </p> <p>EDIT: Once again, no need to correct my wrong. It's really rough. I'll keep it much neater next time. </p> <p><a href="https://i.stack.imgur.com/F5a90.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F5a90.jpg" alt="enter image description here"></a></p>
Daniel W. Farlow
191,378
<p>The key here is largely going to be patience, but one point is worth remembering: always try to turn connectives into only $\lor, \land,$ and $\neg$. Then everything often "falls out" more or less. Since your compound proposition is quite large, start by writing out and denoting it $\text{LHS}$ like so:</p> <blockquote> <p>$$\text{LHS}\equiv\Bigl\{\neg(r\to p)\lor\bigl[(\neg q\to\neg p)\land(r\to q)\bigr]\Bigr\}\to(\neg p\lor q)\tag{1}$$</p> </blockquote> <p>Now, the goal is to show that $(1)$ is a tautology. To that end, letting $\textbf{T}$ denote a tautology, write $\text{RHS}\equiv\textbf{T}$. That is, you want to show that $\text{LHS}\equiv\text{RHS}$. That being said, see if you can follow the argument outlined below: \begin{align} \text{LHS} &amp;\equiv\Bigl\{\neg(r\to p)\lor\bigl[(\neg q\to\neg p)\land(r\to q)\bigr]\Bigr\}\to(\neg p\lor q)\tag{by definition}\\[0.5em] &amp;\equiv \Bigl\{(r\to p)\land\bigl[\neg(\neg q\to\neg p)\lor\neg(r\to q)\bigr]\Bigr\}\lor(\neg p\lor q)\tag{$\substack{\text{DeMorgan &amp;}\\\eta\to\phi\equiv\neg\eta\lor\phi}$}\\[0.5em] &amp;\equiv \Bigl\{(\neg r\lor p)\land\bigl[(\neg q\land p)\lor(r\land\neg q)\bigr]\Bigr\}\lor(\neg p\lor q)\tag{$\substack{\text{DeMorgan &amp;}\\\eta\to\phi\equiv\neg\eta\lor\phi}$}\\[0.5em] &amp;\equiv \bigl[(\neg r\lor p)\lor(\neg p\lor q)\bigr]\land\Bigl\{(\neg q\land p)\lor(r\land\neg q)\lor(\neg p\lor q)\Bigr\}\tag{distributivity}\\[0.5em] &amp;\equiv\bigl[(\neg p\lor p)\lor(\neg r\lor q)\bigr]\land\Bigl\{(\neg q\land p)\lor(r\land\neg q)\lor(\neg p\lor q)\Bigr\}\tag{associativity}\\[0.5em] &amp;\equiv \textbf{T}\land\Bigl\{(\neg q\land p)\lor(r\land\neg q)\lor(\neg p\lor q)\Bigr\}\tag{neg.
&amp; dom.}\\[0.5em] &amp;\equiv (\neg q\land p)\lor(r\land\neg q)\lor(\neg p\lor q)\tag{identity}\\[0.5em] &amp;\equiv [\neg q\land(p\lor r)]\lor(\neg p\lor q)\tag{distributivity}\\[0.5em] &amp;\equiv [\neg q\lor(\neg p\lor q)]\land[(p\lor r)\lor(\neg p\lor q)]\tag{distributivity}\\[0.5em] &amp;\equiv [(\neg q\lor q)\lor\neg p]\land[(p\lor\neg p)\lor(r\lor q)]\tag{associativity}\\[0.5em] &amp;\equiv [\textbf{T}\lor\neg p]\land[\textbf{T}\lor(r\lor q)]\tag{negation}\\[0.5em] &amp;\equiv \textbf{T}\land\textbf{T}\tag{domination}\\[0.5em] &amp;\equiv \textbf{T}\tag{identity}\\[0.5em] &amp;\equiv \text{RHS} \end{align}</p>
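<p>The conclusion can also be cross-checked by brute force over all 8 truth assignments of $(p,q,r)$ (an added check, not part of the original answer):</p>

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def lhs(p, q, r):
    # { ~(r -> p) or [ (~q -> ~p) and (r -> q) ] } -> (~p or q)
    antecedent = (not implies(r, p)) or (implies(not q, not p) and implies(r, q))
    return implies(antecedent, (not p) or q)

is_tautology = all(lhs(p, q, r) for p, q, r in product([False, True], repeat=3))
```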
418,748
<p>I tried to calculate, but couldn't get out of this: $$\lim_{x\to1}\frac{x^2+5}{x^2 (\sqrt{x^2 +3}+2)-\sqrt{x^2 +3}}$$</p> <p>then multiply by the conjugate.</p> <p>$$\lim_{x\to1}\frac{\sqrt{x^2 +3}-2}{x^2 -1}$$ </p> <p>Thanks!</p>
Community
-1
<p>Let $t=x-1$ so $x=t+1$ and since $(1+y)^\frac{1}{2}\sim_0 1+\frac{y}{2}$ and $y^2=_0o(y)$ then we find $$\lim_{x\to 1}\frac{\sqrt{x^2+3}-2}{x^2-1}=\lim_{t\to 0}\frac{\sqrt{t^2+2t+4}-2}{t^2+2t}=\lim_{t\to 0}2\frac{\sqrt{\frac{t^2+2t}{4}+1}-1}{t^2+2t}=\lim_{t\to 0}2\frac{t/4}{2t}=\frac{1}{4}$$</p>
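<p>Numerically (an added check, not part of the original answer), the ratio indeed approaches $\frac14$ from both sides of $x=1$:</p>

```python
import math

def g(x):
    return (math.sqrt(x**2 + 3) - 2) / (x**2 - 1)

# Approach x = 1 from above and below.
vals = [g(1 + eps) for eps in (1e-3, 1e-5, -1e-3, -1e-5)]
max_dev = max(abs(v - 0.25) for v in vals)
```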
1,689,923
<p>I have a sequence $a_{n} = \binom{2n}{n}$ and I need to check whether this sequence converges to a limit without finding the limit itself. Now I tried to calculate $a_{n+1}$ but it doesn't get me anywhere. I think I can show somehow that $a_{n}$ is always increasing and that it has no upper bound, but I'm not sure if that's the right way</p>
Stefan Mesken
217,623
<p><strong>Hint</strong> For $n \ge 1$ we always have $\binom{2n}{n} &gt; n$.</p>
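<p>The hint is easy to check directly (an added illustration): the central binomial coefficients exceed $n$ and increase, so the sequence is unbounded and cannot converge.</p>

```python
from math import comb

# binom(2n, n) > n for a range of n, and the sequence is increasing.
exceeds_n = all(comb(2 * n, n) > n for n in range(1, 200))
increasing = all(comb(2 * (n + 1), n + 1) > comb(2 * n, n) for n in range(1, 50))
```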
1,843,274
<p>Good evening to everyone. So I have this inequality: $$\frac{\left(1-x\right)}{x^2+x} &lt;0 $$ It becomes $$ \frac{\left(1-x\right)}{x^2+x} &lt;0 \rightarrow \left(1-x\right)\left(x^2+x\right)&lt;0 \rightarrow x^3-x&gt;0 \rightarrow x\left(x^2-1\right)&gt;0 $$ Therefore from the first $ x&gt;0 $, from the second $ x_1 = 1 $ and $x_2=-1$ therefore $ x $ belongs to $(-\infty,-1)$ and $(1,\infty)$ therefore $x$ belongs to $(1,\infty)$. But on the answer sheet it shows that it's defined on $(-1,0)$ and $(1,\infty)$. Where I am wrong? Thanks for any response.</p>
DanielWainfleet
254,665
<p>Your attempt is fine up to $x(x^2-1)&gt;0.$ The next sentence, however, is unintelligible ("from the first $x&gt;0$". First what? And what are $x_1, x_2$?). </p> <p>Observe that $x(x^2-1)&gt;0\iff x^3&gt;x $ $\iff [(x&gt;0\land x^2&gt;1)\lor (x&lt;0\land x^2&lt;1)\lor (x=0\land PigsCanFly)]$ $\iff (x&gt;1\lor -1&lt;x&lt;0).$ </p> <p>And if you wrote any one-way implications, you now have to check whether $(x&gt;1\lor -1&lt;x&lt;0)\implies (1-x)/(x^2+x)&lt;0.$ You can avoid the need for this check by writing $$(1-x)/(x^2+x)&lt;0\iff (1-x)(x^2+x)&lt;0 \iff x^3&gt;x\iff x\in (-1,0)\cup (1,\infty)$$ with the additional steps, between "$x^3&gt;x$" and the last formula, each preceded and followed by a valid "$\iff$". </p>
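<p>A quick numeric spot-check of the solution set $(-1,0)\cup(1,\infty)$ (the sample points are my own choice):</p>

```python
def g(x):
    # left-hand side of the inequality: (1 - x) / (x^2 + x)
    return (1 - x) / (x * x + x)

inside = [-0.9, -0.5, -0.1, 1.5, 10.0]   # points of (-1, 0) and (1, oo)
outside = [-2.0, 0.5, 0.9]               # defined points where the quotient is >= 0
print(all(g(x) < 0 for x in inside))     # True
print(any(g(x) < 0 for x in outside))    # False
```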
4,489,675
<p>When we say that in a small time interval <span class="math-container">$dt$</span> the velocity changes by <span class="math-container">$d\vec v$</span>, so that the acceleration is <span class="math-container">$\vec a = d\vec v/dt$</span>, are we not assuming that <span class="math-container">$\vec a$</span> is constant over that small interval <span class="math-container">$dt$</span>? Otherwise, accounting for a change in acceleration <span class="math-container">$d\vec a$</span>, the expression should have been <span class="math-container">$\vec a = \frac{d\vec v}{dt} - \frac{d\vec a}{2}$</span> (again assuming the rate of change of acceleration is constant). By the same argument, I could say that <span class="math-container">$\vec v$</span> is also constant in that interval, and so <span class="math-container">$\vec a = \vec 0$</span>.</p> <p>Can someone point out where exactly I have gone wrong? This was just an example; my question is general.</p>
Community
-1
<p>In your suggested expression, <span class="math-container">$d\vec v/dt$</span> is a ratio of two infinitesimals, so it can be finite and non-zero. However, <span class="math-container">$d\vec a/2$</span> is a single infinitesimal, so you can treat it as zero when compared to the first term.</p> <p>(If there were infinite acceleration in that moment, it could be an exception, but we normally assume acceleration is finite.)</p>
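<p>A concrete illustration of why the higher-order term drops out (my own example, with constant jerk): take $a(t)=t$, so $v(t)=t^2/2$. The difference quotient is exactly $t + h/2$, and the $h/2$ piece, the analogue of the $d\vec a/2$ term, vanishes as $h \to 0$:</p>

```python
def v(t):
    # velocity under constant jerk: a(t) = t, hence v(t) = t^2 / 2
    return t * t / 2

t = 3.0
# the difference quotient equals t + h/2; the h/2 "correction" dies off with h
for h in (1e-1, 1e-3, 1e-6):
    print((v(t + h) - v(t)) / h)  # approaches a(3.0) = 3.0
```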
299,452
<p>According to the wiki article <a href="https://en.wikipedia.org/wiki/Dedekind_eta_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Dedekind_eta_function</a>, the Dedekind eta function is defined in many equivalent forms, but none of them is an explicit description (say, in algorithmic form) of how to compute it. Where can I find one? Thanks!</p>
Licheng Wang
103,866
<p>My Maple code for the Gatteschi–Sokal algorithm computing $R(t,x)=\prod_{n=1}^{\infty}(1-tx^n)$: </p> <pre><code>GS := proc(t, x, prec)
  local R0, a0, b0, Rn, an, bn, d, c, i, N, r, Rd;
  N := 100;
  d := 1/2; if d = t*x then d := (1/2)*d end if;
  r := evalf[prec](1 + d/(1 - x));
  a0 := 1; b0 := evalf[prec](d/(d - t*x));
  R0 := evalf[prec](r*a0 + (1 - r)*b0);
  i := 0;
  while i &lt; N do
    c := evalf[prec](a0*(d*a0 + (1.0 - d)*b0));
    an := evalf[prec](c/b0);
    bn := evalf[prec](c/(x*a0 + (1.0 - x)*b0));
    Rn := evalf[prec](r*an + (1.0 - r)*bn);
    Rd := evalf[prec](abs(Rn - R0));
    if Rd &lt; 10^(-prec) then i := N
    else a0 := an; b0 := bn; R0 := Rn; i := i + 1 end if;
  end do;
  return Rn;
end proc;

eta := (t, prec) -> evalf[prec](GS(1, exp(2*Pi*I*t), prec));

Dedekind_eta := (t, prec) -> evalf[prec](exp((1/12)*Pi*I*t)*eta(t, prec));
</code></pre> <hr> <p>Test:</p> <pre><code>t := 0.3*I:
eta(t, 40);
Dedekind_eta(t, 40);
</code></pre> <p>0.8251926470787677741036466781518992636742</p> <p>0.7628619270903183863013294748250092216042</p> <p>Almost the same as the PARI/GP output: </p> <pre><code>eta(0.3*I, 0) = 0.82519264707876777410364667815189926367
eta(0.3*I, 1) = 0.76286192709031838630132947482500922160
</code></pre>
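<p>For readers without Maple, here is a direct Python cross-check of those two values from the defining $q$-product (a naive truncation, convergent since $|q|&lt;1$; the function names are mine):</p>

```python
import cmath

def euler_product(tau, terms=200):
    # prod_{n>=1} (1 - q^n) with q = exp(2*pi*i*tau); Im(tau) > 0 gives |q| < 1
    q = cmath.exp(2j * cmath.pi * tau)
    result = 1
    for n in range(1, terms + 1):
        result *= 1 - q ** n
    return result

def dedekind_eta(tau, terms=200):
    # eta(tau) = exp(pi*i*tau/12) * prod_{n>=1} (1 - q^n)
    return cmath.exp(1j * cmath.pi * tau / 12) * euler_product(tau, terms)

print(abs(euler_product(0.3j)))  # ~0.825192647078...
print(abs(dedekind_eta(0.3j)))   # ~0.762861927090...
```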
1,199,912
<p>What is the optimal (i.e., smallest) constant $\alpha$ such that, given 19 points on a solid, regular hexagon with side 1, there will always be 2 points with distance at most $\alpha$?</p> <p>This is a reformulation of an <a href="https://math.stackexchange.com/questions/1196787/pigeonhole-problem-about-distance-between-distinct-points-on-a-hexagon#comment2436030_1196787">interesting question</a> that was mercilessly downvoted.</p> <p>I can show the bounds $1/2 \leq \alpha\leq 1/\sqrt{3}$. </p> <p>To show the upper bound, divide the hexagon into 6 regular triangles with sides 1 and note that one of them must contain 4 points.</p> <p>To show the lower bound, divide the hexagon into 24 regular triangles with sides 1/2 and draw a point at each of the 19 corners. </p> <p><strong>Addition.</strong> Here's an <em>idea:</em> Let $\Omega$ be a subset of the plane obtained by gluing together regular, side-1 triangles side-to-side. Let $n$ be the total number of corners. Then the <em>only</em> way of placing $n$ or $n-1$ points on $\Omega$ such that no 2 points are closer than 1 is by placing the points at the corners. Proof: Induction on the number of triangles.</p>
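<p>The lower-bound construction above (the 19 corners of the 24 side-$\frac12$ triangles, at mutual distance exactly $\frac12$) can be verified mechanically; a quick Python sketch (the hexagon containment test and lattice enumeration are my own):</p>

```python
import itertools
import math

SQ3 = math.sqrt(3)

def in_hexagon(x, y, s=1.0, eps=1e-9):
    # regular hexagon of side s centered at the origin, vertices at angles 0, 60, ...
    return (abs(y) <= s * SQ3 / 2 + eps
            and abs(SQ3 * x + y) <= SQ3 * s + eps
            and abs(SQ3 * x - y) <= SQ3 * s + eps)

# corners of the 24 side-1/2 triangles = spacing-1/2 triangular lattice in the hexagon
pts = [(i * 0.5 + (j % 2) * 0.25, j * SQ3 / 4)
       for j in range(-2, 3) for i in range(-4, 5)
       if in_hexagon(i * 0.5 + (j % 2) * 0.25, j * SQ3 / 4)]

min_dist = min(math.dist(p, q) for p, q in itertools.combinations(pts, 2))
print(len(pts), min_dist)  # 19 points, minimum pairwise distance 0.5
```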
Anders Kaseorg
38,671
<p>(From my <a href="https://www.quora.com/Nineteen-darts-are-thrown-onto-a-dartboard-in-the-shape-of-a-regular-hexagon-with-side-length-one-foot-How-do-I-show-that-there-must-exist-two-darts-that-are-within-frac-1-sqrt-3-feet-of-each-other/answer/Anders-Kaseorg" rel="nofollow noreferrer">answer</a> to the same question on Quora, 2014-03-04.)</p> <p>Draw non-overlapping circles of diameter <span class="math-container">$α$</span> centered at the points, with total area <span class="math-container">$19π\left(\frac α2\right)^2$</span>. These circles all fit into the fundamental unit of this plane tiling with copies of the hexagon spaced <span class="math-container">$α$</span> apart:</p> <p><a href="https://i.stack.imgur.com/eXzca.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eXzca.png" alt="hexagons" /></a></p> <p>which is a hexagon of side length <span class="math-container">$1 + \frac{α}{\sqrt 3}$</span> and area <span class="math-container">$k = \frac{3 \sqrt 3}{2}\left(1 + \frac{α}{\sqrt 3}\right)^2$</span>. By Thue’s circle packing theorem, the total area of the circles must be at most <span class="math-container">$\frac{π}{2\sqrt 3}k$</span>. 
This argument yields <span class="math-container">$α ≤ \frac{\sqrt 3}{\sqrt{19} - 1} \approx 0.51566$</span>, and also suffices to show that at most <span class="math-container">$19$</span> points can be at mutual distance <span class="math-container">$\frac12$</span>, since <span class="math-container">$\frac{\sqrt 3}{\sqrt{20} - 1} \approx 0.49884 &lt; \frac12$</span>.</p> <p>A stronger theorem from Folkman and Graham, “<a href="http://www.math.ucsd.edu/%7Eronspubs/69_08_packing.pdf" rel="nofollow noreferrer">A packing inequality for compact subsets of the plane</a>” (1969) gives the optimal bound:</p> <blockquote> <p>If <span class="math-container">$A(X)$</span> and <span class="math-container">$P(X)$</span> denote the area and perimeter, respectively, of a compact convex subset <span class="math-container">$X$</span> of the plane, then <span class="math-container">$$\rho(X) \le \frac{2}{\sqrt 3}A(X) + \frac12P(X) + 1.$$</span></p> </blockquote> <p>Here <span class="math-container">$\rho(X)$</span> is the size of the largest set of points in <span class="math-container">$X$</span> at mutual distance at least <span class="math-container">$1$</span>. For a hexagon of side length <span class="math-container">$s$</span> containing <span class="math-container">$19$</span> points at distance at least <span class="math-container">$1$</span>, we must have</p> <p><span class="math-container">$$19 \le \frac{2}{\sqrt 3} \cdot \frac{3 \sqrt{3}}{2}s^2 + \frac12 \cdot 6s + 1 = 3s^2 + 3s + 1,$$</span></p> <p>whence <span class="math-container">$s \ge 2$</span>. Scaling down by a factor of <span class="math-container">$s$</span> shows that <span class="math-container">$α ≤ \frac12$</span>.</p>
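<p>The arithmetic behind the two circle-packing bounds is a one-liner to reproduce (just a numeric restatement of the simplification above, nothing new):</p>

```python
import math

# 19 * pi * (a/2)^2 <= (pi / (2*sqrt(3))) * (3*sqrt(3)/2) * (1 + a/sqrt(3))^2
# simplifies to sqrt(19) * a <= sqrt(3) + a, i.e. a <= sqrt(3) / (sqrt(19) - 1)
def alpha_bound(n):
    return math.sqrt(3) / (math.sqrt(n) - 1)

print(round(alpha_bound(19), 5))  # 0.51566
print(round(alpha_bound(20), 5))  # 0.49884, below 1/2, so 20 points cannot all be 1/2 apart
```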