34,487
<p>A few years ago Lance Fortnow listed his favorite theorems in complexity theory: <a href="http://blog.computationalcomplexity.org/2005/12/favorite-theorems-first-decade-recap.html" rel="nofollow">(1965-1974)</a> <a href="http://blog.computationalcomplexity.org/2006/12/favorite-theorems-second-decade-recap.html" rel="nofollow">(1975-1984)</a> <a href="http://eccc.hpi-web.de/eccc-reports/1994/TR94-021/index.html" rel="nofollow">(1985-1994)</a> <a href="http://blog.computationalcomplexity.org/2004/12/favorite-theorems-recap.html" rel="nofollow">(1995-2004)</a> But he restricted himself (check the third one), and his last post is now 6 years old. An updated and more comprehensive list would be helpful.</p> <blockquote> <p>What are the most important results (and papers) in complexity theory that everyone should know? What are your favorites?</p> </blockquote>
Ryan Williams
2,618
<p>I think Lance's choices from the past are pretty comprehensive, although I might add a couple more from the lower bounds department which for some reason are not well-known:</p> <blockquote> <p>John E. Hopcroft, Wolfgang J. Paul, Leslie G. Valiant: On Time Versus Space. J. ACM 24(2): 332-337 (1977)</p> <p>Wolfgang J. Paul, Nicholas Pippenger, Endre Szemerédi, William T. Trotter: On Determinism versus Non-Determinism and Related Problems (Preliminary Version) FOCS 1983: 429-438</p> </blockquote> <p>The first paper shows that <span class="math-container">$TIME[t] \subseteq SPACE[t/\log t]$</span> (so, <span class="math-container">$SPACE[t]$</span> is not contained in <span class="math-container">$TIME[o(t \log t)]$</span>). This result has since been generalized (from Turing machines) to all the &quot;modern&quot; models of computation. (For references, look at citations on Google scholar.)</p> <p>The second paper shows that for multitape Turing machines, <span class="math-container">$NTIME[n] \neq TIME[n]$</span>. This is really the only generic separation of nondeterministic and deterministic time that we know. It is not known whether this result extends to more modern models of computation. Perhaps one reason why these results are not better known is that many seem to believe that their approaches are a dead end, more or less. (There's some mathematical evidence for that: the techniques do break down if you try to push them any further, but it's always possible these techniques could be combined with something new.)</p> <p>As for the last 6 years... <strike>I'll have to think about my choices for the &quot;best papers&quot; since then. Expect an update to this answer later.</strike> I think the following work over the last six years should be among those that everyone should know about. That doesn't mean that I think they're &quot;best&quot;, it just means I am trying to answer the original question. 
It's a very biased list.</p> <ul> <li><p>Irit Dinur's combinatorial proof of the PCP theorem</p> </li> <li><p>Omer Reingold's logspace algorithm for st-connectivity</p> </li> <li><p>Ketan Mulmuley's geometric complexity theory program</p> </li> <li><p>Subhash Khot's Unique Games Conjecture and what it entails (this was initiated earlier than 6 years ago but it has become much more important in the last 6 years)</p> </li> <li><p>Russell Impagliazzo and Valentine Kabanets' &quot;Derandomizing polynomial identity testing means proving circuit lower bounds&quot;</p> </li> <li><p>Lance Fortnow et al.'s time-space lower bounds for SAT <em>(this is excluding all work that I have personally done on this, you can decide for yourself if you should know about that)</em></p> </li> </ul> <p>I left out a bunch of very important things because the list is 6 items. Sorry.</p>
704,921
<p>This is the question: $$ \frac{(2^{3n+4})(8^{2n})(4^{n+1})}{(2^{n+5})(4^{8+n})} = 2 $$ I've tried several times but I can't get the answer by working it out. I know $n = 2$; can someone please give me some guidance? Usually I turn all the bases to 2 and then work with the powers, but I probably make the same mistake every time; unfortunately I don't know what that is. Thank you in advance.</p> <p><em>EDIT</em></p> <p>This is what I simplified it to in the beginning of every attempt.</p> <p>$$ \frac{(2^{3n} *16)(2^{6n})(2^{2n}*4)}{(2^{n}*32)(2^{n}*2^{16})} = 2 $$</p> <p>Therefore</p> <p>$$ \frac{64(2^{3n+6n+2n})}{(2^{16}*32)(2^{2n})} = 2 $$ <br> I simplified further: $$ \frac{2^{11n}}{32768(2^{2n})} =2 $$ <br> $$ 2^{11n} = (2^{2n+1})*32768 \\ $$ $$ \frac{2^{11n}}{2^{2n+1}} = 32768 $$</p> <p>$$ \frac{2^{11n}}{2*2^{2n}} = 32768 $$</p> <p>And this is the furthest I get, what do I do now?</p>
William Chang
133,204
<p>$$ \frac{2^{11n}}{2*2^{2n}} = 32768 $$</p> <p><em>And this is the furthest I get, what do I do now?</em></p> <p>Multiply both sides by 2. Then you get $$ \frac{2^{11n}}{2^{2n}} = 65536. $$ $$\implies 2^{11n-2n}=2^{9n}=65536.$$ $$\therefore n=\frac{\log_2(65536)}{9}=\frac{16}{9}.$$</p> <p>EDIT: This is wrong, and there should be a $3n$ instead of a $2n$ in the denominator to start with, yielding $n=2$, because $4^{8+n}=2^{16+2n}$, not $2^{16+n}$ as in the question.</p>
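As a cross-check of the corrected value, the original equation can be verified with exact integer arithmetic. A quick Python sketch (the helper name `lhs` is mine, not from the thread):

```python
from fractions import Fraction

def lhs(n):
    # numerator exponent: (3n+4) + 6n + (2n+2) = 11n + 6
    # denominator exponent: (n+5) + (16+2n) = 3n + 21
    num = 2**(3*n + 4) * 8**(2*n) * 4**(n + 1)
    den = 2**(n + 5) * 4**(8 + n)
    return Fraction(num, den)

assert lhs(2) == 2   # 2^(11*2+6) / 2^(3*2+21) = 2^28 / 2^27 = 2
```

Equating exponents directly, $11n+6 = (3n+21)+1$ gives $8n = 16$, so $n = 2$.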
1,114,502
<p>I attempted the following solution to the birthday "paradox" problem. It is not correct, but I'd like to know where I went wrong.</p> <p>Where $P(N)$ is the probability of any two people in a group of $N$ people having the same birthday, I consider the first few values.</p> <p>For two people, the probability that they share a birthday is simply $1/365$, not counting leap years. For three people, it is the probability of every combination of two of them "ored" together, which is simply the sum of the probabilities of every combination of two people. Thus,</p> <p>$$ P(2)=P(AB)=\frac{1}{365} $$ $$ P(3)=P(AB)+P(AC)+P(BC)=3\times P(2) $$ $$ P(4)=P(AB)+P(AC)+P(AD)+P(BC)+P(BD)+P(CD)=6\times P(2) $$</p> <p>Where $P(XY)$ is used to denote the probability of persons $X$ and $Y$ sharing a birthday. You can see pretty clearly that the coefficients are binomial.</p> <p>$$ P(N)=\binom{N}{2}\times P(2)=\frac{N!}{2!(N-2)!}\cdot\frac{1}{365}=\frac{N(N-1)}{730} $$</p> <p>Now according to the pigeonhole principle, we should have $P(366)=1$, which this expression clearly violates (instead it gives $P(366)=183$). So clearly I'm doing <em>something</em> wrong.</p>
Thomas Andrews
7,933
<p>You are over-counting the cases where three or more people share a birthday.</p> <p>You are also over-counting the case where $A$ and $B$ have the same birthday and $C$ and $D$ have the same birthday.</p> <p>If $A, B, C$ have the same birthday, then you've counted that case $3$ times, when you only want to count it once. It gets even messier for $4$ or more with the same birthday, and as $N$ gets larger, that is more and more likely.</p> <p>It is <em>much</em> easier to calculate the probability that nobody shares a birthday, then subtract that value from $1$.</p> <p>What you've computed is the expected number of pairs that have the same birthday.</p> <p>(I just ran 10,000 simulations generating $366$ numbers from $1$ to $365$ and counting equal pairs, and the average number of pairs was $183.12$, pretty close to your $183$.)</p>
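The simulation mentioned in the last paragraph is easy to reproduce. A rough Python sketch (function names are my own), together with the exact complement computation the answer recommends:

```python
import random
from math import comb

def expected_pairs(n, days=365):
    # what the OP's formula actually computes: E[#equal pairs] = C(n, 2)/days
    return comb(n, 2) / days

def prob_all_distinct(n, days=365):
    # the correct route: P(no shared birthday); its complement is P(some pair)
    p = 1.0
    for k in range(n):
        p *= (days - k) / days
    return p

def average_pairs(n, trials, days=365, seed=0):
    # Monte Carlo: draw n birthdays, count equal pairs, average over trials
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        counts = {}
        for _ in range(n):
            b = rng.randrange(days)
            counts[b] = counts.get(b, 0) + 1
        total += sum(comb(c, 2) for c in counts.values())
    return total / trials

print(expected_pairs(366))        # 183.0, matching the answer
print(1 - prob_all_distinct(23))  # ≈ 0.507, the classic birthday result
```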
312,847
<p>Let <span class="math-container">$k$</span> be a global field, and let <span class="math-container">$G = \mathbf G(\mathbb A_k)$</span> for a connected, reductive group <span class="math-container">$\mathbf G$</span> over <span class="math-container">$k$</span>. In <a href="https://services.math.duke.edu/~hahn/Chapter3.pdf" rel="noreferrer">these</a> notes by Jayce Getz and Heekyoung Hahn, a unitary representation of <span class="math-container">$G$</span> is a Hilbert space <span class="math-container">$V$</span> together with a continuous homomorphism <span class="math-container">$\pi: G \rightarrow \operatorname{GL}(V)$</span> whose image is contained in the group <span class="math-container">$U(V)$</span> of unitary operators on <span class="math-container">$V$</span>. </p> <p>What is the topology on <span class="math-container">$\operatorname{GL}(V)$</span> (which I assume is the group of bounded linear operators on <span class="math-container">$V$</span>) being considered here? Is it the induced topology coming from the norm topology?</p> <p>I am trying to compare this definition with one given by Gerald Folland in <em>A Course in Abstract Harmonic Analysis</em>, which requires that for each <span class="math-container">$v \in V$</span> the map <span class="math-container">$g \mapsto \pi(g)v$</span> be continuous <span class="math-container">$G \rightarrow V$</span>, where <span class="math-container">$V$</span> is taken in the norm topology. Are these two definitions of unitary representations different?</p> <p>This matters because one later defines the Fell topology on the unitary dual <span class="math-container">$\hat{G}$</span> of <span class="math-container">$G$</span>, and I want to know which representations are actually in <span class="math-container">$\hat{G}$</span>.</p>
Uri Bader
89,334
<p>Considering a topological group <span class="math-container">$G$</span>, a Hilbert space <span class="math-container">$V$</span> and a corresponding unitary representation, that is, a homomorphism <span class="math-container">$\pi:G\to U(V)$</span>, the following are equivalent:</p> <ol> <li><p><span class="math-container">$\pi$</span> is continuous when <span class="math-container">$U(V)$</span> is taken with the weak operator topology.</p></li> <li><p><span class="math-container">$\pi$</span> is continuous when <span class="math-container">$U(V)$</span> is taken with the strong operator topology.</p></li> <li><p>For every <span class="math-container">$v\in V$</span>, the orbit map <span class="math-container">$G\to V$</span> given by <span class="math-container">$g\mapsto gv$</span> is continuous.</p></li> <li><p>The action map <span class="math-container">$G\times V\to V$</span> given by <span class="math-container">$(g,v)\mapsto \pi(g)(v)$</span> is continuous.</p></li> </ol> <p>In fact, the implications <span class="math-container">$4 \Rightarrow 3\Rightarrow 2\Rightarrow 1$</span> are trivially true, while <span class="math-container">$1\Rightarrow 4$</span> follows from the uniform convexity of <span class="math-container">$V$</span> (thus an analogue is valid for any isometric representation on a uniformly convex space).</p> <p>The standard terminology is to refer to <span class="math-container">$\pi$</span> as a <em>continuous unitary representation</em> if it satisfies the properties above.</p> <hr> <p>It should be mentioned that for non-discrete locally compact groups (e.g. for your <span class="math-container">$\mathbf{G}(\mathbb{A}_k)$</span>) unitary representations are almost never continuous when <span class="math-container">$U(V)$</span> is endowed with the norm topology, so a careful writer is unlikely to make such an assumption without mentioning it explicitly.</p>
110,078
<p>Let $0&lt; \alpha&lt; n$, $1 &lt; p &lt; q &lt; \infty$ and $\frac{1}{q}=\frac{1}{p}-\frac{\alpha}{n}$. Then: $$ \left\| \int_{\mathbb{R}^n} \frac{f(y)\,dy}{|x-y|^{n-\alpha}} \right\|_{L^q(\mathbb{R}^n)} \leq C\left\| f\right\|_{L^p(\mathbb{R}^n)}. $$</p>
Bazin
21,907
<p>The function $\vert x\vert^{\alpha-n}$ is radial and homogeneous of degree $\alpha-n$, so its Fourier transform is radial and homogeneous of degree $-(\alpha-n)-n=-\alpha$. Both are locally integrable (since $\alpha &gt;0$ and $-\alpha&gt;-n$), hence both are distributions which are easily seen to be temperate, so the Fourier transforms make sense. Your convolution operator is therefore the Fourier multiplier $\vert D_x\vert^{-\alpha}$. The question at hand is thus (with homogeneous spaces) $$ \Vert u\Vert_{W^{-\alpha,q}}\lesssim \Vert u\Vert_{W^{0,p}},\quad \text{i.e. }W^{0,p}\subset W^{-\alpha,q}, $$ which is a particular case of the Sobolev embedding since $$0&gt;-\alpha,\quad p &lt; q,\quad \frac{1}{p}-\frac{1}{q}=\frac{\alpha}{n}. $$</p>
41,155
<p>Lauren has 20 coins in her piggy bank, all dimes and quarters. The total amount of money is $3.05. How many of each coin does she have?</p>
ubpdqn
1,997
<p>If you wanted to color code the relationships (using Murta's g):</p> <p>Defining edge styles by weight:</p> <pre><code>es = Join @@ MapThread[ Thread[#1 -&gt; #2] &amp;, {(#[[All, 1]] &amp; /@ SortBy[GatherBy[{#, PropertyValue[{g, #}, EdgeWeight]} &amp; /@ EdgeList[g], #[[2]] &amp;], #[[2]] &amp;]), {Directive[Red, Thick], Directive[Green, Thick], Directive[Blue, Thick]}}]; </code></pre> <p>Here 1->Red,2->Green, 3->Blue. Visualizing:</p> <pre><code>WeightedAdjacencyGraph[data[[2 ;;, 2 ;;]] /. (0 -&gt; \[Infinity]), VertexLabels -&gt; MapIndexed[#2[[1]] -&gt; #1 &amp;, data[[1, 2 ;;]]], EdgeStyle -&gt; es, ImagePadding -&gt; 20] </code></pre> <p><img src="https://i.stack.imgur.com/CSeQ8.png" alt="enter image description here"></p>
1,783,200
<p>Prove or disprove the following statement:</p> <p><strong>Statement.</strong> <em>Does continuity in each variable separately, with the other variable fixed, imply joint continuity?</em> More precisely, prove or disprove the following:</p> <p>Let $\displaystyle f:\left[ a,b \right]\times \left[ c,d \right]\to \mathbb{R}$ for which:</p> <ul> <li>For every $\displaystyle {{x}_{0}}\in \left[ a,b \right]$, $\displaystyle f\left( {{x}_{0}},y \right)$ is continuous on $\displaystyle \left[ c,d \right]$ with respect to the variable $ \displaystyle y$.</li> <li>For every $ \displaystyle {{y}_{0}}\in \left[ c,d \right]$, $ \displaystyle f\left( x,{{y}_{0}} \right)$ is continuous on $ \displaystyle \left[ a,b \right]$ with respect to the variable $\displaystyle x$.</li> </ul> <p>Then $\displaystyle f\left( x,y \right)$ is continuous on $ \displaystyle \left[ a,b \right]\times \left[ c,d \right]$?</p> <p><a href="https://hongnguyenquanba.wordpress.com/2016/05/12/problem-6/" rel="nofollow">https://hongnguyenquanba.wordpress.com/2016/05/12/problem-6/</a></p>
MPW
113,214
<p>How about $$f(x,y)=\begin{cases}\frac{xy}{x^2+y^2},&amp;(x,y)\neq(0,0)\\ 0,&amp;(x,y)=(0,0)\\ \end{cases}$$</p>
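A quick numerical check of why this example works (a sketch): along either axis the function is identically $0$, so every one-variable slice through the origin is continuous, yet along the diagonal $y=x$ it is constantly $\tfrac12$, so $f$ is not continuous at $(0,0)$.

```python
def f(x, y):
    # MPW's example: separately continuous but not jointly continuous at (0, 0)
    return x * y / (x**2 + y**2) if (x, y) != (0.0, 0.0) else 0.0

for t in (1.0, 0.1, 0.001):
    assert f(t, 0.0) == 0.0 and f(0.0, t) == 0.0  # axis slices: constant 0
    assert f(t, t) == 0.5                          # diagonal: constant 1/2
```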
69,476
<p>Hello everybody !</p> <p>I was reading a book on geometry which taught me that one could compute the volume of a simplex through the determinant of a matrix, and I thought (I'm becoming a worse computer scientist each day) that, if the result is exact, this may not be the computationally fastest way possible to do it.</p> <p>Hence, the following problem : if you are given a polynomial in one (or many) variables $\alpha_1 x^1 + \dots + \alpha_n x^n$, what is the cheapest way (in terms of operations) to evaluate it ?</p> <p>Indeed, if you know that your polynomial is $(x-1)^{1024}$, you can do much, much better than computing all the different powers of $x$ and multiply them by their corresponding factor.</p> <p>However, this is not a problem of factorization, as knowing that the polynomial is equal to $(x-1)^{1024} + (x-2)^{1023}$ is also much better than the naive evaluation.</p> <p>Of course, multiplication and addition all have different costs on a computer, but I would be quite glad to understand how to minimize the "total number of operations" (additions + multiplications) for a start ! I had no idea how to look for the corresponding literature, and so I am asking for your help on this one :-)</p> <p>Thank you !</p> <p>Nathann</p> <p>P.S. : <em>I am actually looking for a way, given a polynomial, to obtain a sequence of addition/multiplication that would be optimal to evaluate it. This sequence would of course only work for <strong>THIS</strong> polynomial and no other. It may involve working for hours to find out the optimal sequence corresponding to this polynomial, so that it may be evaluated many times cheaply later on.</em></p>
Daniel McLaury
6,427
<p><strong>EDIT:</strong> Looks like I overlooked that the OP stipulated he wants to minimize the total number of additions and multiplications. (Although he said he wanted to do that &quot;to start,&quot; so arguably the below is still relevant.)</p> <p>However, to address the question as stated, what you are essentially looking for is the <a href="https://en.wikipedia.org/wiki/Arithmetic_circuit_complexity" rel="nofollow noreferrer">arithmetic circuit complexity</a> of the polynomial.</p> <hr /> <p>Consider the polynomial <span class="math-container">$f(x) = nx$</span>, where <span class="math-container">$n$</span> is an integer. Here are two algorithms which will evaluate this polynomial:</p> <p>Algorithm 1. Multiply <span class="math-container">$n$</span> by <span class="math-container">$x$</span>.</p> <p>Algorithm 2. Calculate <span class="math-container">$x + x + \ldots + x$</span>.</p> <p>Which is more efficient? Given fixed <span class="math-container">$n$</span>, this depends on your processor architecture. And this is just about the simplest case imaginable -- we only have one variable, the polynomial is linear, and we're not even thinking about pipelined calculations yet. Also, as mentioned before, you are going to have to formalize the problem in some way which eliminates the &quot;algorithm&quot; consisting of a table giving the value at each machine-sized number. As stated, I don't think the question is answerable.</p>
mathreadler
89,237
<p>If we are on an architecture which has <strong>multiple cores</strong>, <strong>CPU pipelines</strong> and <strong>multi-media extensions (MME)</strong>, then Horner's method really doesn't have to be the best.</p> <p>If the polynomial is large, you can split it into as many bins as you have processor cores. If you have two cores, for example, $$f(x)=\sum_k\alpha_kx^k=x\left(\sum\alpha_{2k+1} (x^2)^k\right) + \sum\alpha_{2k}(x^2)^k$$ with each sum evaluated in a separate thread. Then of course you can apply any optimization to each sum by treating it as a separate polynomial if you want.</p> <hr> <p>Another idea is to precalculate $x,x^2,x^4,\cdots$ and store them in a table; then you can evaluate, say, $x^{43}$ with the help of $43 = 32+8+2+1$: you compute $x \cdot x^2 \cdot x^8 \cdot x^{32}$ by grabbing those four numbers from the table. That's $3$ multiplications (plus the precomputed squarings) instead of the $42$ a naive product needs; in general the count is logarithmic, at worst about $2\lceil\log_2 n\rceil$ multiplications. This can be useful as parallelized MME instructions are common on modern CPUs.</p>
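The table-of-squares idea is just binary (square-and-multiply) exponentiation. A minimal Python sketch that also counts multiplications (the function name is my own):

```python
def pow_and_count(x, n):
    """Compute x**n by repeated squaring; return (value, multiplications used)."""
    result, square, mults = None, x, 0
    while n:
        if n & 1:
            if result is None:
                result = square        # first set bit: no multiply needed
            else:
                result *= square
                mults += 1
        n >>= 1
        if n:
            square *= square           # build x, x^2, x^4, ... on the fly
            mults += 1
    return result, mults

value, mults = pow_and_count(2, 43)    # 43 = 32 + 8 + 2 + 1
assert value == 2**43 and mults == 8   # 5 squarings + 3 combining multiplies
```

Compare with the 42 multiplications of the naive product $x\cdot x\cdots x$.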
2,225,606
<p>Solution: The eigenvalues for $\begin{bmatrix}1.25 &amp; -.75 \\ -.75 &amp; 1.25\end{bmatrix}$ are $2$ and $0.5$. </p> <p>I'm confused on how it's not $1$ and $-1$. If we set up the characteristic matrix: $\begin{bmatrix}5/4 - \lambda &amp; -3/4 \\ -3/4 &amp; 5/4 - \lambda \end{bmatrix}$ </p> <p>$ad-bc=0$</p> <p>$25/16 - \lambda ^2 - 9/16 = 0$</p> <p>$16/16- \lambda ^2=0$</p> <p>$\lambda = 1, -1$</p>
Dave
334,366
<p>The third-to-last line should read $$\frac{25}{16}-\frac{5}{2}\lambda+\lambda^2-\frac{9}{16}=0$$ It looks like you just forgot to expand the bracket $\left(\frac{5}{4}-\lambda\right)^2$ properly using the binomial expansion. Solving $\lambda^2-\frac{5}{2}\lambda+1=0$ then gives $\lambda=2$ and $\lambda=\frac{1}{2}$. </p>
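A quick numerical cross-check (NumPy) confirms the stated eigenvalues:

```python
import numpy as np

A = np.array([[1.25, -0.75],
              [-0.75, 1.25]])

# eigvalsh is appropriate here since A is symmetric
vals = np.sort(np.linalg.eigvalsh(A))
assert np.allclose(vals, [0.5, 2.0])

# and 0.5 really is a root of det(A - lambda*I) = lambda^2 - (5/2)lambda + 1
assert np.isclose(np.linalg.det(A - 0.5 * np.eye(2)), 0.0)
```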
1,336,937
<p>I think: <em>A function $f$, as long as it is measurable, whether Lebesgue integrable or not, always has a Lebesgue integral on any domain $E$.</em></p> <p>However, Royden &amp; Fitzpatrick’s book "Real Analysis" (4th ed) seems to say implicitly that “a function could be integrable without being Lebesgue measurable”. In particular, Theorem 7 on page 103 says: </p> <p><strong>“If function $f$ is bounded on set $E$ of finite measure, then $f$ is Lebesgue integrable over $E$ if and only if $f$ is measurable”.</strong> </p> <p>The book spends half a page proving the direction “$f$ is integrable implies $f$ is measurable”! Even the book “Real Analysis: Measure Theory, Integration, And Hilbert Spaces” of Elias M. Stein &amp; Rami Shakarchi does the same job!</p> <p>This makes me think there is possibly a function that is not bounded, not measurable, but Lebesgue integrable on a set of infinite measure?</p> <p>=== Update: Read the answer of smnoren and me below about the motivation behind the approaches to define Lebesgue integrals. Final conclusion: The starting statement above is still true and doesn't contradict the approach of Royden and Stein.</p>
Thang
34,339
<p>There is a subtle difference in how real analysis textbooks define Lebesgue integrals:</p> <p><strong>I) The approach of Royden &amp; Fitzpatrick (in “Real Analysis” 4th ed) and Stein &amp; Shakarchi (in “Real Analysis: Measure Theory, Integration, And Hilbert Spaces”)</strong></p> <p>Firstly, it defines Lebesgue integrability and the Lebesgue integral for a bounded function (not necessarily measurable) on a domain of finite measure. A bounded function needs to be Lebesgue integrable first (the upper and the lower Lebesgue integrals agree); then the integral can be defined to be this common value. The authors’ motivation is to define “Lebesgue integrability” like “Riemann integrability”: upper integral equals lower integral. </p> <p>However, unfortunately, the upper and lower Lebesgue integrals don’t agree for an arbitrary Lebesgue integrable function, so when the authors move to functions in general (not necessarily bounded), they still have to go back to the requirement of measurability. This sudden appearance of "measurability" is not natural.</p> <p>(Note that the upper/lower Darboux sums in the definition of Riemann integrability can be viewed as integrals of step functions, which are a special case of simple functions. So the “upper/lower Riemann (Darboux) integral” is a special case of the “upper/lower Lebesgue integral”.)</p> <p><strong>II) The approach of Folland (in “Real Analysis: Modern Techniques and Their Applications”), Bruckner &amp; Thomson (in “Real Analysis”), Carothers (in “Real Analysis”), etc.</strong></p> <p>The construction requires a function to be measurable, and defines the Lebesgue integral to be the upper Lebesgue integral; when the integral is finite, the function is said to be Lebesgue integrable. 
</p> <p>This approach doesn’t immediately show how the Lebesgue integral covers the Riemann integral, so later on, the author proves that in the case where a function is bounded and the domain of integration is of finite measure, the upper Lebesgue integral equals the lower Lebesgue integral, which means the Lebesgue integral reduces to the Riemann-style definition.</p>
3,898,818
<p>A (UK sixth form; final year of high school) student of mine raised the interesting question of how to prove that the total angle in the Spiral of Theodorus (formed by constructing successive right-angled triangles with hypotenuses of <span class="math-container">$\sqrt{n}$</span>), diverges.</p> <p>He identified that this is equivalent to proving the divergence of the series <span class="math-container">$$\sum_{r=1}^\infty \arctan \left(\frac{1}{\sqrt r}\right)$$</span> and came up with an interesting proof attempt which didn't conceptually work (although it was very nicely thought of).</p> <p>The best I could offer by way of intuition is that <span class="math-container">$\arctan\left(\frac{1}{\sqrt n}\right) \approx \frac{1}{\sqrt n}$</span>, and the latter series diverges by comparison with <span class="math-container">$\frac{1}{n}$</span>. But the <span class="math-container">$\arctan$</span> value is strictly lesser, so that doesn't convert into a precise proof as far as I can see.</p> <p>He hasn't been taught formal convergence tests (and it's a while since I was taught them!) although I'm sure he'd be very open to learning. However, I can't shake the feeling there ought to be a nice geometrical demonstration that the spiral does indeed keep winding around the starting point.</p>
Raffaele
83,382
<p>It is not &quot;intuition&quot;. By the Maclaurin expansion at <span class="math-container">$x=0$</span> we have</p> <p><span class="math-container">$$\arctan\sqrt{x}= \sqrt{x}+O\left(x^{3/2}\right)$$</span> Therefore <span class="math-container">$$\arctan\sqrt{\frac{1}{n}}\sim \sqrt{\frac{1}{n}};\quad n\to\infty$$</span> so the given series diverges by limit comparison with the divergent series <span class="math-container">$\sum 1/\sqrt{n}$</span>.</p> <p><strong>edit</strong></p> <p>This is a <em>rigorous</em> proof of the divergence.</p>
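The divergence is also easy to see numerically; a rough sketch (the function name is mine) showing that the partial sums keep growing roughly like $2\sqrt N$, just as for $\sum 1/\sqrt r$:

```python
from math import atan, sqrt

def theodorus_angle(N):
    # total turning angle (in radians) after N triangles of the spiral
    return sum(atan(1 / sqrt(r)) for r in range(1, N + 1))

# quadrupling N roughly doubles the partial sum, the signature of 2*sqrt(N) growth
assert theodorus_angle(40_000) > 1.9 * theodorus_angle(10_000)
```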
3,226,028
<h2>Problem</h2> <p>I want to know how to solve the differential equation <span class="math-container">$$ \dot{x} + a\cdot x - b\cdot \sqrt{x} = 0 $$</span> for <span class="math-container">$a&gt;0$</span> and both situations: for <span class="math-container">$b &gt; 0$</span> and <span class="math-container">$b &lt; 0$</span>. </p> <h2>My work</h2> <p>One can separate the variables to obtain: <span class="math-container">$$ \frac{dx}{b\cdot \sqrt{x} - a\cdot x} = dt$$</span> but I do not know how to proceed ... <a href="https://www.wolframalpha.com/input/?i=solve+x%27(t)%2Bax(t)-bsqrt(x(t))+%3D+0" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=solve+x%27(t)%2Bax(t)-bsqrt(x(t))+%3D+0</a> it seems to have an explicit solution ... </p> <h2>Context</h2> <p>This problem occurs in the following context: <span class="math-container">$$ \ddot{X} + a \cdot \dot{X} = f(X)$$</span> then multiplying both sides with <span class="math-container">$2\dot{X}^T$</span> one obtains: <span class="math-container">$$ (\dot{X}^T\dot{X})' + 2a\cdot \dot{X}^T\dot{X} = 2\dot{X}^T f(X)$$</span> Let <span class="math-container">$v= \dot{X}^T \dot{X}$</span> and the above differential equation arises ... </p>
Community
-1
<p>The next step is to integrate,</p> <p><span class="math-container">$$t+c=\int\frac{dx}{b\sqrt x-ax}=\int\frac{d\sqrt x^2}{b\sqrt x-a\sqrt x^2}=\int\frac{2\,d\sqrt x}{b-a\sqrt x}=-\frac2a\log\left(\frac ba-\sqrt x\right).$$</span></p> <p>From this you can solve for <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$x=\left(\frac ba-e^{-a(t+c)/2}\right)^2.$$</span></p>
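One can confirm this closed form symbolically. A SymPy sketch, valid on an interval where $\frac ba - e^{-a(t+c)/2}\ge 0$ so that $\sqrt x = \frac ba - e^{-a(t+c)/2}$:

```python
import sympy as sp

t, c = sp.symbols('t c', real=True)
a, b = sp.symbols('a b', positive=True)

E = sp.exp(-a * (t + c) / 2)
x = (b/a - E)**2

# substitute sqrt(x) = b/a - E (assumed nonnegative) into x' + a*x - b*sqrt(x)
residual = sp.diff(x, t) + a * x - b * (b/a - E)
assert sp.expand(residual) == 0   # the candidate solution satisfies the ODE
```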
31,308
<p>Apologies if my question is poorly phrased. I'm a computer scientist trying to teach myself about generalized functions. (Simple explanations are preferred. -- Thanks.)</p> <p>One of the references I'm studying states that the space of Schwartz test functions of rapid decrease is the set of infinitely differentiable functions $\varphi: \mathbb{R} \rightarrow \mathbb{R}$ such that for all natural numbers $n$ and $r$,</p> <p>$\lim_{x\rightarrow\pm\infty} |x^n \varphi^{(r)}(x)| = 0$</p> <p>What I would like to know is: why is it necessary or important for test functions to decay rapidly in this manner, i.e. faster than the reciprocal of any polynomial? I'd appreciate an explanation of the intuition behind this statement and if possible a simple example.</p> <p>Thanks.</p> <p>EDIT: the OP is actually interested in a particular 1994 paper on spatial statistics by Kent and Mardia, “Link between kriging and thin plate splines”. In Probability, Statistics and Optimization (F. P. Kelly, ed.), Wiley, New York, pp. 325-339.</p> <p>Both are in Statistics at Leeds,</p> <p><a href="http://www.amsta.leeds.ac.uk/~sta6kvm/" rel="nofollow">http://www.amsta.leeds.ac.uk/~sta6kvm/</a> </p> <p><a href="http://www.maths.leeds.ac.uk/~john/" rel="nofollow">http://www.maths.leeds.ac.uk/~john/</a> </p> <p><a href="http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html" rel="nofollow">http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html</a> </p> <p>Scanned article: <a href="http://www.gigasize.com/get.php?d=90wl2lgf49c" rel="nofollow">http://www.gigasize.com/get.php?d=90wl2lgf49c</a> </p> <p>FROM THE OP: Here is motivation for my question: I'm trying to understand a paper that replaces an integral $$\int f(\omega) d\omega$$ with $$\int \frac{|\omega|^{2p + 2}}{ (1 + |\omega|^2)^{p+1}} \; f(\omega) \; d\omega$$ where $p \ge 0$ ($p = -1$ yields the unintegrable expression) because $f(\omega)$ contains a singularity at the origin i.e. 
is of the form $\frac{1}{\omega^2}.$ </p> <p>LATER, ALSO FROM THE OP: I understand some parts of the paper but not all of it. For example, I am unable to justify the equations (2.5) and (2.7). Why do they take these forms and not some other form?</p>
Olumide
7,486
<p>Thanks everyone for the answers given so far. Now for some really ignorant questions from me. I'm really trying to make sense of generalized functions, so here goes:</p> <p>It's often said that the concept of generalized functions helps to assign integrals to otherwise unintegrable functions (pardon my phrasing). What confuses me is why multiplying an otherwise unintegrable function with an "arbitrary test function" and then integrating the product is valid. This seems to me to be the reason for the Schwartz class of test functions; namely functions that can "cool down" faster than any polynomial can blow up. Or in other words, given an ill-behaved, ready-to-blow-up function, a test function that can "tame" it can always be chosen ...</p> <p>Is this right?</p>
2,077,958
<p>Or more abstractly, let $T \in \mathcal{L}(U,V)$ be a linear map over finite dimensional vector spaces, I need to prove that $T^*$ and $T^* T$ have the same range.</p> <p>The direction $v \in range(T^*T) \rightarrow v \in range(T^*)$ is obvious. I'm stuck on the other direction. Suppose $u\in range(T^*)$, then there exists $v \in V$ such that $u = T^*v$. Now how do I show $u \in range(T^*T)$? (I've proved that $T$ and $T^*T$ have the same nullspace, but that doesn't seem helpful here)</p>
A.Γ.
253,273
<p>You know that $u=T^*v$. Let $Tw$ be the orthogonal projection of $v$ onto $\operatorname{range}T$, i.e. $$ v-Tw\bot \operatorname{range}T\quad\Leftrightarrow\quad T^*(v-Tw)=0\quad\Leftrightarrow\quad T^*v=T^*Tw. $$ Hence $u=T^*v=T^*Tw\in\operatorname{range}(T^*T)$.</p>
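The projection argument is easy to test numerically; a NumPy sketch, where $T^*$ is just the transpose and least squares produces the $w$ with $Tw$ the orthogonal projection of $v$ onto $\operatorname{range}T$:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((5, 3))   # T : R^3 -> R^5, adjoint T* = T.T
v = rng.standard_normal(5)

# least squares gives w with Tw = orthogonal projection of v onto range(T)
w, *_ = np.linalg.lstsq(T, v, rcond=None)

assert np.allclose(T.T @ (v - T @ w), 0)   # v - Tw is orthogonal to range(T)
assert np.allclose(T.T @ v, T.T @ T @ w)   # hence T*v = T*T w
```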
1,169,336
<p>Using the formal definition of convergence, prove that $\lim\limits_{n \to \infty} \frac{3n^2+5n}{4n^2 +2} = \frac{3}{4}$.</p> <p>Workings:</p> <p>If $n$ is large enough, $3n^2 + 5n$ behaves like $3n^2$.</p> <p>If $n$ is large enough, $4n^2 + 2$ behaves like $4n^2$.</p> <p>More formally we can find $a,b$ such that $\frac{3n^2+5n}{4n^2 +2} \leq \frac{a}{b} \frac{3n^2}{4n^2}$</p> <p>For $n\geq 2$ we have $3n^2 + 5n \leq 2\cdot 3n^2$.</p> <p>For $n \geq 0$ we have $4n^2 + 2 \geq \frac{1}{2}4n^2$</p> <p>So for $ n \geq \max\{0,2\} = 2$ we have:</p> <p>$\frac{3n^2+5n}{4n^2 +2} \leq \frac{2 \cdot 3n^2}{\frac{1}{2}4n^2} = \frac{3}{4}$ </p> <p>To make $\frac{3}{4}$ less than $\epsilon$:</p> <p>$\frac{3}{4} &lt; \epsilon$, $\frac{3}{\epsilon} &lt; 4$</p> <p>Take $N = \frac{3}{\epsilon}$ </p> <p>Proof:</p> <p>Suppose that $\epsilon &gt; 0$</p> <p>Let $N = \max\{2,\frac{3}{\epsilon}\}$</p> <p>For any $n \geq N$, we have that $n &gt; \frac{3}{\epsilon}$ and $n&gt;2$, therefore</p> <p>$3n^2 + 5n \leq 6n^2$ and $4n^2 + 2 \geq 2n^2$</p> <p>Then for any $n \geq N$ we have</p> <p>$|s_n - L| = \left|\frac{3n^2 + 5n}{4n^2 + 2} - \frac{3}{4}\right|$</p> <p>$ = \frac{3n^2 + 5n}{4n^2 + 2} - \frac{3}{4}$</p> <p>$ = \frac{10n-3}{8n^2+4}$</p> <p>Now I'm not sure on what to do. Any help will be appreciated.</p>
kobe
190,421
<p>Note</p> <p>$$\left|\frac{3n^2 + 5n}{4n^2 + 2} - \frac{3}{4}\right| = \left|\frac{20n - 6}{4(4n^2 + 2)}\right| = \frac{|10n - 3|}{4(2n^2 + 1)} &lt; \frac{10n + 3}{8n^2} &lt; \frac{20n}{8n^2} = \frac{5}{2n}.\tag{1}$$</p> <p>Hence, given $\epsilon &gt; 0$, choosing $N &gt; 5/(2\epsilon)$ will make the left hand side of $(1)$ less than $\epsilon$ whenever $n \ge N$. Therefore</p> <p>$$\lim_{n\to \infty} \frac{3n^2 + 5n}{4n^2 + 2} = \frac{3}{4}.$$</p>
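As a quick sanity check of the bound in $(1)$ (not part of the proof itself), one can verify numerically that $|s_n - 3/4| < \frac{5}{2n}$ for a large range of $n$:

```python
# Numeric check of the bound |s_n - 3/4| < 5/(2n) used in the answer.
def s(n):
    return (3 * n**2 + 5 * n) / (4 * n**2 + 2)

for n in range(1, 10_000):
    assert abs(s(n) - 3 / 4) < 5 / (2 * n)
```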
3,296,122
<p>I was given a problem that specified a matrix <span class="math-container">$A$</span> along with its determinant, and asked for the determinant of another matrix <span class="math-container">$B$</span> whose entries were scalar multiples of those of <span class="math-container">$A$</span>. What I did was to factor the scalar out of the matrix and substitute the determinant of <span class="math-container">$A$</span>. However, to my great surprise, I found out that the scalar has to be factored out of each row separately, so that it ends up raised to the power of the number of rows before the determinant is taken. This took me aback and left me absolutely confused. I am looking for a clear-cut (yet elementary) explanation of this.</p>
amd
265,466
<p>One of the basic properties of determinants that you should have learned is that they are multilinear functions of the columns (or rows) of a matrix. Indeed, some treatments start with this and a few additional properties as the definition of a determinant and then derive from them the formulas that you’re likely familiar with. </p> <p>Specifically, if <span class="math-container">$A_i$</span> denotes the <span class="math-container">$i$</span>th column of the <span class="math-container">$n\times n$</span> matrix <span class="math-container">$A$</span>, then (with a slight abuse of notation) we can write <span class="math-container">$\det A = \det(A_1,\dots,A_n)$</span>. The multilinearity property says among other things that for every <span class="math-container">$1\le k\le n$</span>, <span class="math-container">$$\det(A_1,\dots,cA_k,\dots,A_n)=c\det(A_1,\dots,A_k,\dots,A_n).$$</span> That is, multiplying a single column of <span class="math-container">$A$</span> by <span class="math-container">$c$</span> multiplies its determinant by <span class="math-container">$c$</span> as well. With <span class="math-container">$cA$</span>, you’ve multiplied <em>every</em> column of <span class="math-container">$A$</span> by <span class="math-container">$c$</span>, therefore <span class="math-container">$\det(cA)=c^n\det(A)$</span>.</p>
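Here is a small numerical illustration of both facts (a sketch using NumPy; the random matrix and scalar are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 4, 2.5
A = rng.standard_normal((n, n))

# Scaling every column (i.e. the whole matrix) by c scales det by c^n.
assert np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A))

# Scaling a single column by c scales det by c.
B = A.copy()
B[:, 2] *= c
assert np.isclose(np.linalg.det(B), c * np.linalg.det(A))
```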
1,092,091
<p>I wonder how I can calculate the distance between two coordinates in a $3D$ coordinate system. I've read about <em><a href="http://www.purplemath.com/modules/distform.htm" rel="nofollow">the distance formula</a></em>:</p> <p>$$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$$</p> <p>(How) Can I use that in $3D$ coordinates, or is there any other method?</p> <p>Thanks!</p>
Praveen
140,774
<p>The distance between two points $(x_1,y_1,z_1)$ and $(x_2,y_2,z_2)$ is given by</p> <p>$$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2+(z_2 - z_1)^2}$$</p>
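A minimal implementation of this formula (the function name `distance3d` is just for illustration):

```python
from math import dist, sqrt

def distance3d(p, q):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return sqrt(sum((b - a) ** 2 for a, b in zip(p, q)))

assert distance3d((0, 0, 0), (1, 2, 2)) == 3.0  # sqrt(1 + 4 + 4) = 3
assert distance3d((1, 1, 1), (1, 1, 1)) == 0.0
# The standard library's math.dist (Python 3.8+) computes the same thing.
assert distance3d((0, 0, 0), (3, 4, 12)) == dist((0, 0, 0), (3, 4, 12))
```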
52,299
<p>Hello everybody.</p> <p>I'm looking for an "easy" example of a (non-zero) holomorphic function $f$ with almost everywhere vanishing radial boundary limits: $\lim\limits_{r \rightarrow 1-} f(re^{i\phi})=0$.</p> <p>Does anyone know such an example.</p> <p>Best CJ</p>
Andrey Rekalo
5,371
<p>I am not sure if this would qualify as 'easy' but the first example of such a function was constructed by Lusin. It can be found in N. Lusin, J. Priwaloff, <a href="http://archive.numdam.org/ARCHIVE/ASENS/ASENS_1925_3_42_/ASENS_1925_3_42__143_0/ASENS_1925_3_42__143_0.pdf" rel="noreferrer">Sur l'unicité et la multiplicité des fonctions analytiques</a>, <em>Ann. Sci. École Norm. Sup.</em> (3), 1925, p. 143-191 (see p. 185).</p>
150,180
<p>I am trying to read Gross's paper on Heegner points, and a few points seem ambiguous to me:</p> <p>Gross (page 87) said that $Y=Y_{0}(N)$ is the open modular curve over $\mathbb{Q}$ which classifies ordered pairs $(E,E^{'})$ of elliptic curves together with cyclic isogeny $E\rightarrow E^{'}$ of degree $N$. At some steps Gross uses the cyclic isogeny between two elliptic curves over $\mathbb{C}$. One of the books that I have read to understand the theory of modular curves is "A first course in Modular forms, written by Fred Diamond and Jerry Shurman". </p> <p>Theorem 1.5.1.(page 38)</p> <p>Let $N$ be a positive integer. </p> <p>(a) The moduli space for $\Gamma_{0}(N)$ is $$S_{0}(N)=\{[E_{\tau},\langle1/N+\Lambda_{\tau}\rangle]:\tau\in H\},$$ with $H$ the upper half plane. Two points $[E_{\tau},\langle1/N+\Lambda_{\tau}\rangle]$ and $[E_{\tau^{'}},\langle1/N+\Lambda_{\tau^{'}}\rangle]$ are equal if and only if $\Gamma_{0}(N)\tau=\Gamma_{0}(N)\tau^{'}$. Thus there is a bijection $$\psi:S_{0}(N)\rightarrow Y_{0}(N), [\mathbb{C}/\Lambda_{\tau},\langle1/N+\Lambda_{\tau}\rangle]\mapsto \Gamma_{0}(N)\tau.$$</p> <p>So, how can we make the link between the equivalence classes of the enhanced elliptic curves for $\Gamma_{0}(N)$ (= the equivalence classes of $(E,C)$ where $E$ is a complex elliptic curve and $C$ is a cyclic subgroup of $E$ of order $N$) defined in Diamond/Shurman's book and the cyclic isogenies $E\rightarrow E^{'}$ of degree $N$ used by Gross.</p> <p>I also ask if there is any other paper which explains the theory of Heegner points explicitly? </p> <p>I have looked at Darmon's note and the Gross-Zagier paper "Heegner points and derivatives of L-series", and it seems that both were influenced by Gross's paper! Is there any other paper which explains Heegner points explicitly and independently of Gross's paper?</p> <p>(I keep this post open for any further question about Gross's paper and I apologise for any mistakes in my English.)</p> <p>Thank you.</p>
Eric Wofsey
75
<p>Here's a negative answer: there can be no self-dual way to pass from the category of finitely generated projective modules to the category of all finitely generated modules (over, say, a Noetherian ring). To see this, note that the category of finitely generated projective modules is self-dual via the functor $Hom(-,R)$, but the inclusion of projective modules into all modules is obviously not usually self-dual (for instance, since projective modules are not the same as injective modules).</p> <p>Thus you cannot say that coherent sheaves are the "abelian envelope" of vector bundles; any construction of coherent sheaves from vector bundles must in some way care about which direction maps are going (e.g. ya-tayr's suggestion, which adjoins formal cokernels of maps of vector bundles but not formal kernels).</p>
2,886,544
<p>For some set $V \subset [a,b]^d$, define the convex hull of $V$ as the set</p> <p>$$\{\lambda_1v_1 + ... + \lambda_kv_k: \ \lambda_i \ge 0, \ v_i \in V, \ \sum_{i=1}^k \lambda_i = 1, k = 1, 2, 3, ...\}.$$</p> <p>I don't understand why exactly these vectors form the convex hull of $V$. Why wouldn't I be able to choose $\lambda_i = 1$ and $\lambda_j = 0$ for $j \neq i$ and thus make every $v_i \in V$ be a part of the convex hull?</p>
Bernard
202,857
<p>Any convex set $C$ which contains $v_1,\dots,v_k$, also contains the segments $[v_i,v_j]\;(1\le i,j\le k)$. These segments have a parametric representation $\;tv_i+(1-t)v_j\;(0\le t\le1)$, which we may rewrite, setting $\lambda_i=t$, $\lambda_j=1-t$: $$\lambda_iv_i+\lambda_jv_j\in C,\quad\lambda_i,\lambda_j\ge 0,\enspace\lambda_i+\lambda_j=1.$$ In other words, $C$ must contain all barycenters of $v_i$ and $v_j$ with non-negative weights.</p> <p>An easy induction shows it must contain all barycenters of $v_1,\dots, v_k$ with non-negative weights, which is essentially what the definition says.</p>
638,012
<p>I want to know if there exists a generalization of L'Hopital rule in $n$ dimensions? For example, let us consider this <a href="http://answers.yahoo.com/question/index?qid=20101028114107AAtmZ9l" rel="nofollow">problem</a>.</p> <p>There it is just said that we should take separate path and see if they will end with same number,but can't we generalize L'Hopital rule and partial derivatives in $n$ dimensions? Just a little hint will help me too much.</p>
Lutz Lehmann
115,115
<p>L'Hopital's rule is, in its essence, an extension or corollary of the (extended) mean value theorem. There is no such simple mean value theorem in the vector case. </p> <p>Your example</p> <p>$$\frac{4x^2-y^2}{2x-y}=\frac{(2x-y)(2x+y)}{2x-y}$$ </p> <p>works because you can cancel the common factor. But take any example where $(1,2)$ is still the common zero but now without a common factor, like</p> <p>$$\frac{x+1-y}{5x-1-2y},$$</p> <p>then on almost every line through $(1,2)$ you get a limit by l'Hopital's rule, but the limit is different for every direction.</p>
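The direction dependence in this example is easy to see numerically: along the line through $(1,2)$ with direction $(a,b)$, the quotient is the constant $(a-b)/(5a-2b)$. A small sketch (the helper name `ratio` is hypothetical):

```python
# Along the line (1,2) + t*(a,b), numerator = (a-b)*t and denominator = (5a-2b)*t,
# so the quotient is constant in t and depends only on the direction (a,b).
def ratio(a, b, t=1e-6):
    x, y = 1 + a * t, 2 + b * t
    return (x + 1 - y) / (5 * x - 1 - 2 * y)

r1 = ratio(1, 0)  # direction (1,0): (1-0)/(5-0) = 1/5
r2 = ratio(0, 1)  # direction (0,1): (0-1)/(0-2) = 1/2
assert abs(r1 - 1 / 5) < 1e-6 and abs(r2 - 1 / 2) < 1e-6
assert abs(r1 - r2) > 0.1  # different directions, different "limits"
```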
638,012
<p>I want to know if there exists a generalization of L'Hopital rule in $n$ dimensions? For example, let us consider this <a href="http://answers.yahoo.com/question/index?qid=20101028114107AAtmZ9l" rel="nofollow">problem</a>.</p> <p>There it is just said that we should take separate path and see if they will end with same number,but can't we generalize L'Hopital rule and partial derivatives in $n$ dimensions? Just a little hint will help me too much.</p>
pppqqq
58,784
<p>Some thoughts.</p> <p>Let $f,g\colon \mathbb R ^2\to \mathbb R$ be $C^2$ functions over $\mathbb R ^2$ such that $f(0,0)=g(0,0)=0$. Suppose that $g$ is injective in a neighborhood of $(0,0)$, so that $\frac{f(x,y)}{g(x,y)}$ is well defined.</p> <p>Given a pair $(x,y)$ sufficiently near the origin, it is true that $$\dfrac{f(x,y)}{g(x,y)}=\dfrac{\partial _x f(\tau x,\tau y)x+\partial _y f(\tau x,\tau y)y}{\partial _x g(\tau x,\tau y)x+\partial _y g(\tau x,\tau y)y},\qquad \tau \in (0,1),$$ (this is Cauchy's mean value theorem applied to the functions $f \circ \gamma$ and $g\circ \gamma$, $\gamma(t)=(tx,ty)$).</p> <p>So, if $$\ell = \lim_{(x,y)\to (0,0)}\dfrac{\partial _x f(x,y) \cdot x + \partial _y f(x,y) \cdot y}{\partial _x g(x,y) \cdot x + \partial _y g(x,y) \cdot y}$$</p> <p>exists, then the limit of $\frac{f}{g}$ also exists and is equal to $\ell$.</p> <hr> <p><strong>EDIT</strong>. After some thought, I'm not sure that the existence of this limit implies what I was claiming. I think some extra hypotheses are required, but I can't see which ones right now. Actually I'm trying to prove it using only $C^2$ continuity. </p> <hr> <p><strong>EDIT2</strong>: Proof of the claim.</p> <p>Suppose $\ell$ exists. The function $$h(x,y)=\dfrac{\partial _x f(x,y) \cdot x + \partial _y f(x,y) \cdot y}{\partial _x g(x,y) \cdot x + \partial _y g(x,y) \cdot y}$$ is continuous, hence locally uniformly continuous, so given $\varepsilon &gt;0$ we can find $\delta&gt;0$ such that $$|h(x',y')-h(x,y)|&lt;\varepsilon$$ if $|(x,y)|&lt;\delta$ and $|(x',y')|&lt;\delta$. 
</p> <p>So if we take $|(x,y)|&lt;\delta$, noting that by the Cauchy mean value identity above $$\dfrac{f(x,y)}{g(x,y)}=h(\tau x,\tau y)=\dfrac{\partial _x f(\tau x,\tau y)x+\partial _y f(\tau x,\tau y)y}{\partial _x g(\tau x,\tau y)x+\partial _y g(\tau x,\tau y)y},$$ we have $$\left|\dfrac{f(x,y)}{g(x,y)}-\ell\right|=|h(\tau x,\tau y)-\ell|\leq |h(\tau x, \tau y)-h(x,y)|+|h( x , y) -\ell|&lt;2\varepsilon$$ (shrinking $\delta$ if necessary).</p> <hr> <p>The hypothesis on $g$ can be replaced with the much weaker one that $g(x,y)\neq g(0,0)$ in a punctured neighborhood of $(0,0)$. However, I don't know how easy the latter is to verify. In the special case of radial functions $g(x^2+y^2)$, it reduces to the crucial hypothesis for applying l'Hopital's rule: $\frac{\mathrm{d}g}{\mathrm{d}r}\neq 0$.</p>
4,084,624
<p>A cool problem I was trying to solve today but I got stuck on:</p> <p>Find the maximum possible value of <span class="math-container">$x + y + z$</span> in the following system of equations:</p> <p><span class="math-container">$$\begin{align} x^2 – (y– z)x – yz &amp;= 0 \tag1 \\[4pt] y^2 – \left(\frac8{z^2}– x\right)y – \frac{8x}{z^2}&amp;= 0 \tag2\\[4pt] z^2 – (x – y)z – xy &amp;= 0 \tag3 \end{align}$$</span></p> <p>I tried extending the first equation to <span class="math-container">$$x^2 - xy + xz - yz = 0 \tag4$$</span></p> <p>I then did the same thing for the second equation and got <span class="math-container">$$y^2 - \frac{8y}{z^2} + xy - \frac{8x}{z^2} = 0 \tag5$$</span></p> <p>For the 3rd equation: <span class="math-container">$$z^2 - xz + yz - xy = 0 \tag6$$</span></p> <p>I realized that adding the first and third equations got <span class="math-container">$$x^2 + z^2 - 2xy = 0 \tag7$$</span></p> <p>I also realized that the 2nd equation could be written as <span class="math-container">$$y^2 - \frac{8}{z^2}(x + y) + xy = 0 \tag8$$</span> but I couldn't get much more. I was thinking that graphs could help me here but I don't have much of an idea.</p> <p>It would be really helpful if someone could explain to me what I could do further to solve the problem.</p>
spiral_being
908,530
<p>Let's begin with an attempt.<br /> From (4): <span class="math-container">$y=\frac{x(x+z)}{x+z}=x$</span>, provided <span class="math-container">$x \neq -z$</span> (otherwise the denominator <span class="math-container">$x + z$</span> would be <span class="math-container">$0$</span>).<br /> From (7): <span class="math-container">$x^2 + z^2 - 2x^2 = 0 \implies x^2 = z^2$</span>, so <span class="math-container">$x = \pm z$</span>.<br /> So if <span class="math-container">$x \neq -z$</span>, then <span class="math-container">$x = y = z$</span></p>
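As a sanity check going one step beyond the attempt above (an assumption, not derived there): substituting $x=y=z$ into equation $(2)$ gives $2x^2-16/x=0$, i.e. $x=2$, and $x=y=z=2$ indeed satisfies all three equations exactly:

```python
from fractions import Fraction

# Exact arithmetic check that x = y = z = 2 solves the system (1)-(3).
x = y = z = Fraction(2)
assert x**2 - (y - z) * x - y * z == 0                          # equation (1)
assert y**2 - (Fraction(8) / z**2 - x) * y - 8 * x / z**2 == 0  # equation (2)
assert z**2 - (x - y) * z - x * y == 0                          # equation (3)
assert x + y + z == 6
```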
3,881,390
<p>I tried multiplying both sides by <span class="math-container">$4a$</span>, which leads to <span class="math-container">$(6x+4)^2\equiv 40 \pmod{372}$</span>. Now I'm stuck on how to find a square root modulo <span class="math-container">$372$</span>.</p>
Robert Israel
8,508
<p>First multiply by <span class="math-container">$3^{-1} \equiv 21 \mod 31$</span> to get <span class="math-container">$x^2 + 22 x + 20 \equiv 0 \mod 31$</span>. Then complete the square to get <span class="math-container">$(x+11)^2 \equiv 101 \equiv 8 \mod 31$</span>. Now the square roots of <span class="math-container">$8$</span> mod <span class="math-container">$31$</span> are <span class="math-container">$15$</span> and <span class="math-container">$16$</span>, so <span class="math-container">$x+11 \equiv 15$</span> or <span class="math-container">$16$</span> and <span class="math-container">$x \equiv 4$</span> or <span class="math-container">$5$</span> mod <span class="math-container">$31$</span>.</p> <p>Why, you ask, are those the square roots of <span class="math-container">$8$</span> mod <span class="math-container">$31$</span>? Well, <span class="math-container">$2^5 = 32 \equiv 1$</span>, so <span class="math-container">$8 = 2^3 \equiv 2^{-2}$</span>, and one of the square roots of that is <span class="math-container">$2^{-1} \equiv 16$</span>. The other is <span class="math-container">$-16 \equiv 15$</span>.</p> <p>Of course it might have been simpler to just compute <span class="math-container">$3x^2 + 4 x - 2 \mod 31$</span> for each <span class="math-container">$x$</span> from <span class="math-container">$0$</span> to <span class="math-container">$30$</span> until you find the roots.</p>
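The brute-force check mentioned in the last paragraph is easy to carry out:

```python
p = 31
# Brute-force the roots of 3x^2 + 4x - 2 ≡ 0 (mod 31).
roots = [x for x in range(p) if (3 * x * x + 4 * x - 2) % p == 0]
assert roots == [4, 5]

# And the square roots of 8 mod 31 really are 15 and 16.
assert sorted(x for x in range(p) if (x * x) % p == 8) == [15, 16]
```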
1,828,729
<p>I am trying to solve this summation problem: $$\sum\limits_{k = 0}^\infty \binom{n + k}{2k} \binom{2k}{k} \frac{(-1)^k}{k + 1}$$ I would be grateful if someone could help me!</p>
lab bhattacharjee
33,337
<p>HINT:</p> <p>The equation of the tangent at $(t,t^4-2t^2-t)$ will be</p> <p>$$\dfrac{y-(t^4-2t^2-t)}{x-t}=4t^3-4t-1$$</p> <p>$$x(4t^3-4t-1)-y=3t^4-2t^2$$</p> <p>So, the equation of the tangents at $(x_1,y_1),(x_2,y_2)$ will respectively be</p> <p>$$x(4x_1^3-4x_1-1)-y=3x_1^4-2x_1^2\ \ \ \ (1)$$</p> <p>$$x(4x_2^3-4x_2-1)-y=3x_2^4-2x_2^2\ \ \ \ (2)$$</p> <p>These two will represent the same straight line iff</p> <p>$$\dfrac{4x_1^3-4x_1-1}{4x_2^3-4x_2-1}=\dfrac{-1}{-1}=\dfrac{3x_1^4-2x_1^2}{3x_2^4-2x_2^2}$$ </p> <p>$$\implies4x_1^3-4x_1-1=4x_2^3-4x_2-1\text{ and }3x_1^4-2x_1^2=3x_2^4-2x_2^2$$ with $x_1\ne x_2$</p> <p>Can you take it home from here?</p>
152,620
<p>The following question is from Golan's linear algebra book. I have posted a solution in the answers. </p> <p><strong>Problem:</strong> Let $F$ ba field and let $V$ be a vector subspace of $F[x]$ consisting of all polynomials of degree at most 2. Let $\alpha:V\rightarrow F[x]$ be a linear transformation satisfying</p> <p>$\alpha(1)=x$</p> <p>$\alpha(x+1)=x^5+x^3$</p> <p>$\alpha(x^2+x+1)=x^4-x^2+1$.</p> <p>Determine $\alpha(x^2-x)$.</p>
Potato
18,240
<p>The idea here is to determine the action on each of the basis elements $1$, $x$, and $x^2$. By linearity we see </p> <p>$\alpha(x)=\alpha((x+1)-1)=\alpha(x+1)-\alpha(1)=x^5+x^3-x$</p> <p>$\alpha(x^2)=\alpha((x^2+x+1)-(x+1))=\alpha(x^2+x+1)-\alpha(x+1)=-x^5+x^4-x^3-x^2+1$</p> <p>Using linearity again we see</p> <p>$\alpha(x^2-x)=\alpha(x^2)-\alpha(x)=-x^5+x^4-x^3-x^2+1-x^5-x^3+x=-2x^5+x^4-2x^3-x^2+x+1$</p> <p>and if the field is of characteristic 2, the $x^5$ and $x^3$ terms disappear. </p>
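The same computation can be done mechanically with coefficient lists (index = power of $x$; the helper names `add` and `scale` are just for illustration):

```python
# Polynomials as coefficient lists, index = power of x.
def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    return [c * a for a in p]

# Given data: alpha(1), alpha(x+1), alpha(x^2+x+1) as coefficient lists.
a1   = [0, 1]                     # x
ax1  = [0, 0, 0, 1, 0, 1]         # x^5 + x^3
axx1 = [1, 0, -1, 0, 1]           # x^4 - x^2 + 1

ax  = add(ax1, scale(-1, a1))     # alpha(x)   = alpha(x+1) - alpha(1)
ax2 = add(axx1, scale(-1, ax1))   # alpha(x^2) = alpha(x^2+x+1) - alpha(x+1)
result = add(ax2, scale(-1, ax))  # alpha(x^2 - x)

# Matches -2x^5 + x^4 - 2x^3 - x^2 + x + 1 from the answer.
assert result == [1, 1, -1, -2, 1, -2]
```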
56,082
<p>Suppose I have a nested list such as,</p> <pre><code>{{{A, B}, {A, D}}, {{C, D}, {A, A}, {H, A}}, {{A, H}}} </code></pre> <p>Where the elements of interest are,</p> <blockquote> <pre><code>{{A, B}, {A, D}} {{C, D}, {A, A}, {H, A}} {{A, H}} </code></pre> </blockquote> <p>How would I use <code>Select</code> to pick out only the elements that contain two or more <code>A</code>s in the first part of their sub-elements. In this example I would want the following as an output,</p> <blockquote> <pre><code>{{A,B},{A,D}} </code></pre> </blockquote>
kglr
125
<pre><code>list = {{{a, b}, {a, d}}, {{c, d}, {a, a}, {h, a}}, {{a, h}}} Pick[list, Count[#[[All, 1]], a] &gt;= 2 &amp; /@ list] </code></pre> <p>or</p> <pre><code>Select[list, Count[#[[All, 1]], a] &gt;= 2 &amp;] </code></pre> <p>or</p> <pre><code>Cases[list, _?(Count[#[[All, 1]], a] &gt;= 2 &amp;)] </code></pre> <p>or</p> <pre><code>DeleteCases[list, _?(! Count[#[[All, 1]], a] &gt;= 2 &amp;)] </code></pre> <p>all give</p> <pre><code>{{{a, b}, {a, d}}} </code></pre>
3,242,363
<blockquote> <p>Why does this function, <span class="math-container">$$\tan\left(x ^ {1/x}\right)$$</span> have a maximum value at <span class="math-container">$x=e$</span>?</p> </blockquote> <p><a href="https://i.stack.imgur.com/pqE0Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pqE0Q.png" alt="Graph"></a></p>
Ovi
64,460
<p>This type of math is called "solving a system of linear equations". You should be able to find it in any sort of algebra or precalculus book. </p> <p>You should take the bottom two equations</p> <p><span class="math-container">$a=(x+b) \cdot 0.029+0.3$</span></p> <p><span class="math-container">$b=(x+a) \cdot 0.015$</span></p> <p>We have <span class="math-container">$2$</span> equations and <span class="math-container">$2$</span> unknown variables (<span class="math-container">$a$</span> and <span class="math-container">$b$</span>). Using the techniques of solving a system of linear equations, we should be able to reduce this to <span class="math-container">$a =$</span> "stuff with <span class="math-container">$x$</span>" and <span class="math-container">$b=$</span> "other stuff with <span class="math-container">$x$</span>". Then you can plug these into your first equation, and you have <span class="math-container">$y$</span>.</p> <p>Let me know if you need more details. Going to sleep now.</p>
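A sketch of that elimination step in code, using the two equations quoted above rearranged into matrix form (`solve_ab` is a hypothetical helper name, and the coefficients are the ones from the answer):

```python
import numpy as np

def solve_ab(x):
    # Rearranged system:
    #   a - 0.029*b = 0.029*x + 0.3
    #  -0.015*a + b = 0.015*x
    M = np.array([[1.0, -0.029], [-0.015, 1.0]])
    rhs = np.array([0.029 * x + 0.3, 0.015 * x])
    return np.linalg.solve(M, rhs)

a, b = solve_ab(100.0)
# The solution satisfies both original equations.
assert abs(a - ((100.0 + b) * 0.029 + 0.3)) < 1e-9
assert abs(b - ((100.0 + a) * 0.015)) < 1e-9
```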
446,197
<blockquote> <p>Dudley Do-Right is riding his horse at his top speed of $10m/s$ toward the bank, and is $100m$ away when the bank robber begins to accelerate away from the bank going in the same direction as Dudley Do-Right. The robber's distance, $d$, in metres away from the bank after $t$ seconds can be modelled by the equation $d=0.2t^2$. Write a model for the position of Dudley Do-Right as a function of time.</p> </blockquote> <p>The answer is $d=10t-100$. </p> <p>My question is how do you know that it is $-100$, and not $100$? </p> <p>Thanks in advance for your help.</p>
John Joy
140,156
<p>Let's not make extra work for ourselves. You already have a $\cot^2\theta$, so keep it. $$\begin{array}{lll} 3(\cot^2\theta+1)-\csc^2\theta-1&amp;=&amp;(\cot^2\theta+1)+2(\cot^2\theta+1)-\csc^2\theta-1\\ &amp;=&amp;\cot^2\theta+2(\cot^2\theta+1)-\csc^2\theta\\ &amp;=&amp;\cot^2\theta+2\csc^2\theta-\csc^2\theta\\ &amp;=&amp;\cot^2\theta+\csc^2\theta \end{array}$$</p>
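A quick numerical spot-check of the resulting identity $3(\cot^2\theta+1)-\csc^2\theta-1=\cot^2\theta+\csc^2\theta$:

```python
from math import sin, tan

# Check the identity at several angles where tan and sin are nonzero.
for k in range(1, 10):
    t = 0.1 * k
    cot2 = 1 / tan(t) ** 2
    csc2 = 1 / sin(t) ** 2
    lhs = 3 * (cot2 + 1) - csc2 - 1
    rhs = cot2 + csc2
    assert abs(lhs - rhs) < 1e-9
```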
719,681
<p>There are 2 similar questions on <span class="math-container">$\log$</span> that I'm unable to solve. </p> <ol> <li><p>Given that <span class="math-container">$\log_a xy^2 = p$</span> and <span class="math-container">$\log_a x^2/y^3 = q $</span>. Express <span class="math-container">$\log_a 1/\sqrt{xy}$</span> or <span class="math-container">$\log_a 1/(xy)^{1/2}$</span> in terms of <span class="math-container">$p$</span> and <span class="math-container">$q$</span> (<span class="math-container">$a$</span> is the base). I was thinking along the line of using <span class="math-container">$p - q$</span> but I can't seem to get <span class="math-container">$y^{1/2}$</span>. The answers are <span class="math-container">$\frac{3p+2q}7$</span> and <span class="math-container">$-\frac{5p+q}{14}$</span></p></li> <li><p>Given that <span class="math-container">$\log_b(x^3y^2) = p$</span> and <span class="math-container">$\log_b(y/x) = q$</span>. Express <span class="math-container">$\log_b(x^2y)$</span> in terms of <span class="math-container">$p$</span> and <span class="math-container">$q$</span> .(<span class="math-container">$b$</span> is the base)</p></li> </ol>
Mike
17,976
<p>Hint: Try solving for $\log x$ and $\log y$ in terms of $p$ and $q$. Then use that result to get the values you're looking for.</p>
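To illustrate the hint on the first question (a sketch with hypothetical helper names): writing $X=\log_a x$ and $Y=\log_a y$, the givens become the linear system $p=X+2Y$, $q=2X-3Y$, whose solution is $X=\frac{3p+2q}7$, $Y=\frac{2p-q}7$; then $\log_a \frac1{\sqrt{xy}}=-\frac{X+Y}2=-\frac{5p+q}{14}$.

```python
from fractions import Fraction

# X = log_a x, Y = log_a y; solve p = X + 2Y, q = 2X - 3Y by elimination.
def solve(p, q):
    X = Fraction(3 * p + 2 * q, 7)
    Y = Fraction(2 * p - q, 7)
    return X, Y

# Spot-check: pick X, Y, form p and q, and verify we recover them exactly.
for X0 in range(-3, 4):
    for Y0 in range(-3, 4):
        p, q = X0 + 2 * Y0, 2 * X0 - 3 * Y0
        X, Y = solve(p, q)
        assert (X, Y) == (X0, Y0)
        # log_a(1/sqrt(xy)) = -(X+Y)/2 = -(5p+q)/14
        assert -(X + Y) / 2 == Fraction(-(5 * p + q), 14)
```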
1,028,720
<p>I was wondering the following:</p> <blockquote> <p><strong>Background Question:</strong> Does there exist a Banach space $X$ which contains a copy $X_0 \subset X$ of itself that is not complemented?</p> </blockquote> <p>By "$X_0$ is a copy of $X$", I mean $X_0 \cong X$ via an invertible, bounded, linear map. Some googling turned up that the answer to the above question is "yes". This <a href="https://mathoverflow.net/questions/140557/quotients-of-linfty">thread</a> points to a <a href="http://archive.numdam.org/ARCHIVE/CM/CM_1981__43_1/CM_1981__43_1_133_0/CM_1981__43_1_133_0.pdf" rel="nofollow noreferrer">paper</a> containing an example of an uncomplemented copy of $\ell^1$ in $\ell^1$. Since the proof seems to depend on some pretty fiddly analysis, and since answering the above question is only incidental to the aims of that paper, I was wondering whether there might be a simpler example of this phenomenon. In particular, I thought that the following example might work:</p> <blockquote> <p><strong>Example(?):</strong> The most famous, and probably the most elementary, example of an uncomplemented subspace is the canonical copy of $c_0$ in $\ell_\infty$. With this in mind, I thought maybe one could use the following: \begin{align*} X = c_0 \oplus \ell_\infty \oplus \ell_\infty \oplus \ell_\infty \oplus \ldots &amp;&amp; X_0 = \{0\} \oplus c_0 \oplus \ell_\infty \oplus \ell_\infty \oplus \ell_\infty \oplus \ldots \end{align*} The hope is that, if $F \subset X$ was a complement for $X_0$, then this would imply $\pi_2(F) \subset \ell_\infty$ was a complement for $c_0 \subset \ell_\infty$ (a contradiction). Here, $\pi_2$ is projection onto the second factor $X \to \ell_\infty$. </p> </blockquote> <p>Towards completing the argument, I have already asked <a href="https://math.stackexchange.com/questions/1027012/if-e-is-not-complemented-in-x-is-e-oplus-0-not-complemented-in-x-o">this question</a>, but it hasn't attracted much attention. 
So, I am asking this (hopefully) better motivated question in addition. So, to reiterate, my question is:</p> <blockquote> <p><strong>Question:</strong> Is $\{0\} \oplus c_0 \oplus \ell_\infty \oplus \ell_\infty \oplus \ell_\infty \oplus \ldots$ complemented in $c_0 \oplus \ell_\infty \oplus \ell_\infty \oplus \ell_\infty\oplus \ldots$?</p> </blockquote>
Tomasz Kania
17,929
<p>By the <a href="https://en.wikipedia.org/wiki/Banach%E2%80%93Mazur_theorem" rel="nofollow noreferrer">Banach-Mazur theorem</a>, every separable Banach space embeds into $C[0,1]$; in particular $C[0,1]\oplus \ell_2$ does. However, no copy of this space in $C[0,1]$ can be complemented, because if it were, so would be $\ell_2$ (as <a href="https://math.stackexchange.com/questions/1027012/if-e-is-not-complemented-in-x-is-e-oplus-0-not-complemented-in-x-o/1030066#1030066">the relation of being complemented is transitive</a>), but $\ell_2$ is reflexive and $C[0,1]$ has the <a href="http://en.wikipedia.org/wiki/Dunford%E2%80%93Pettis_property" rel="nofollow noreferrer">Dunford-Pettis property</a>. Now embed $C[0,1]\oplus \ell_2$ into $C[0,1]$ and form a direct sum with $\ell_2$ to obtain a concrete example.</p> <p>Regarding your second question, these two spaces are isomorphic; however, if you treat the former as a subspace of the latter, then it is not complemented, by the <a href="https://eudml.org/doc/38637" rel="nofollow noreferrer">Phillips–Sobczyk</a> theorem.</p> <p>More generally, if $K$ is a compact metric space such that $C(K)$ is not isomorphic to $c_0$ then $C(K)$ contains an uncomplemented copy of itself (<em>e.g.</em>, take $K=[0,1]$). By a result of Bourgain, both $\ell_1$ and $L_1$ contain uncomplemented copies of themselves. 
The same is true for $\ell_p$ where $p\neq 2,\infty$ as well as for the <a href="http://en.wikipedia.org/wiki/Tsirelson_space" rel="nofollow noreferrer">Tsirelson space</a> and its dual.</p> <p>To the best of my knowledge the full list of so-far known spaces $X$ with the property that every isomorphic copy of $X$ in $X$ is complemented reads as follows:</p> <ul> <li>injective Banach spaces, <em>e.g.</em>, $\ell_\infty(\Gamma)$ or $C(K)$ where $K$ is extremely disconnected,</li> <li>Hilbert spaces (separable or not),</li> <li>$c_0(\Gamma)$ for any set $\Gamma$,</li> <li>$c_0(\Gamma) \oplus H$ where $H$ is a Hilbert space,</li> <li>hereditarily indecomposable Banach spaces and indecomposable $C(K)$-spaces,</li> <li>certain finite sums of the above-mentioned spaces but not all sums: $c_0\oplus \ell_\infty$ contains an uncomplemented copy of itself.</li> </ul> <p>Everybody is welcome to extend this list.</p>
402,427
<p><em>Sorry if I don't use the words properly, I haven't learnt these things in English, only some of the words. Anyway, I'm practicing to one of my exams and sadly this task seemed more challanging for me than it should be. Some kind of explain would help a lot!</em></p> <p>10 meters of clothes have 6 holes in it.</p> <p>a) What kind of distribution does the number of holes per meter follow? (<em>I think it must be Poisson distribution)</em></p> <p>b) What's the probability that there's more than 10 holes in 5 meters of clothes?</p>
Ayman Hourieh
4,583
<p><strong>Hint</strong>: Define:</p> <p>$$ \varphi(a + b \omega) = \begin{cases} 0 &amp;: a \text{ even} \\ 1 &amp;: a \text{ odd} \end{cases} $$</p> <p>Show that this is a homomorphism from $\Bbb Z[\omega]$ to $\Bbb Z/2\Bbb Z$ and find its kernel. Apply the first isomorphism theorem.</p>
402,427
<p><em>Sorry if I don't use the words properly, I haven't learnt these things in English, only some of the words. Anyway, I'm practicing to one of my exams and sadly this task seemed more challanging for me than it should be. Some kind of explain would help a lot!</em></p> <p>10 meters of clothes have 6 holes in it.</p> <p>a) What kind of distribution does the number of holes per meter follow? (<em>I think it must be Poisson distribution)</em></p> <p>b) What's the probability that there's more than 10 holes in 5 meters of clothes?</p>
DonAntonio
31,254
<p>Denoting $\,\Bbb Z_p:=$ the prime field of characteristic $\,p\,$ , try to follow and prove the following:</p> <p>$$\Bbb Z[w]/(2,w)\cong\left(\Bbb Z[w]/(2)\right)/\left((2,w)/(2)\right)\cong\Bbb Z_2[w]/(w)\cong\Bbb Z_2$$</p>
1,936,043
<p>I would like to prove that the sequence $n^{(-1)^{n}}$ is divergent. </p> <p>My thoughts: I know $(-1)^n$ is divergent, but is $n$ raised to a divergent sequence still divergent? I am not sure how to give a proper proof; please help!</p>
fleablood
280,126
<p>Or to be direct...</p> <p>Let $r \in \mathbb R$. Let $\epsilon &gt; 0$ </p> <p>For any $M$ let $m &gt; \max (r + \epsilon, M); m$ even. So $m - r &gt; \epsilon &gt; 0$.</p> <p>Then $|m^{(-1)^m} - r| = |m -r|= m-r &gt; \epsilon$. So the sequence doesn't converge to any real $r$.</p>
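Numerically, the two subsequences behave exactly as this argument suggests (a quick illustration, not a proof):

```python
# Even-index terms n^{(-1)^n} = n blow up, while odd-index terms 1/n shrink,
# so the sequence cannot converge to any single real number.
def a(n):
    return n ** ((-1) ** n)

evens = [a(n) for n in range(2, 100, 2)]
odds = [a(n) for n in range(1, 100, 2)]
assert evens == list(range(2, 100, 2))               # a(2k) = 2k, unbounded
assert all(x <= 1 for x in odds) and odds[-1] < 0.02  # a(2k+1) = 1/(2k+1) -> 0
```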
506,394
<p>Let $A=\{g\in C([0,1]):\int_{0}^{1}|g(x)|dx&lt;1\}$. If $p\in [0,\infty]$, is $A$ an open subset of $(C([0,1]), \left\|{\cdot}\right\|_p)$?</p> <p>It is obvious that if $p=1$ then $A$ is open in $(C([0,1]), \left\|{\cdot}\right\|_1)$, because $A=B(0,1)$.</p> <p>I think $A$ is not open if $p&gt;1$. Any hint to show this?</p> <p>Thanks.</p>
njguliyev
90,209
<p>No, $A$ is open for $p&gt;1$. Let $g \in A$ and $\varepsilon = \frac12\left(1-\int_0^1|g(x)|dx\right)$. Then for any $f \in C([0,1])$ with $\|f\|_p &lt; \varepsilon$ we have $$\int_0^1|g(x)+f(x)|dx \le \int_0^1|g(x)|dx+\int_0^1|f(x)|dx \le \int_0^1|g(x)|dx+\|f\|_p &lt; 1.$$</p>
1,441,905
<blockquote> <p>Find the range of values of $p$ for which the line $ y=-4-px$ does not intersect the curve $y=x^{2}+2x+2p$</p> </blockquote> <p>I think I probably have to find the discriminant of the curve but I don't get how that would help.</p>
John_dydx
82,134
<p>Equate the two expressions for $y$ and then re-arrange to form a quadratic equation.</p> <p>For the equation to have no real root, the discriminant $b^2 - 4ac &lt; 0$ must hold. From here, you can find the range of values for $p$ for which the line does not meet the curve. </p>
1,915,450
<p>Can anyone help me to prove this? This is given as a fact, but I don't understand why it is true.</p> <blockquote> <p>For an integer $n$ greater than 1, let the prime factorization of $n$ be $$n=p_1^ap_2^bp_3^cp_4^d...p_k^m$$ Where a, b, c, d, ... and m are nonnegative integers, $p_1, p_2, ..., p_k$ are prime numbers. The number of divisors is $$d(n)=(a+1)(b+1)(c+1)....(m+1)$$</p> </blockquote>
SchrodingersCat
278,967
<p>Consider that all possible divisors of $n$ can be created by choosing from $p_1,p_2, \ldots, p_k$ in appropriate numbers.</p> <p>So, for creating a particular divisor, we can choose $1$ $p_1$ or $2$ $p_1$'s or $3$ $p_1$'s and so on till a choice of all $a$ $p_1$'s. Then again we have the choice of not choosing any $p_1$ at all. So this amounts to $a+1$ choices.</p> <p>Similarly for $p_2$, we have $b+1$ choices and in this way we can finally conclude that, for any $p_r$, $r=1,2,3 \ldots k$, we have $t'+1$ choices where $p_r$ is raised to the power $t'$ in the prime factorisation of $n$.</p> <p>Finally since all $p_i$'s are distinct, we need to multiply the choices to get $d(n)$.</p> <p>So total number of divisors possible $=d(n)=(a+1)(b+1)\ldots (m+1)$</p>
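A small script checking the formula against a direct divisor count (the helper names are just for illustration):

```python
from math import prod

def factorize(n):
    """Prime factorization as a dict {prime: exponent}, by trial division."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def d_formula(n):
    # Product of (exponent + 1) over the prime factorization, as in the answer.
    return prod(e + 1 for e in factorize(n).values())

def d_count(n):
    # Direct count of divisors of n.
    return sum(1 for k in range(1, n + 1) if n % k == 0)

for n in range(2, 500):
    assert d_formula(n) == d_count(n)
```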
4,327,729
<p>In this question, I would like to investigate the location of the absolute value in the arcsecant integral.</p> <p>Following <a href="https://math.stackexchange.com/questions/3735966/why-the-derivative-of-inverse-secant-has-an-absolute-value">this answer</a> and <a href="https://math.stackexchange.com/questions/1449228/deriving-the-derivative-formula-for-arcsecant-correctly?rq=1">this answer</a>, we know the following is true: <span class="math-container">$$ \frac{d}{dx}\sec^{-1}(x)=\frac{1}{|x|\sqrt{x^2-1}}. $$</span> Taking indefinite integrals of both sides, we get <span class="math-container">$$ \sec^{-1}(x)+C=\int \frac{1}{|x|\sqrt{x^2-1}} dx $$</span> How can one use this result to deduce that <span class="math-container">$$ \sec^{-1}(|x|)+C=\int \frac{1}{x\sqrt{x^2-1}} dx? $$</span> (Notice that the absolute values used to be in the denominator of the right hand side, and now they are in the argument of the <span class="math-container">$\sec^{-1}$</span> function.)</p> <p>What logical rule of deduction allows us to interchange the location of the absolute values on opposite sides of this equation?</p>
ryang
21,813
<p><a href="https://i.stack.imgur.com/SK1n5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SK1n5.png" alt="enter image description here" /></a></p> <p>For nonnegative <span class="math-container">$x,$</span> the required implication is self-evident; so, let's suppose that <span class="math-container">$x$</span> is negative, in which case <span class="math-container">$|x|=-x.$</span> Then, noting that <span class="math-container">$\left(\sec^{-1}(x)-\frac\pi2\right)$</span> is an odd function,</p> <p><span class="math-container">\begin{align}\sec^{-1}(|x|)&amp;=\sec^{-1}(-x)\\&amp;=\left(\sec^{-1}(-x)-\frac\pi2\right)+\frac\pi2\\&amp;=-\left(\sec^{-1}(x)-\frac\pi2\right)+\frac\pi2\\&amp;=-\sec^{-1}(x)+\pi\\&amp;=-\left(\int \frac{1}{|x|\sqrt{x^2-1}} \,\mathrm dx-C\right)+\pi\\&amp;=\int \frac{1}{x\sqrt{x^2-1}} \,\mathrm dx+C+\pi\\\sec^{-1}(|x|)+C_2&amp;=\int \frac{1}{x\sqrt{x^2-1}} \,\mathrm dx.\end{align}</span></p>
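A quick numeric spot-check of the key step <span class="math-container">$\sec^{-1}(|x|)=\pi-\sec^{-1}(x)$</span> for negative <span class="math-container">$x$</span> (a sketch; I take the principal branch <span class="math-container">$\sec^{-1}(x)=\arccos(1/x)$</span> with values in <span class="math-container">$[0,\pi]$</span>, which matches the convention in the derivation above):

```python
import math

def arcsec(x):
    # principal branch: sec^(-1)(x) = arccos(1/x), values in [0, pi], for |x| >= 1
    return math.acos(1.0 / x)

# For negative x, sec^(-1)(|x|) = pi - sec^(-1)(x), the identity derived above.
for x in [-1.5, -2.0, -10.0]:
    assert math.isclose(arcsec(abs(x)), math.pi - arcsec(x), rel_tol=1e-12)
```

For instance, `arcsec(2)` is π/3 while `arcsec(-2)` is 2π/3, and π − 2π/3 = π/3.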
2,868,047
<p>My question is in relation to a problem I am trying to solve <a href="https://math.stackexchange.com/questions/2867002/finding-mathbbpygx">here</a>. If $g(.)$ is a monotonically increasing function and $a &lt;b$, is it always true that $a&lt;g(a)&lt;g(b)&lt;b$? Why or why not?</p>
Sentient
193,446
<p>If I understand your problem correctly, you're looking for a generalization of the binomial distribution to $n &gt; 2$ where $n$ is the number of possible classes. The term for this is <a href="https://en.wikipedia.org/wiki/Multinomial_distribution" rel="nofollow noreferrer">a multinomial distribution</a>.</p> <p>We can view an $n = 2$ sided analog of a die as a coin. The probability of a single outcome is a Bernoulli distribution (e.g. $\Pr(H)$). The probability of multiple outcomes is a Binomial distribution (e.g. $\Pr(HT)$).</p> <p>For $n &gt; 2$, the probability of a single outcome is a Categorical distribution (e.g. $\Pr(1)$). The probability of multiple outcomes is a Multinomial distribution (e.g. $\Pr(1126)$).</p> <p>Remember that a multinomial distribution will only tell you the distribution of, say, 5 of the rolls being $1$. So if you're looking for pairs or triplets regardless of the class -- whether it's 1 or 2, you don't care -- you have to multiply your probability (if it's a fair die) or construct a sum in the case the die is not fair.</p>
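The multinomial pmf described above can be sketched in a few lines (the function name and example counts are mine, given only as an illustration):

```python
from math import factorial

def multinomial_pmf(counts, probs):
    """P(X1=c1, ..., Xk=ck) for n = sum(counts) trials with class probabilities probs."""
    n = sum(counts)
    coef = factorial(n)
    for c in counts:
        coef //= factorial(c)        # multinomial coefficient n!/(c1!...ck!)
    p = 1.0
    for c, q in zip(counts, probs):
        p *= q ** c
    return coef * p

# With n = 2 classes this reduces to the binomial: P(HT in two fair flips) = 1/2.
assert abs(multinomial_pmf([1, 1], [0.5, 0.5]) - 0.5) < 1e-15

# Fair six-sided die, 4 rolls: P(two 1's, one 2, one 6) = 4!/(2!1!1!) * (1/6)^4 = 12/1296.
assert abs(multinomial_pmf([2, 1, 1, 0, 0, 0], [1/6] * 6) - 12/1296) < 1e-15
```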
2,868,047
<p>My question is in relation to a problem I am trying to solve <a href="https://math.stackexchange.com/questions/2867002/finding-mathbbpygx">here</a>. If $g(.)$ is a monotonically increasing function and $a &lt;b$, is it always true that $a&lt;g(a)&lt;g(b)&lt;b$? Why or why not?</p>
awkward
76,172
<p>The OP is correct in saying that this is a generalization of the Birthday Problem and that it is harder because we are interested in cases where the number of people with the same birthday is greater than two. One approach is by way of exponential generating functions.</p> <p>Let's take a concrete example. Suppose we want to roll $10$ six-sided dice and find the probability that at least one number comes up $5$ or more times. It seems easier to consider the complementary event, i.e. all numbers come up $4$ or fewer times. There are $6^{10}$ possible sequences of die rolls, all of which we assume are equally likely. We would like to count the number of sequences in which no number comes up more than $4$ times. We generalize the problem a bit and think about a sequence of $n$ die rolls, and let the number of sequences in which no number appears more than $4$ times be $a_n$. </p> <p>The exponential generating function of $a_n$ is defined to be $$f(x) = \sum_{n=0}^{\infty} \frac{1}{n!} a_n x^n$$ Since each die has $6$ sides and each number appears no more than $4$ times, $$f(x) = \left( \sum_{i=0}^4 \frac{1}{i!} x^i \right)^6$$ The number we want in our problem, $a_{10}$, is $10!$ times the coefficient of $x^{10}$ when $f(x)$ is expanded. I suppose a pencil and paper solution is possible, but I took the easy way out and used a computer algebra system to find that the coefficient of $x^{10}$ is $2177 / 144$. Therefore the number of sequences of $10$ die rolls in which no number appears more than $4$ times is $$a_{10} = 10! \times \frac{2177}{ 144} = 54,860,400$$</p> <p>So the probability that no number appears more than $4$ times in $10$ rolls is $$p = \frac{a_{10}}{6^{10}} = 0.907291$$ and the probability that at least one number appears $5$ times or more is $$1-p = 0.0927093$$</p>
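The computer-algebra step above can be reproduced exactly with rational arithmetic (a sketch; the truncation degree and variable names are my choices):

```python
from math import factorial
from fractions import Fraction

# EGF truncated at degree 10: f(x) = (sum_{i=0}^{4} x^i / i!)^6
MAXDEG = 10
base = [Fraction(1, factorial(i)) for i in range(5)]

poly = [Fraction(1)]                      # start with the constant polynomial 1
for _ in range(6):                        # multiply in one factor per die face
    new = [Fraction(0)] * (MAXDEG + 1)
    for i, a in enumerate(poly):
        for j, b in enumerate(base):
            if i + j <= MAXDEG:
                new[i + j] += a * b
    poly = new

coeff = poly[10]
assert coeff == Fraction(2177, 144)       # the coefficient of x^10 quoted above

a10 = factorial(10) * coeff               # sequences with no face appearing 5+ times
assert a10 == 54_860_400

p = Fraction(a10) / 6**10                 # = 0.907291...
# probability at least one face appears 5 or more times in 10 rolls:
assert abs(float(1 - p) - 0.0927093) < 1e-6
```

An inclusion–exclusion count over faces appearing five or more times gives the same 54,860,400, which is a useful cross-check on the generating-function answer.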
2,011,181
<blockquote> <p><strong>Question:</strong> Find the area of the shaded region given $EB=2,CD=3,BC=10$ and $\angle EBC=\angle BCD=90^{\circ}$.</p> </blockquote> <p><a href="https://i.stack.imgur.com/BFf2h.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BFf2h.jpg" alt="Diagram"></a></p> <p>I first dropped an altitude from $A$ to $BC$ forming two cases of similar triangles. Let the point where the altitude meets $BC$ be $X$. Thus, we have$$\triangle BAX\sim\triangle BDC\\\triangle CAX\sim\triangle CEB$$ Using the proportions, we get$$\frac {BA}{BD}=\frac {AX}{CD}=\frac {BX}{BC}\\\frac {CA}{CE}=\frac {AX}{EB}=\frac {CX}{CB}$$ But I'm not too sure what to do next from here. I feel like I'm <em>very</em> close, but I just can't figure out $AX$.</p>
Community
-1
<p><strong>HINT</strong></p> <p>$\frac {BA}{BD} = \frac {BE}{BE + CD}$ because $\triangle AEB\sim\triangle ACD$</p>
4,219,303
<p>I'm trying to solve <span class="math-container">$y''-3y^2 =0$</span>; I use the substitution <span class="math-container">$w=\frac{dy}{dx}$</span>.</p> <p>Using the chain rule I have: <span class="math-container">$$\frac{d^2y}{dx^2} = \frac{dw}{dx} = \frac{dw}{dy}\cdot \frac{dy}{dx} = w \cdot \frac{dw}{dy}$$</span> So I can build the system: <span class="math-container">$$w\cdot \frac{dw}{dy} = 3y^2$$</span> <span class="math-container">$$\frac{dy}{dx} = w$$</span></p> <p>But I'm not sure how to solve the system. I appreciate your help.</p>
Yiorgos S. Smyrlis
57,021
<p><span class="math-container">$$ y''-3y^2=0\quad\Longrightarrow\quad y'y''-3y^2y'=0 \quad\Longrightarrow\quad \frac{1}{2}(y')^2-y^3=c \quad\Longrightarrow\quad y'=\pm\sqrt{2y^3+c'}. $$</span></p>
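Taking <span class="math-container">$c'=0$</span> in the last equation and integrating gives the explicit particular solution <span class="math-container">$y=2/(x+C)^2$</span>. A quick finite-difference check of <span class="math-container">$y=2/x^2$</span> against the original ODE (my sketch, not part of the answer above):

```python
# One explicit solution of y'' = 3y^2 (the c' = 0 case, with C = 0) is y = 2/x^2.
def y(x):
    return 2.0 / x**2

h = 1e-4
for x in [1.0, 2.0, 3.5]:
    # central second difference approximates y''(x)
    second = (y(x + h) - 2*y(x) + y(x - h)) / h**2
    assert abs(second - 3 * y(x)**2) < 1e-5
```

For instance at x = 1 both sides equal 12, since y'' = 12/x⁴ and 3y² = 12/x⁴.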
19,356
<p>So I was wondering: are there any general differences in the nature of "what every mathematician should know" over the last 50-60 years? I'm not just talking of small changes where new results are added on to old ones, but fundamental shifts in the nature of the knowledge and skills that people are expected to acquire during or before graduate school.</p> <p>To give an example (which others may disagree with), one secular (here, secular means "trend over time") change seems to be that mathematicians today are expected to feel a lot more comfortable with picking up a new abstraction, or a new abstract formulation of an existing idea, even if the process of abstraction lies outside that person's domain of expertise. For example, even somebody who knows little of category theory would not be expected to bolt if confronted with an interpretation of a subject in his/her field in terms of some new categories, replete with objects, morphisms, functors, and natural transformations. Similarly, people would not blink much at a new algebraic structure that behaves like groups or rings but is a little different.</p> <p>My sense would be that the expectations and abilities in this regard have improved over the last 50-60 years, partly because of the development of "abstract nonsense" subjects including category theory, first-order logic, model theory, universal algebra etc., and partly because of the increasing level of abstraction and the need for connecting frameworks and ideas even in the rest of mathematics. 
I don't really know much about how mathematics was taught thirty years ago, but I surmised the above by comparing highly accomplished professional mathematicians who probably went to graduate school thirty years ago against today's graduate students.</p> <p>Some other guesses:</p> <ol> <li>Today, people are expected to have a lot more of a quick idea of a larger number of subjects, and less of an in-depth understanding of "Big Proofs" in areas outside their subdomain of expertise. Basically, the Great Books or Great Proofs approach to learning may be declining. The rapid increase in availability of books, journals, and information via the Internet (along with the existence of tools such as Math Overflow) may be making it more profitable to know a bit of everything rather than master big theorems outside one's area of specialization.</li> <li>Also, probably a thorough grasp of multiple languages may be becoming less necessary, particularly for people who are using English as their primary research language. Two reasons: first, a lot of materials earlier available only in non-English languages are now available as English translations, and second, translation tools are much more widely available and easy-to-use, reducing the gains from mastery of multiple languages.</li> </ol> <p>These are all just conjectures. Contradictory information and ideas about other possible secular trends would be much appreciated.</p> <p>NOTE: This might be too soft for Math Overflow! Moderators, please feel free to close it if so.</p>
Dominic van der Zypen
8,628
<p>In earlier times, it seems that great emphasis was put on technical calculation, such as checking for convergence, handling logarithms, and so on (this has been pointed out in an earlier answer).</p> <p>As for today, the only thing I am convinced every mathematician should know is how to formulate correct proofs and be able to come up without hesitation with <strong>correct and readable proofs</strong> for simple statements such as</p> <blockquote> <blockquote> <p>If $G,H$ are groups, and $f:G\to H$ is a group homomorphism, then $\text{ker}(f) =\{g\in G:f(g) = 1_H\}$ is a subgroup of $G$.</p> </blockquote> </blockquote> <p>Mathematicians should all be able to handle quantifiers with care -- for instance, see the difference between continuity and uniform continuity.</p>
1,995,663
<p>My brother in law and I were discussing the four color theorem; neither of us are huge math geeks, but we both like a challenge, and tonight we were discussing the four color theorem and if there were a way to disprove it.</p> <p>After some time scribbling on the back of an envelope and about an hour of trial-and-error attempts in Sumopaint, I can't seem to come up with a pattern that only uses four colors for this "map". Can anyone find a way (algorithmically or via trial and error) to color it so it fits the four color theorem?</p> <p><a href="https://i.stack.imgur.com/rlVrW.png"><img src="https://i.stack.imgur.com/rlVrW.png" alt="&quot;five color&quot; graph"></a></p>
PellMel
346,817
<p>Although I wouldn't call it an <em>algorithm</em>, you can construct a solution equivalent to those of the other three answers via this rational approach:</p> <ol> <li><p>Assign a color <em>C<sub>1</sub></em> to the outer ring. It's a promising candidate because of the symmetry and topology of the figure.</p></li> <li><p>Observe that</p> <ul> <li>the outer ring has no boundary in common with the inner disk, so <em>C<sub>1</sub></em> can be re-used there</li> <li>each region of the inner disk borders the other two, so these three regions must each have a distinct color</li> </ul></li> <li><p>Therefore choose two more colors <em>C<sub>2</sub></em> and <em>C<sub>3</sub></em>, and assign <em>C<sub>1</sub></em>, <em>C<sub>2</sub></em>, and <em>C<sub>3</sub></em> as the colors of the regions of the inner disk. It doesn't matter which color is assigned to which region, as all arrangements are interconvertible by symmetry operations on the figure, as partially colored in step (1).</p></li> <li><p>Observe that there is exactly one segment of the middle ring that borders both <em>C<sub>2</sub></em> and <em>C<sub>3</sub></em> from the inner disk; it also, perforce, borders <em>C<sub>1</sub></em> from the outer ring. That segment requires a fourth color, <em>C<sub>4</sub></em>.</p></li> <li><p>The color assignments made to this point leave only one choice each (without using a fifth color) for the remaining middle-ring segments other than the one opposite the region assigned in the previous step. Having made those assignments, two alternatives remain for the final region; either can be assigned.</p></li> </ol>
1,995,663
<p>My brother in law and I were discussing the four color theorem; neither of us are huge math geeks, but we both like a challenge, and tonight we were discussing the four color theorem and if there were a way to disprove it.</p> <p>After some time scribbling on the back of an envelope and about an hour of trial-and-error attempts in Sumopaint, I can't seem to come up with a pattern that only uses four colors for this "map". Can anyone find a way (algorithmically or via trial and error) to color it so it fits the four color theorem?</p> <p><a href="https://i.stack.imgur.com/rlVrW.png"><img src="https://i.stack.imgur.com/rlVrW.png" alt="&quot;five color&quot; graph"></a></p>
Tanner Swett
13,524
<p>A few people have commented that all of the answers given so far have been identical up to symmetry (either by exchanging colors, or by using a symmetry of the uncolored diagram). Here's a proof that the answer that everyone has given is the only possible answer, up to symmetry.</p> <p>Let me number the regions, like so:</p> <p><a href="https://i.stack.imgur.com/3k0Q8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3k0Q8.png" alt="Numbered map"></a></p> <p>Without loss of generality, assume that region 1 is red, region 2 is green, and region 3 is blue.</p> <p>Is region 10 yellow? I will prove that it is not. Suppose that region 10 <em>is</em> yellow. Then since region 5 borders regions 1 (red), 2 (green), and 10 (yellow), region 5 must be blue. Next, since region 6 borders regions 2 (green), 5 (blue), and 10 (yellow), region 6 must be red. Now region 7 borders regions 2 (green), 3 (blue), 6 (red), and 10 (yellow), so it cannot be colored. This proves that region 10 is not yellow.</p> <p>We now know that region 10 must be red, green, or blue. Without loss of generality, assume that region 10 is red.</p> <p>Now we can find:</p> <ul> <li>Region 7 borders regions 2 (green), 3 (blue), and 10 (red). Therefore, region 7 is yellow.</li> <li>Region 6 borders regions 2 (green), 7 (yellow), and 10 (red). Therefore, region 6 is blue.</li> <li>Region 5 borders regions 1 (red), 2 (green), 6 (blue), and 10 (red). Therefore, region 5 is yellow.</li> <li>Region 8 borders regions 3 (blue), 7 (yellow), and 10 (red). Therefore, region 8 is green.</li> <li>Region 9 borders regions 1 (red), 3 (blue), 8 (green), and 10 (red). Therefore, region 9 is yellow.</li> </ul> <p>At this point, the only uncolored region is region 4. Its neighbors are regions 1 (red), 5 (yellow), 9 (yellow), and 10 (red). We can complete the coloring by choosing either green or blue. Both choices will give the same coloring, up to symmetry.</p>
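The case analysis above can be brute-forced by computer (a sketch; the edge list is my transcription of the bordering relations stated in the proof, so treat it as an assumption about the figure):

```python
from itertools import product

# Region numbering as in the proof: 1,2,3 = inner disk (mutually adjacent),
# 4..9 = middle-ring segments, 10 = outer ring.
edges = [(1,2),(1,3),(2,3),                     # inner disk
         (4,5),(5,6),(6,7),(7,8),(8,9),(9,4),   # middle-ring cycle
         (1,4),(1,5),(2,5),(2,6),(2,7),(3,7),(3,8),(3,9),(1,9),  # disk-to-ring
         (4,10),(5,10),(6,10),(7,10),(8,10),(9,10)]              # ring-to-outer
edges0 = [(a - 1, b - 1) for a, b in edges]

colorings = [c for c in product(range(4), repeat=10)
             if all(c[a] != c[b] for a, b in edges0)]

# Region 10 never receives the "fourth" color: it always reuses an inner-disk color,
# exactly as the first step of the case analysis argues.
assert all(c[9] in {c[0], c[1], c[2]} for c in colorings)

# 24 inner-disk labelings x 3 choices for region 10 x 2 choices for region 4:
assert len(colorings) == 144
```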
3,251,851
<p>I need help solving these simultaneous equations:</p> <p><span class="math-container">$$a^2 - b^2 = -16$$</span> <span class="math-container">$$2ab = 30$$</span></p>
José Carlos Santos
446,262
<p>They're <em>not</em> the same. The standard basis of <span class="math-container">$\mathbb R^n$</span> is<span class="math-container">$$\bigl\{(1,0,0,\ldots,0,0),(0,1,0,\ldots,0,0),\ldots,(0,0,0,\ldots,1,0),(0,0,0,\ldots,0,1)\bigr\}.$$</span>It turns out that it is <em>an</em> orthonormal basis, but there are others. For instance, <span class="math-container">$\left\{\left(\frac35,\frac45\right),\left(-\frac45,\frac35\right)\right\}$</span> is also an orthonormal basis of <span class="math-container">$\mathbb R^2$</span>.</p>
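A quick check that the second basis really is orthonormal (a sketch): both vectors have unit length and their dot product vanishes.

```python
import math

# The non-standard orthonormal basis of R^2 from the answer above.
u = (3/5, 4/5)
v = (-4/5, 3/5)

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

assert math.isclose(dot(u, u), 1.0)                  # |u| = 1
assert math.isclose(dot(v, v), 1.0)                  # |v| = 1
assert math.isclose(dot(u, v), 0.0, abs_tol=1e-12)   # u is orthogonal to v
```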
1,210,018
<p>$$ \begin{bmatrix} 1 &amp; 1 \\ 1 &amp; 1 \\ \end{bmatrix} \begin{Bmatrix} v_1 \\ v_2 \\ \end{Bmatrix}= \begin{Bmatrix} 0 \\ 0 \\ \end{Bmatrix}$$</p> <p>How can I solve this?</p> <p>I found $$v_1+v_2=0$$ $$v_1+v_2=0$$</p> <p>So I can't solve it uniquely for $v_1$ and $v_2$.</p>
Ruby
227,036
<p>There are infinitely many solutions: for every $v_1$, taking $v_2=-v_1$ gives a solution.</p>
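A small sketch making the point concrete: every vector of the form $(t,-t)$ is sent to zero by the matrix.

```python
# Every (t, -t) lies in the null space of [[1, 1], [1, 1]].
A = [[1, 1], [1, 1]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

for t in [-3.0, 0.0, 0.5, 7.0]:
    assert matvec(A, [t, -t]) == [0.0, 0.0]
```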
2,965,821
<p>Let <span class="math-container">$(x_k)_{k\in N}$</span> <span class="math-container">$\subset \mathbb{R^4} $</span>.</p> <p>Then there's this series, which I have to check for convergence and its limit.</p> <p><a href="https://i.stack.imgur.com/o6zpZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o6zpZ.png" alt="enter image description here"></a></p> <p>I think that <span class="math-container">$(-1)^k * k$</span> diverges, because of the geometric series, which is saying that if <span class="math-container">$|q|^n$</span> <span class="math-container">$\geq 1$</span>, the series diverges.</p> <p>Now for the second part we have <span class="math-container">$(-1)^k * (1/k)$</span>, which converges, because <span class="math-container">$1/k$</span> converges towards 0. </p> <p><span class="math-container">$(-1)^k * k^{300}$</span> diverges too, for the same reason as the first series. </p> <p><span class="math-container">$\arctan(k) \to \pi/2$</span>; because the terms do not tend to 0 as <span class="math-container">$k$</span> tends to infinity, the divergence test tells us that the infinite series diverges. </p> <p>I don't know if that is correct at all...</p>
user
505,767
<p><strong>HINT</strong></p> <p>Recall that by chain rule</p> <p><span class="math-container">$$f(x)=e^{g(x)}\implies f'(x)=g'(x)\cdot e^{g(x)}$$</span></p>
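A quick finite-difference check of the hint, taking <span class="math-container">$g(x)=x^2$</span> as a sample (my choice of <span class="math-container">$g$</span>, purely for illustration):

```python
import math

# The hint with g(x) = x^2: f(x) = e^(x^2), so f'(x) = 2x * e^(x^2).
def f(x):
    return math.exp(x**2)

def fprime(x):
    return 2 * x * math.exp(x**2)   # g'(x) * e^{g(x)}

h = 1e-6
for x in [-1.0, 0.3, 1.2]:
    central = (f(x + h) - f(x - h)) / (2 * h)   # central difference approximation
    assert math.isclose(central, fprime(x), rel_tol=1e-7)
```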
2,965,821
<p>Let <span class="math-container">$(x_k)_{k\in N}$</span> <span class="math-container">$\subset \mathbb{R^4} $</span>.</p> <p>Then there's this series, which I have to check for convergence and its limit.</p> <p><a href="https://i.stack.imgur.com/o6zpZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o6zpZ.png" alt="enter image description here"></a></p> <p>I think that <span class="math-container">$(-1)^k * k$</span> diverges, because of the geometric series, which is saying that if <span class="math-container">$|q|^n$</span> <span class="math-container">$\geq 1$</span>, the series diverges.</p> <p>Now for the second part we have <span class="math-container">$(-1)^k * (1/k)$</span>, which converges, because <span class="math-container">$1/k$</span> converges towards 0. </p> <p><span class="math-container">$(-1)^k * k^{300}$</span> diverges too, for the same reason as the first series. </p> <p><span class="math-container">$\arctan(k) \to \pi/2$</span>; because the terms do not tend to 0 as <span class="math-container">$k$</span> tends to infinity, the divergence test tells us that the infinite series diverges. </p> <p>I don't know if that is correct at all...</p>
Mark Bennet
2,906
<p>If <span class="math-container">$y=e^x$</span> then <span class="math-container">$y'=e^x=y$</span> (we can also write <span class="math-container">$\frac {dy}{dx}$</span> instead of <span class="math-container">$y'$</span>, they are different notations for the same thing). This relationship can be used in the chain rule so, for example if <span class="math-container">$y=e^{ax}$</span> then <span class="math-container">$y'=(ax)'e^{ax}=ae^{ax}$</span>, and I use this with <span class="math-container">$a=-1$</span> below.</p> <hr> <p>The importance of the exponential function comes in part because it is the basic solution to <span class="math-container">$y'=y$</span>. Indeed if <span class="math-container">$y'=y$</span> then consider <span class="math-container">$z=ye^{-x}$</span>, which we can differentiate using the chain rule and product rule to give <span class="math-container">$$z'=y'e^{-x}-ye^{-x}=0$$</span> and since the derivative of <span class="math-container">$z$</span> is zero we have <span class="math-container">$z=C$</span> so that <span class="math-container">$ye^{-x}=C$</span> and <span class="math-container">$y=Ce^x$</span>, where <span class="math-container">$C$</span> is any constant.</p>
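The constant <span class="math-container">$z=ye^{-x}$</span> can also be watched numerically (a small sketch): integrate <span class="math-container">$y'=y$</span> with a classical Runge–Kutta stepper from <span class="math-container">$y(0)=2$</span> and check that <span class="math-container">$z$</span> stays at <span class="math-container">$C=2$</span> the whole way.

```python
import math

def rk4_step(y, h):
    # For the autonomous ODE y' = f(y) = y, the four classical RK4 slopes are:
    k1 = y
    k2 = y + h/2 * k1
    k3 = y + h/2 * k2
    k4 = y + h * k3
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

y, h = 2.0, 0.01
for step in range(1, 101):                       # integrate from x = 0 to x = 1
    y = rk4_step(y, h)
    x = step * h
    assert abs(y * math.exp(-x) - 2.0) < 1e-8    # z = y e^{-x} stays at C = 2

assert abs(y - 2.0 * math.e) < 1e-6              # so y(1) = C e^1, i.e. y = C e^x
```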
704,073
<p>I encountered something interesting when trying to differentiate $F(x) = c$.</p> <p>Consider: $\lim_{x→0}\frac0x$. </p> <p>I understand that for any $x$, no matter how incredibly small, we will have $0$ as the quotient. But don't things change when one takes matters to infinitesimals? I.e. why is the function $\frac0x = f(x)$, not undefined at $x=0$?</p> <p>I would appreciate a strong logical argument for why the limit stays at $0$. </p>
Emo
127,234
<p>When we deal with limits, we first simplify the expression after the $\lim$ sign. In this case we have $\frac{0}{x}=0$ for every $x\neq 0$. Then we have $\lim_{x\to0}{\frac{0}{x}}=\lim_{x\to0}0=0$.</p>
704,073
<p>I encountered something interesting when trying to differentiate $F(x) = c$.</p> <p>Consider: $\lim_{x→0}\frac0x$. </p> <p>I understand that for any $x$, no matter how incredibly small, we will have $0$ as the quotient. But don't things change when one takes matters to infinitesimals? I.e. why is the function $\frac0x = f(x)$, not undefined at $x=0$?</p> <p>I would appreciate a strong logical argument for why the limit stays at $0$. </p>
JMCF125
65,964
<p>The limits are not about a value at a point, but about the values <strong>approaching</strong> that point.</p> <blockquote> <p>I.e. why is the function $\frac0x = f(x)$, not undefined at $x=0$?</p> </blockquote> <p>It <strong>is</strong> undefined at that point. However, its "neighbourhood" is defined, and that's what the limit gives you.</p> <p>Thus $\lim_{x\to0}\frac0x=0$ means that $x$ approaching $0$ from both sides results in the same value, $0$; the limit must be defined from both sides and equal, i.e. the function is continuous there, $\lim_{x\to0^-}\frac0x=\lim_{x\to0^+}\frac0x=\lim_{x\to0}\frac0x=0$.</p> <h2>Edit:</h2> <p>This function ($f(x)=\frac0x$), like $\frac1x$, is continuous in its domain, $\mathbb R\setminus\{0\}$. (It can't really be continuous for $x=0$, because $0$ is outside its domain.) And so I see the confusion, as $\lim_{x\to0^+}\frac1x=+\infty$. One could naïvely think this is because $\frac10=\infty$, but that is (if taken literally) <a href="https://en.wikipedia.org/wiki/Extended_real_number_line" rel="nofollow">outside</a> the standard definition of the real numbers, and regarding limits, an abuse of notation for $\lim_{x\to0^+}\frac1x$. Unknowingly, one might have expected $\lim_{x\to0}\frac0x$ to be a thing of that sort ($\infty$, or at least an undefined limit), but as I said in the beginning, that's not what limits are about.</p> <p>Besides what I have said, I was going to include something with the <a href="https://en.wikipedia.org/wiki/%28%CE%B5,_%CE%B4%29-definition_of_limit" rel="nofollow">$(\varepsilon,\delta)$-definition of limits</a>, but I don't know it very well and it is quite confusing. So, I'll follow the initial infinitesimal spirit of Calculus, now present in the more intuitive <a href="https://en.wikipedia.org/wiki/Non-standard_analysis" rel="nofollow">Non-standard analysis</a>. We know $\forall x\in\mathbb R\setminus\{0\}, \frac0x=0$. Imagine $x$ is a very small number. No matter whether $x$ is positive or negative, and no matter how small, $\frac0x$ will be $0$. That means the stated property extends beyond $\mathbb R$. The limit is essentially another interpretation of that, avoiding numbers smaller than all reals by using numbers without a fixed value ($x\to0$ instead of a fixed infinitesimal $x$). Sorry for not adding a formal theoretical basis to this yet; I'll try to do it ASAP.</p>
704,073
<p>I encountered something interesting when trying to differentiate $F(x) = c$.</p> <p>Consider: $\lim_{x→0}\frac0x$. </p> <p>I understand that for any $x$, no matter how incredibly small, we will have $0$ as the quotient. But don't things change when one takes matters to infinitesimals? I.e. why is the function $\frac0x = f(x)$, not undefined at $x=0$?</p> <p>I would appreciate a strong logical argument for why the limit stays at $0$. </p>
Kaladin
133,789
<p>The reason is that if you look at the function $f(x,y)=y/x$, then the limit as you approach $(0,0)$ should be invariant under the path you take. In your case you walk along the path where $y=0$ (constant $0$), but when approaching along the path $y=x$ it is constant $1$. So we say it is not defined.</p>
865,598
<p>How can I calculate this value?</p> <p>$$\cot\left(\sin^{-1}\left(-\frac12\right)\right)$$</p>
Jason
164,082
<p>Draw a right triangle (in the x>0,y&lt;0 quadrant) with opposite edge -1 and hypotenuse 2. Then the adjacent side is $\sqrt{2^2-1^2}=\sqrt{3}$. Cotangent is the ratio of the adjacent side to the opposite side, so the value is $\frac{\sqrt{3}}{-1}=-\sqrt{3}$.</p>
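Numerically, with $\sin^{-1}(-\tfrac12)=-\pi/6$, the same value falls out of $\cot\theta = 1/\tan\theta$ (a quick sketch):

```python
import math

theta = math.asin(-0.5)          # sin^(-1)(-1/2) = -pi/6
cot = 1.0 / math.tan(theta)      # cot(theta) = 1/tan(theta)

assert math.isclose(theta, -math.pi / 6)
assert math.isclose(cot, -math.sqrt(3))   # adjacent/opposite = sqrt(3)/(-1)
```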
3,773,856
<p>I'm having trouble with part of a question on Cardano's method for solving cubic polynomial equations. This is a multi-part question, and I have been able to answer most of it. But I am having trouble with the last part. I think I'll just post here the part of the question that I'm having trouble with.</p> <p>We have the depressed cubic equation : <span class="math-container">\begin{equation} f(t) = t^{3} + pt + q = 0 \end{equation}</span> We also have what I believe is the negative of the discriminant : <span class="math-container">\begin{equation} D = 27 q^{2} + 4p^{3} \end{equation}</span> We assume <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are both real and <span class="math-container">$D &lt; 0$</span>. We also have the following polynomial in two variables (<span class="math-container">$u$</span> and <span class="math-container">$v$</span>) that results from a variable transformation <span class="math-container">$t = u+v$</span> : <span class="math-container">\begin{equation} u^{3} + v^{3} + (3uv + p)(u+v) + q = 0 \end{equation}</span> You also have the quadratic polynomial equation : <span class="math-container">\begin{equation} x^{2} + qx - \frac{p^{3}}{27} = 0 \end{equation}</span> The solutions to the 2-variable polynomial equation satisfy the following constraints : <span class="math-container">\begin{equation} u^{3} + v^{3} = -q \end{equation}</span> <span class="math-container">\begin{equation} uv = -\frac{p}{3} \end{equation}</span> The first section of this part of the larger question asks to prove that the solutions of the quadratic equation are non-real complex conjugates. Here the solutions to the quadratic are equal to <span class="math-container">$u^{3}$</span> and <span class="math-container">$v^{3}$</span> (this relationship between the quadratic polynomial and the polynomial in two variables was proven in an earlier part of the question). I was able to do this part. 
The second part of this sub-question is what I'm having trouble with.</p> <p>The question says, let : <span class="math-container">\begin{equation} u = r\cos(\theta) + ir\sin(\theta) \end{equation}</span> <span class="math-container">\begin{equation} v = r\cos(\theta) - ir\sin(\theta) \end{equation}</span> The question then asks the reader to prove that the depressed cubic equation has three real roots : <span class="math-container">\begin{equation} 2r\cos(\theta) \text{ , } 2r\cos\left( \theta + \frac{2\pi}{3} \right) \text{ , } 2r\cos\left( \theta + \frac{4\pi}{3} \right) \end{equation}</span> In an earlier part of the question they had the reader prove that given : <span class="math-container">\begin{equation} \omega = \frac{-1 + i\sqrt{3}}{2} \end{equation}</span> s.t. : <span class="math-container">\begin{equation} \omega^{2} = \frac{-1 - i\sqrt{3}}{2} \end{equation}</span> and : <span class="math-container">\begin{equation} \omega^{3} = 1 \end{equation}</span> that if <span class="math-container">$(u,v)$</span> is a root of the polynomial in two variables then so are : <span class="math-container">$(u\omega,v\omega^{2})$</span> and <span class="math-container">$(u\omega^{2},v\omega)$</span>. I think that the part of the question I'm having trouble with is similar. I suspect that : <span class="math-container">\begin{equation} 2r \cos\left( \theta + \frac{2\pi}{3} \right) = u\omega + v\omega^{2} \text{ or } u\omega^{2} + v\omega \tag{1} \end{equation}</span> and : <span class="math-container">\begin{equation} 2r \cos\left( \theta + \frac{4\pi}{3} \right) = u\omega + v\omega^{2} \text{ or } u\omega^{2} + v\omega \tag{2} \end{equation}</span> I have derived that : <span class="math-container">\begin{equation} \omega = \cos(\phi) + i\sin(\phi) \end{equation}</span> where <span class="math-container">$\phi = \frac{2\pi}{3}$</span>. 
Also : <span class="math-container">\begin{equation} \omega^{2} = \cos(2\phi) + i \sin(2\phi) \end{equation}</span> So that the goal of the question may be to prove equations <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>. I have tried to do this but haven't been able to.</p> <p>Am I approaching this question in the correct way ? If I am approaching it the right way can someone show me how to use trigonometric identities to prove equations #1 and #2 ?</p>
dan_fulea
550,003
<p>Here is my way to see the algebra beyond the solution of the cubic. It is based on the known algebraic identity: <span class="math-container">$$ \tag{$*$} t^3+x^3+y^3-3txy =(t+x+y)(t+\omega x+\omega^2y)(t+\omega^2 x+\omega y)\ . $$</span> Then with the notations from the OP, taking <span class="math-container">$x,y$</span> to be <span class="math-container">$-u,-v$</span>: <span class="math-container">$$ \begin{aligned} 0 &amp;=t^3+pt+q\\ &amp;=t^3-3tuv-u^3-v^3\\ &amp;=(t-u-v)(t-\omega u-\omega^2 v)(t-\omega^2 u-\omega v)\ . \end{aligned} $$</span> So the roots of the cubic are <span class="math-container">$u\omega^k + v\omega^{2k}=u\omega^k + v\bar\omega^k$</span>, for <span class="math-container">$k$</span> among <span class="math-container">$0,1,2$</span>.</p> <p>Now consider <span class="math-container">$u,v$</span> to be <span class="math-container">$r(\cos\theta\pm i\sin\theta)$</span>. Then the root <span class="math-container">$u+v$</span> is immediately seen to be <span class="math-container">$2r\cos \theta$</span>.</p> <p>The other two are equally simple, for instance: <span class="math-container">$$ \begin{aligned} u\omega +v\bar\omega &amp;= r(\cos\theta+ i\sin\theta)(\cos(2\pi/3)+ i\sin(2\pi/3)) \\ &amp;\ + r(\cos\theta- i\sin\theta)(\cos(2\pi/3)- i\sin(2\pi/3)) \\[2mm] &amp;= r(\cos(\theta+2\pi/3) + i\sin(\theta+2\pi/3)) \\ &amp;\ + r(\cos(-\theta-2\pi/3) + i\sin(-\theta-2\pi/3)) \\[2mm] &amp;= 2r\cos(\theta+2\pi/3)\ . \end{aligned} $$</span></p>
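The trigonometric root formula can be spot-checked numerically. The sample cubic below, <span class="math-container">$t^3-3t+1$</span> (so <span class="math-container">$p=-3$</span>, <span class="math-container">$q=1$</span>, <span class="math-container">$D=27-108=-81&lt;0$</span>), is my choice; everything else follows the recipe in the question:

```python
import cmath, math

# Sample depressed cubic with D < 0: t^3 - 3t + 1, i.e. p = -3, q = 1.
p, q = -3.0, 1.0
assert 27*q**2 + 4*p**3 < 0            # D < 0

# u^3 is a root of x^2 + q x - p^3/27 = 0; for D < 0 its discriminant
# q^2 + 4p^3/27 is negative, so u^3 is a non-real complex number.
u3 = (-q + cmath.sqrt(q*q + 4*p**3/27)) / 2
r = abs(u3) ** (1/3)                   # u = r(cos θ + i sin θ)
theta = cmath.phase(u3) / 3

# The three claimed real roots 2r cos(θ + 2πk/3) all satisfy the cubic:
for k in range(3):
    t = 2 * r * math.cos(theta + 2 * math.pi * k / 3)
    assert abs(t**3 + p*t + q) < 1e-9
```

Here `r` comes out as 1 and `theta` as 2π/9, so the roots are 2 cos(2π/9), 2 cos(2π/9 + 2π/3), and 2 cos(2π/9 + 4π/3), matching the formulas above.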
2,837,934
<p>A cyclist falls behind a motorcyclist by $500$ meters every minute; because of that he takes $2$ hours and $42$ minutes more than the motorcyclist to cover $52$ km. Find both of their speeds.</p> <p>My approach: $v_2-v_1=30\,km/h$ (converted $500$ meters per minute to km/h)</p> <p>$v_2=52/t$<br> $v_1=52/(t+2.42)$</p> <p>Then I plug them into the first formula, which seems wrong. How can I solve this?</p>
Ross Millikan
1,827
<p>$2$ hours and $42$ minutes is not $2.42$ hours. It is $2.7$ hours. The denominator in your last equation should be $t+2.7.$ Otherwise you are doing fine.</p>
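With the corrected $2.7$ h, the setup $v_2-v_1=30$, $52/v_1-52/v_2=2.7$ reduces to the quadratic $2.7v_1^2+81v_1-1560=0$. A quick numeric check (my sketch; the variable names follow the question):

```python
import math

# v2 - v1 = 30 km/h and 52/v1 - 52/v2 = 2.7 h give 2.7*v1^2 + 81*v1 - 1560 = 0.
a, b, c = 2.7, 81.0, -1560.0
v1 = (-b + math.sqrt(b*b - 4*a*c)) / (2*a)   # cyclist's speed, km/h
v2 = v1 + 30                                  # motorcyclist's speed, km/h

assert math.isclose(v1, 40/3)                 # ~13.33 km/h
assert math.isclose(v2, 130/3)                # ~43.33 km/h
assert math.isclose(52/v1 - 52/v2, 2.7)       # the 2.7-hour gap checks out
```

So the cyclist rides at 40/3 km/h (3.9 h for the 52 km) and the motorcyclist at 130/3 km/h (1.2 h).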
14,726
<p>I am a tutor for a student and I work with him 7 days a week, for about 2-3 hours a day. The student severely struggles with math, although I am a tutor for every subject (he is in high school). His first chemistry exam he got a 77, and I brought it up to a 96. I studied with him for history and he received 100 on that test. However, his first math test was a 73. I spoke with the parents and told them that the homework he is receiving is very different (in terms of difficulty than the exam). The homework is much easier. So I decided that despite how easy his homework is, I will come up with challenging problems that are similar to his exam problems.</p> <p>He had the test today and said he really messed up on it. I spent 4 hours yesterday and 4 hours the day before preparing with him. I gave him a mock exam, I came up with questions that are similar to his review sheet. I did everything I thought I could to help him. What should I do?</p> <p>I noticed that during the mock exam, he could barely answer any questions without turning to me to ask for clarification and then he would stop midway and though his logic was sometimes correct, he would make silly errors.</p> <p>Is there anything I can do in order to fix this situation? I feel responsible for his bad grade. What more can I offer to help him, or what more can I do in order to make him succeed?</p> <p>I feel like I failed him, and that it's my fault. During the sessions I am very attentive and any small thing he doesn't understand, I make sure he gets it. So I don't know what to do. I've never experienced this before. </p>
Community
-1
<p>A raw score on an exam doesn't mean anything in and of itself, so we have no context for evaluating what a 73% means. If this is a class where the grading scale is 90%=A, 80%=B, 70%=C, then this student passed his math exam, which doesn't seem like a bad outcome for someone who really struggles in math. Grade inflation notwithstanding, not everyone is going to get an A in every subject.</p> <p><em>I spent 4 hours yesterday and 4 hours the day before preparing with him.</em></p> <p>You don't say whether this is college or high school. If it's college, then the typical expectation would be that for a 4-unit math class, the student would spend about 8 hours a week outside of class on the work. So 8 hours of work (ordinarily independent work) would be a typical week. And 8 hours a week isn't a maximum, it's more like a minimum. For a student who is particularly bad at a certain subject, and who wants to get a good grade like a B, probably more like 10, 15, or 20 hours a week will be needed.</p> <p>Math is a subject that requires constant, steady practice. Mastering most skills (violin, chess, ...) requires about 10 years of steady, intense work. What has this student's work been like for the last 10 years?</p>
14,726
<p>I am a tutor for a student and I work with him 7 days a week, for about 2-3 hours a day. The student severely struggles with math, although I am a tutor for every subject (he is in high school). His first chemistry exam he got a 77, and I brought it up to a 96. I studied with him for history and he received 100 on that test. However, his first math test was a 73. I spoke with the parents and told them that the homework he is receiving is very different (in terms of difficulty than the exam). The homework is much easier. So I decided that despite how easy his homework is, I will come up with challenging problems that are similar to his exam problems.</p> <p>He had the test today and said he really messed up on it. I spent 4 hours yesterday and 4 hours the day before preparing with him. I gave him a mock exam, I came up with questions that are similar to his review sheet. I did everything I thought I could to help him. What should I do?</p> <p>I noticed that during the mock exam, he could barely answer any questions without turning to me to ask for clarification and then he would stop midway and though his logic was sometimes correct, he would make silly errors.</p> <p>Is there anything I can do in order to fix this situation? I feel responsible for his bad grade. What more can I offer to help him, or what more can I do in order to make him succeed?</p> <p>I feel like I failed him, and that it's my fault. During the sessions I am very attentive and any small thing he doesn't understand, I make sure he gets it. So I don't know what to do. I've never experienced this before. </p>
WeCanLearnAnything
7,151
<p>This is very hard to answer without details such as:</p> <ul> <li>Is this student generally a sense-making student or one who just seeks answers? How do you know?</li> <li>Is the student lacking prior knowledge? Many students get high grades for years in math because they rote-memorize procedures while understanding ~nothing. For example, they can do long division to calculate <span class="math-container">$249\div15.3$</span> but cannot estimate an answer, nor can they generate a word problem that might go with it or distinguish it from <span class="math-container">$15.3\div249$</span>. What have you done or are you planning to do to assess for this? Weak prior knowledge is almost always the biggest problem with students who struggle with math but succeed in all other courses.</li> <li>You said "... any small thing he doesn't understand, I make sure he gets it." How are you making sure of this? How do you know he is not merely appearing to understand? Inferring understanding of mathematical concepts from typical school math is not like counting push-ups. If you do 10 push-ups, you can obviously do 10 push-ups. If you do 10 division calculations correctly, you might have <em>zero</em> understanding of division.</li> </ul> <p>I can give this advice, though. Generally, it is not a good idea to help students partway through a problem. Make them try their hardest first, then share all their thoughts; then you both have a clear idea of where they're at. Now you have the information you need to provide appropriate feedback and hints. After they succeed, they should try a <em>mix</em> of different types of problems, at least one of which is like the one you just helped them with. [If it isn't a mix, they might just robotically apply the same algorithm without thinking.]</p> <p>Assistance before their best effort means that anything that makes it to the page is a <em>joint</em> effort. Furthermore, students typically ask for help with the hardest part of the question, e.g. deciding if it's permutations, combinations, or neither. If you say "It's permutations", they might then spend 5 minutes hand-calculating an answer, get it right, and then think they've mastered it... when really they had no conceptual idea of what was happening at all and would have gotten a zero on the test.</p>
2,248,550
<p>Will the value be of the indeterminate form $\frac{0}{0}$? Do I have to use L'Hôpital's rule? Or can I say that the limit doesn't exist?</p>
mlc
360,141
<p>Rationalize the denominator $$\lim_{\substack{x \rightarrow 0^{+} \\ y \rightarrow 1^{-}}} \frac{x+y-1}{\sqrt{x}-\sqrt{1-y}} \frac{\sqrt{x}+\sqrt{1-y}}{\sqrt{x}+\sqrt{1-y}} = \\ \lim_{\substack{x \rightarrow 0^{+} \\ y \rightarrow 1^{-}}} \frac{(x+y-1)(\sqrt{x}+\sqrt{1-y})}{x-(1-y)} = \\ \lim_{\substack{x \rightarrow 0^{+} \\ y \rightarrow 1^{-}}} \sqrt{x}+\sqrt{1-y} = 0$$</p>
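A numerical spot check along one admissible path (a sketch; the path $y = 1 - 4x$ is an arbitrary choice) confirms that the original quotient agrees with the simplified form $\sqrt{x}+\sqrt{1-y}$ and tends to $0$:

```python
import math

for eps in (1e-2, 1e-4, 1e-6):
    x, y = eps, 1 - 4 * eps                  # x -> 0+, y -> 1-
    raw = (x + y - 1) / (math.sqrt(x) - math.sqrt(1 - y))
    simplified = math.sqrt(x) + math.sqrt(1 - y)
    # both equal 3*sqrt(eps) on this path, which tends to 0
    assert math.isclose(raw, simplified, rel_tol=1e-6)
```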
165,582
<p>The three lines intersect in the point $(1, 1, 1)$: $(1 - t, 1 + 2t, 1 + t)$, $(u, 2u - 1, 3u - 2)$, and $(v - 1, 2v - 3, 3 - v)$. How can I find three planes which also intersect in the point $(1, 1, 1)$ such that each plane contains one and only one of the three lines?</p> <p>Using the equation for a plane $$a_i x + b_i y + c_i z = d_i,$$ I get $9$ equations.</p> <p>Sharing equations with the lines:</p> <p>$$a_1(1 - t) + b_1(1 + 2t) + c_1(1 + t) = d_1,$$ $$a_2(u) + b_2(2u - 1) + c_2(3u - 2) = d_2,$$ $$a_3(v - 1) + b_3(2v - 3) + c_3(3 - v) = d_3.$$</p> <p>Intersection at $(1,1,1)$: $$a_1 + b_1 + c_1 = d_1,$$ $$a_2 + b_2 + c_2 = d_2,$$ $$a_3 + b_3 + c_3 = d_3.$$</p> <p>Dot product of plane normals and line vectors is $0$ since perpendicular: $$\langle a_1, b_1, c_1 \rangle \cdot \langle -1, 2, 1 \rangle = -a_1 + 2b_1 + c_1 = 0,$$ $$\langle a_2, b_2, c_2\rangle \cdot \langle 1, 2, 3\rangle = a_2 + 2b_2 + 3c_2 = 0,$$ $$\langle a_3, b_3, c_3 \rangle \cdot \langle 1, 2, -1 \rangle = a_3 + 2b_3 - c_3 = 0.$$</p> <p>I know how to find the intersection of $3$ planes using matrices/row reduction, and I know some relationships between lines and planes. However, I seem to come up with $12$ unknowns and $9$ equations for this problem. I know the vectors for the lines must be perpendicular to the normals of the planes, thus the dot product between the two should be $0$. I also know that the planes pass through the point $(1,1,1)$ and the $x,y,z$ coordinates for the parameters given in the line equations. What information am I missing? Maybe there are multiple solutions. If so, how can these planes be described with only a line and one point? Another thought was to convert the planes to parametric form, but to describe a plane with parameters normally I would have $2$ vectors and one point, but here I only have one vector and one point.</p>
Robert Israel
8,508
<p>$2k(k+5)/2 - k(k+3)=2k$ with $a=k(k+5)/2$, $x=k$, $y=k$, would mean $b=k+3$, not $k(k+3)$. The conclusion is that $(k(k+5)/2,k+3) | 2k$, not $2$. For example, with $k=3$, $(24,6)=6$. </p>
165,582
<p>The three lines intersect in the point $(1, 1, 1)$: $(1 - t, 1 + 2t, 1 + t)$, $(u, 2u - 1, 3u - 2)$, and $(v - 1, 2v - 3, 3 - v)$. How can I find three planes which also intersect in the point $(1, 1, 1)$ such that each plane contains one and only one of the three lines?</p> <p>Using the equation for a plane $$a_i x + b_i y + c_i z = d_i,$$ I get $9$ equations.</p> <p>Sharing equations with the lines:</p> <p>$$a_1(1 - t) + b_1(1 + 2t) + c_1(1 + t) = d_1,$$ $$a_2(u) + b_2(2u - 1) + c_2(3u - 2) = d_2,$$ $$a_3(v - 1) + b_3(2v - 3) + c_3(3 - v) = d_3.$$</p> <p>Intersection at $(1,1,1)$: $$a_1 + b_1 + c_1 = d_1,$$ $$a_2 + b_2 + c_2 = d_2,$$ $$a_3 + b_3 + c_3 = d_3.$$</p> <p>Dot product of plane normals and line vectors is $0$ since perpendicular: $$\langle a_1, b_1, c_1 \rangle \cdot \langle -1, 2, 1 \rangle = -a_1 + 2b_1 + c_1 = 0,$$ $$\langle a_2, b_2, c_2\rangle \cdot \langle 1, 2, 3\rangle = a_2 + 2b_2 + 3c_2 = 0,$$ $$\langle a_3, b_3, c_3 \rangle \cdot \langle 1, 2, -1 \rangle = a_3 + 2b_3 - c_3 = 0.$$</p> <p>I know how to find the intersection of $3$ planes using matrices/row reduction, and I know some relationships between lines and planes. However, I seem to come up with $12$ unknowns and $9$ equations for this problem. I know the vectors for the lines must be perpendicular to the normals of the planes, thus the dot product between the two should be $0$. I also know that the planes pass through the point $(1,1,1)$ and the $x,y,z$ coordinates for the parameters given in the line equations. What information am I missing? Maybe there are multiple solutions. If so, how can these planes be described with only a line and one point? Another thought was to convert the planes to parametric form, but to describe a plane with parameters normally I would have $2$ vectors and one point, but here I only have one vector and one point.</p>
Bill Dubuque
242
<p>Let $\rm\:j = (k\!+\!3,\, k(k\!+\!5)/2).\,$ The solvability criterion is $\rm\,j\:|\:2k,\,$ not $\rm\:j\:|\:2.\,$ The two are equivalent only when $\rm\:(j,k) = 1\iff (k,3) = 1.$ Otherwise $\rm\:3\:|\:k\:|\:j\:$ thus $\rm \:j\nmid 2.$</p>
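The stated criterion — that $\rm j=(k+3,\, k(k+5)/2)$ always divides $2k$, but not necessarily $2$ — can be spot-checked numerically (a sketch):

```python
from math import gcd

for k in range(1, 500):
    a = k * (k + 5) // 2            # k(k+5) is always even, so this is exact
    j = gcd(k + 3, a)
    assert (2 * k) % j == 0         # j | 2k in every case
# for k = 3: j = gcd(6, 12) = 6, which divides 2k = 6 but not 2
assert gcd(3 + 3, 3 * (3 + 5) // 2) == 6
```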
2,375,298
<p>My question is as follows:</p> <p>I have four different dice and I'm trying to figure out how many possible combinations there are of $(6,6,6,3)$.</p> <p>My intuition tells me that there are 24 combinations. I'm imagining we have 4 spots:</p> <hr> <p>For the first spot there are 4 options $(6,6,6,3)$; for the second spot there are 3 options; etc.</p> <p>I believe this is wrong but I can't figure out the flaw in my reasoning.</p> <p>Any help would be appreciated. Thanks!</p>
Robert Israel
8,508
<p>If $x_n$ is a weakly Cauchy sequence for each $n$, and $x_n \to x$ in $\ell_\infty(X)$, i.e. $x_n(j)$ converges uniformly to $x(j)$ as $n \to \infty$, I claim $x$ is weakly Cauchy. If not, there is $y \in X^*$ with (for convenience) $\|y\| \le 1$ such that $y(x(j))$ is not a Cauchy sequence, i.e. there is $\epsilon &gt; 0$ such that for all $N$ there exist $j,k &gt; N$ with $|y(x(j)) - y(x(k))| &gt; \epsilon$. But if $n$ is large enough, $\|x - x_n\|_\infty &lt; \epsilon/3$, thus $\|x(j) - x_n(j)\| &lt; \epsilon/3$ for all $j$, and then $$|y(x_n(j)) -y(x_n(k))| \ge |y(x(j)) - y(x(k))| - |y(x_n(j)) - y(x(j))| - |y(x(k)) - y(x_n(k))|&gt; \frac{\epsilon}3$$ contradicting the assumption that $x_n$ is weakly Cauchy.</p>
4,328,630
<p>While preparing for a midterm, I came across this question:</p> <blockquote> <p>Suppose a restaurant is visited by 10 clients per hour on average, and clients follow a homogeneous Poisson process. Independently of the other clients, each client has a 20% chance of eating here and an 80% chance of taking away. On average, how many clients should be expected before one eats here? </p> </blockquote> <p>Proposed answers:</p> <ul> <li>8</li> <li>4</li> <li>2</li> </ul> <p>For me the correct answer is 4, but many of my friends have answered &quot;2&quot; because they decomposed the Poisson process into two Poisson processes, one of rate $0.2\times 10$ and another of rate $0.8\times 10$.</p> <p>Who is right? The question is really tricky, isn't it?</p> <p>Thanks for your help!</p>
epi163sqrt
132,007
<p>We look in somewhat more detail at the OP's first identity and calculate <span class="math-container">\begin{align*} \frac{\partial\left(\boldsymbol {EJE}^{T}\right)}{\partial\boldsymbol {E}} \end{align*}</span></p> <p>The matrix derivative used in the OP's first cited <em><a href="https://lewisgroup.uta.edu/ee5329/lectures/Brewer%20Kronecker.pdf" rel="nofollow noreferrer">paper</a></em> is based on W. J. Vetter's definition. Let <span class="math-container">$X=(x_{ij})$</span> be a matrix of order <span class="math-container">$(r\times s)$</span>. Let <span class="math-container">$Y$</span> be a matrix of order <span class="math-container">$(p\times q)$</span>. We define the derivative of a matrix <span class="math-container">$Y$</span> with respect to a matrix <span class="math-container">$X$</span> as the partitioned matrix <span class="math-container">\begin{align*} \frac{\partial\boldsymbol{Y}}{\partial\boldsymbol{X}}:= \begin{pmatrix} \frac{\partial \boldsymbol{Y}}{\partial x_{11}}&amp;\frac{\partial \boldsymbol{Y}}{\partial x_{12}}&amp;\cdots&amp;\frac{\partial \boldsymbol{Y}}{\partial x_{1s}}\\ \frac{\partial \boldsymbol{Y}}{\partial x_{21}}&amp;\frac{\partial \boldsymbol{Y}}{\partial x_{22}}&amp;\cdots&amp;\frac{\partial \boldsymbol{Y}}{\partial x_{2s}}\\ \vdots&amp;\vdots&amp;\ddots&amp;\vdots\\ \frac{\partial \boldsymbol{Y}}{\partial x_{r1}}&amp;\frac{\partial \boldsymbol{Y}}{\partial x_{r2}}&amp;\cdots&amp;\frac{\partial \boldsymbol{Y}}{\partial x_{rs}}\\ \end{pmatrix} =\sum_{i=1}^{r}\sum_{j=1}^{s}\boldsymbol{E}_{ij}^{(r\times s)}\otimes\frac{\partial\boldsymbol{Y}}{\partial\,x_{ij}}\tag{1} \end{align*}</span> The matrix <span class="math-container">$\boldsymbol{E}_{ij}^{(r\times s)}$</span> is called an <em>elementary matrix</em>. 
It has order <span class="math-container">$(r\times s)$</span>, a <span class="math-container">$1$</span> at position <span class="math-container">$(i,j)$</span> and is zero at all other positions.</p> <blockquote> <p>Let <span class="math-container">$\boldsymbol{X}$</span> be matrix of order <span class="math-container">$(r\times s)$</span>, <span class="math-container">$\boldsymbol{Y}$</span> of order <span class="math-container">$(p\times q)$</span> and <span class="math-container">$\boldsymbol{Z}$</span> of order <span class="math-container">$(q\times u)$</span>. We obtain <span class="math-container">\begin{align*} \color{blue}{\frac{\partial (\boldsymbol{Y}\boldsymbol{Z})}{\partial \boldsymbol{X}}} &amp;=\sum_{i=1}^{r}\sum_{j=1}^{s}\boldsymbol{E}_{ij}^{(r\times s)}\otimes \frac{\partial\left(\boldsymbol{Y}\boldsymbol{Z}\right)}{\partial x_{ij}}\tag{2.1}\\ &amp;=\sum_{i,j}\boldsymbol{E}_{ij}^{(r\times s)}\otimes \left(\frac{\partial\boldsymbol{Y}}{\partial x_{ij}}\boldsymbol{Z}+\boldsymbol{Y}\frac{\partial\boldsymbol{Z}}{\partial x_{ij}}\right)\tag{2.2}\\ &amp;=\sum_{i,j}\boldsymbol{E}_{ij}^{(r\times s)}\otimes \left(\frac{\partial\boldsymbol{Y}}{\partial x_{ij}}\boldsymbol{Z}\right)+\sum_{i,j}\boldsymbol{E}_{ij}^{(r\times s)}\otimes \left(\boldsymbol{Y}\frac{\partial\boldsymbol{Z}}{\partial x_{ij}}\right)\tag{2.3}\\ &amp;=\sum_{i,j}\left(\boldsymbol{E}_{ij}^{(r\times s)}\boldsymbol{I}_s\right)\otimes \left(\frac{\partial\boldsymbol{Y}}{\partial x_{ij}}\boldsymbol{Z}\right)\\ &amp;\qquad+\sum_{i,j}\left( \boldsymbol{I}_r\boldsymbol{E}_{ij}^{(r\times s)}\right)\otimes \left(\boldsymbol{Y}\frac{\partial\boldsymbol{Z}}{\partial x_{ij}}\right)\tag{2.4}\\ &amp;=\sum_{i,j}\left(\boldsymbol{E}_{ij}^{(r\times s)}\otimes\frac{\partial\boldsymbol{Y}}{\partial x_{ij}}\right) \left(\boldsymbol{I}_s\otimes\boldsymbol{Z}\right)\\ &amp;\qquad+\sum_{i,j}\left( \boldsymbol{I}_r\otimes\boldsymbol{Y}\right) \left(\boldsymbol{E}_{ij}^{(r\times 
s)}\otimes\frac{\partial\boldsymbol{Z}}{\partial x_{ij}}\right)\tag{2.5}\\ &amp;\,\,\color{blue}{=\frac{\partial \boldsymbol{Y}}{\partial \boldsymbol{X}}\left(\boldsymbol{I}_s\otimes\boldsymbol{Z}\right) +\left( \boldsymbol{I}_r\otimes\boldsymbol{Y}\right)\frac{\partial \boldsymbol{Z}}{\partial \boldsymbol{X}}}\tag{2.6}\\ \end{align*}</span></p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (2.1) we use the representation (1).</p> </li> <li><p>In (2.2) we use the product rule of derivation.</p> </li> <li><p>In (2.3) we use the identity <span class="math-container">$(\boldsymbol{A}+ \boldsymbol{B})\otimes \boldsymbol{C} =(\boldsymbol{A}\otimes \boldsymbol{C})+(\boldsymbol{B}\otimes \boldsymbol{C})$</span>.</p> </li> <li><p>In (2.4) we use the identity <span class="math-container">$\boldsymbol{I}_r\boldsymbol{A}^{(r\times s)}=\boldsymbol{A}^{(r\times s)}=\boldsymbol{A}^{(r\times s)}\boldsymbol{I}_s$</span>.</p> </li> <li><p>In (2.5) we use the identity <span class="math-container">$(\boldsymbol{A}\otimes\boldsymbol{C})(\boldsymbol{B}\otimes\boldsymbol{D}) =(\boldsymbol{A}\boldsymbol{B}\otimes\boldsymbol{C}\boldsymbol{D})$</span></p> </li> <li><p>In (2.6) we use again the representation from (1).</p> </li> </ul> <blockquote> <p>Based on the identity (2.6) we consider a matrix <span class="math-container">$X$</span> of order <span class="math-container">$(r\times s)$</span>, <span class="math-container">$Y$</span> of order <span class="math-container">$(s\times s)$</span> and obtain <span class="math-container">\begin{align*} \color{blue}{\frac{\partial (\boldsymbol{X}\boldsymbol{Y}\boldsymbol{X}^T)}{\partial \boldsymbol{X}}} &amp;=\frac{\partial (\boldsymbol{X}\boldsymbol{Y})}{\partial \boldsymbol{X}} \left(\boldsymbol{I}_s\otimes\boldsymbol{X}^T\right) +\left(\boldsymbol{I}_r\otimes\boldsymbol{X}\boldsymbol{Y}\right) \frac{\partial \boldsymbol{X}^T}{\partial \boldsymbol{X}}\\ &amp;=\left(\frac{\partial \boldsymbol{X}}{\partial 
\boldsymbol{X}}\left(\boldsymbol{I}_s\otimes\boldsymbol{Y}\right) +\left(\boldsymbol{I}_r\otimes\boldsymbol{X}\right)\frac{\partial \boldsymbol{Y}}{\partial \boldsymbol{X}}\right) \left(\boldsymbol{I}_s\otimes\boldsymbol{X}^T\right)\\ &amp;\qquad+\left(\boldsymbol{I}_r\otimes\boldsymbol{X}\boldsymbol{Y}\right) \frac{\partial \boldsymbol{X}^T}{\partial \boldsymbol{X}}\\ &amp;=\frac{\partial \boldsymbol{X}}{\partial \boldsymbol{X}}\left(\boldsymbol{I}_s\otimes\boldsymbol{Y}\right) \left(\boldsymbol{I}_s\otimes\boldsymbol{X}^T\right) +\left(\boldsymbol{I}_r\otimes\boldsymbol{X}\right)\frac{\partial \boldsymbol{Y}}{\partial \boldsymbol{X}} \left(\boldsymbol{I}_s\otimes\boldsymbol{X}^T\right)\\ &amp;\qquad+\left(\boldsymbol{I}_r\otimes\boldsymbol{X}\boldsymbol{Y}\right) \frac{\partial \boldsymbol{X}^T}{\partial \boldsymbol{X}}\\ &amp;\,\,\color{blue}{=\frac{\partial \boldsymbol{X}}{\partial \boldsymbol{X}}\left(\boldsymbol{I}_s\otimes\boldsymbol{Y}\boldsymbol{X}^T\right) +\left(\boldsymbol{I}_r\otimes\boldsymbol{X}\right)\frac{\partial \boldsymbol{Y}}{\partial \boldsymbol{X}} \left(\boldsymbol{I}_s\otimes\boldsymbol{X}^T\right)}\\ &amp;\qquad\color{blue}{+\left(\boldsymbol{I}_r\otimes\boldsymbol{X}\boldsymbol{Y}\right) \frac{\partial \boldsymbol{X}^T}{\partial \boldsymbol{X}}}\tag{3} \end{align*}</span></p> </blockquote> <p><em>Hint:</em> This notation of matrix calculus presented nicely in OPs cited paper by J.W. Brewer can also be found in <em><a href="https://www.maa.org/press/maa-reviews/kronecker-products-and-matrix-calculus-with-applications" rel="nofollow noreferrer">Kronecker Products &amp; Matrix Calculus with Applications</a></em> by A. 
Graham.</p> <p>We can simplify (3) somewhat by introducing the <em>permutation matrix</em> <span class="math-container">$\boldsymbol{U}^{(r\times s)}$</span> which is of order <span class="math-container">$(rs\times rs)$</span> and which has precisely a <span class="math-container">$1$</span> in each row and in each column and is zero otherwise.</p> <p>We get a permutation matrix <span class="math-container">\begin{align*} \color{blue}{\frac{\partial \boldsymbol{X}^T}{\partial \boldsymbol{X}}} &amp;=\sum_{i,j}E_{ij}^{(r\times s)}\otimes \frac{\partial \boldsymbol{X}^T}{\partial x_{ij}}\\ &amp;=\sum_{i,j}E_{ij}^{(r\times s)}\otimes E_{ji}^{(s\times r)} \color{blue}{=: \boldsymbol{U}} \end{align*}</span> and the related matrix <span class="math-container">\begin{align*} \color{blue}{\frac{\partial \boldsymbol{X}}{\partial \boldsymbol{X}}} &amp;=\sum_{i,j}E_{ij}^{(r\times s)}\otimes \frac{\partial \boldsymbol{X}}{\partial x_{ij}}\\ &amp;=\sum_{i,j}E_{ij}^{(r\times s)}\otimes E_{ij}^{(r\times s)} \color{blue}{=: \boldsymbol{\overline{U}}} \end{align*}</span></p> <blockquote> <p>With <span class="math-container">$\boldsymbol{U},\boldsymbol{\overline{U}}$</span> expression (3) can be written as <span class="math-container">\begin{align*} \color{blue}{\frac{\partial (\boldsymbol{X}\boldsymbol{Y}\boldsymbol{X}^T)}{\partial \boldsymbol{X}}} &amp;\,\,\color{blue}{=\boldsymbol{\overline{U}}\left(\boldsymbol{I}_s\otimes\boldsymbol{Y}\boldsymbol{X}^T\right) +\left(\boldsymbol{I}_r\otimes\boldsymbol{X}\right)\frac{\partial \boldsymbol{Y}}{\partial \boldsymbol{X}} \left(\boldsymbol{I}_s\otimes\boldsymbol{X}^T\right)}\\ &amp;\qquad\color{blue}{+\left(\boldsymbol{I}_r\otimes\boldsymbol{X}\boldsymbol{Y}\right) \boldsymbol{U}} \end{align*}</span></p> </blockquote>
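The boxed identity can be checked numerically for a constant $\boldsymbol{Y}$ (so the middle $\partial\boldsymbol{Y}/\partial\boldsymbol{X}$ term vanishes). The sketch below (assuming numpy; the dimensions $r=2$, $s=3$ are arbitrary) builds the partitioned derivative directly from $\partial(\boldsymbol{X}\boldsymbol{Y}\boldsymbol{X}^T)/\partial x_{ij} = \boldsymbol{E}_{ij}\boldsymbol{Y}\boldsymbol{X}^T + \boldsymbol{X}\boldsymbol{Y}\boldsymbol{E}_{ij}^T$ and compares it with the Kronecker-product formula:

```python
import numpy as np

rng = np.random.default_rng(0)
r, s = 2, 3
X = rng.standard_normal((r, s))
Y = rng.standard_normal((s, s))    # constant, so the dY/dX term drops out

def E(i, j, rows, cols):
    """Elementary matrix with a 1 at position (i, j)."""
    M = np.zeros((rows, cols))
    M[i, j] = 1.0
    return M

# Vetter-style partitioned derivative: block (i, j) holds d(X Y X^T)/dx_ij
D_direct = np.block([[E(i, j, r, s) @ Y @ X.T + X @ Y @ E(i, j, r, s).T
                      for j in range(s)] for i in range(r)])

Ubar = sum(np.kron(E(i, j, r, s), E(i, j, r, s)) for i in range(r) for j in range(s))
U = sum(np.kron(E(i, j, r, s), E(j, i, s, r)) for i in range(r) for j in range(s))
D_formula = Ubar @ np.kron(np.eye(s), Y @ X.T) + np.kron(np.eye(r), X @ Y) @ U

assert np.allclose(D_direct, D_formula)
```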
270,410
<p>I have simplified the equations and decreased the number of variables to 5, and I have changed the parameters' values, as I think the trouble with the equations in <a href="https://mathematica.stackexchange.com/questions/270375/findrootjsing-encountered-a-singular-jacobian-at-the-point-when-solving-nonli">my previous question</a> came from improper parameter values.</p> <p>The new code is as follows:</p> <pre><code>equa={(AmI[1] - BmI[1]/R3^2) CmI[1] == k1 u0, R3 AmI[1] DmI[1] == (BmI[1] DmI[1])/R3, (AmI[1] - BmI[1]/R2^2) CmI[1] == (R1^(-((2 π)/β)) R2^(-((π + β)/β)) (R1^((2 π)/β) - R2^((2 π)/β)) β Aki[1]* (Sin[thetai] + Sin[thetai + β]))/(-π^2 + β^2), (AmI[1] - BmI[1]/R2^2) DmI[1] == (R1^(-((2 π)/β)) R2^(-((π + β)/β)) (R1^((2 π)/β) - R2^((2 π)/β)) β Aki[1]* (Cos[thetai] + Cos[thetai + β]))/(π^2 - β^2), Aki[1] == -((2 β (R2^2 AmI[1] + BmI[1]) ((Cos[thetai] + Cos[thetai + β]) DmI[1] - CmI[1] (Sin[thetai] + Sin[thetai + β])))/(R2 (π - β) (π + β)))} system = equa; vars = {AmI[1], BmI[1], CmI[1], DmI[1], Aki[1]}; parameters = {u0 -&gt; 4*π*10^(-7), R1 -&gt; 4/100, R2 -&gt; 7/100, R3 -&gt; 8/100, β -&gt; π/4, k1 -&gt; (11/10)^5, L -&gt; 0.1, N1 -&gt; 50, K -&gt; 50, thetai -&gt; π/6}; givenPoint = {{AmI[1], 0.1}, {BmI[1], 0.1}, {CmI[1], 0.1}, {DmI[1], 0.1}, {Aki[1], 0.1 + I}}; NMinimize[# . # &amp;[equa /. Equal -&gt; Subtract /. parameters], vars] </code></pre> <p><code>{1.53176*10^-12, {AmI[1] -&gt; -1.75396*10^-6, BmI[1] -&gt; 8.53681*10^-9, CmI[1] -&gt; -0.410309, DmI[1] -&gt; 0.317123, Aki[1] -&gt; 2.23402*10^-13}}</code></p> <p>It seems that the objective is approximately 0, but it is not exactly 0, so I cannot get the solution by <code>Solve</code>.</p> <p><strong>The most important question is how to analyze these nonlinear equations in 5 variables mathematically? For example, using <code>MatrixRank</code> or other functions to determine under which conditions the equations will and will not have a solution.</strong></p> <p><strong>I do not know whether it is effective to use <code>MatrixRank</code> for nonlinear equations.</strong></p> <p><strong>By the way, I do not know which of the variables should be real and which complex, and maybe the initial values are improper.</strong></p>
fhrl
87,268
<p>1. Surely even if <code>DmI[1]=0</code> and <code>Aki[1]=0</code>, there is still one equation that cannot be satisfied.</p> <p>2. I have tried to change the first 2 equations to</p> <pre><code> (AmI[1] - BmI[1]/R3^2) CmI[1] == k1 u0, R3 AmI[1] DmI[1] - (BmI[1] DmI[1])/R3==0, </code></pre> <p>Even making <code>k1 u0=0</code>, so that the first 2 equations are each equal to 0, there is no solution.</p> <p>3. Changing the first 2 equations to</p> <pre><code>(AmI[1] - BmI[1]/R3^2) CmI[1] == C1, R3 AmI[1] DmI[1] - (BmI[1] DmI[1])/R3==C2, </code></pre> <p>with <code>C1</code> not equal to 0 and <code>C2</code> not equal to 0, there is still no solution.</p>
151,956
<p>I'm looking for a general method to evaluate expressions of the form</p> <p>$$\frac{\mathrm{d}(u^v)}{\mathrm{d}u}\text{ and }\frac{\mathrm{d}(u^v)}{\mathrm{d}v}\;.$$</p> <p>I know that the answers to these are, respectively, $u^{v-1}v$ and $u^v\ln u$, but am unsure of how to obtain them, and how the chain rule applies here.</p> <p>I'd be very grateful of any enlightenment.</p> <p>With very many thanks,</p> <p>Froskoy.</p>
Gerry Myerson
8,269
<p>In the first one, if $v$ is a constant, then the chain rule is not involved. If $v$ is not a constant, but is a function of $u$, then your formula is wrong, and some form of the chain rule is needed. One way to go about it is $u^{v(u)}=e^{v(u)\log u}$; the derivative is $$e^{v(u)\log u}\times{d\over du}(v(u)\log u)=e^{v(u)\log u}\left({dv\over du}\log u+{v(u)\over u}\right)$$ </p> <p>Similar remarks apply to the second one, this time with the question being whether $u$ is a constant or a function of $v$. </p>
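The general formula (with $v$ a function of $u$) can be confirmed symbolically; a sketch assuming sympy:

```python
import sympy as sp

u = sp.symbols('u', positive=True)
v = sp.Function('v')(u)                   # v is an arbitrary function of u
lhs = sp.diff(u**v, u)
rhs = u**v * (sp.diff(v, u) * sp.log(u) + v / u)
assert sp.simplify(lhs - rhs) == 0
# if v is constant, the log term drops and the v*u**(v-1) rule reappears
```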
23,268
<p>I'm the sort of mathematician who works really well with elements. I really enjoy point-set topology, and category theory tends to drive me crazy. When I was given a bunch of exercises on subjects like limits, colimits, and adjoint functors, I was able to do them, although I am sure my proofs were far longer and more laborious than they should have been. However, I felt like most of the understanding I gained from these exercises was gone within a week. I have a copy of MacLane's "Categories for the Working Mathematician," but whenever I pick it up, I can never seem to get through more than two or three pages (except in the introduction on foundations).</p> <p>A couple months ago, I was trying to use the statements found in Hartshorne about glueing schemes and morphisms and realized that these statements were inadequate for my purposes. Looking more closely, I realized that Hartshorne's hypotheses are "wrong," in roughly the same way that it is "wrong" to require, in the definition of a basis for a topology that it be closed under finite intersections. (This would, for instance, exclude the set of open balls from being a basis for $\mathbb{R}^n$.) Working through it a bit more, I realized that the "right" statement was most easily expressed by saying that a certain kind of diagram in the category of schemes has a colimit. At this point, the notion of "colimit" began to seem much more manageable: a colimit is a way of gluing objects (and morphisms).</p> <p>However, I cannot think of any similar intuition for the notion of "limit." Even in the case of a fibre product, a limit can be anything from an intersection to a product, and I find it intimidating to try to think of these two very different things as a special cases of the same construction. 
I understand how to show that they are; it just does not make intuitive sense, somehow.</p> <p>For another example, I think (and correct me if I am wrong) that <strike>the sheaf condition on a presheaf can be expressed as stating that the contravariant functor takes colimits to limits</strike>. [This is not correct as stated. See Martin Brandenburg's answer below for an explanation of why not, as well as what the correct statement is.] It seems like a statement this simple should make everything clearer, but I find it much easier to understand the definition in terms of compatible local sections gluing together. I can (I think) prove that they are the same, but by the time I get to one end of the proof, I've lost track of the other end intuitively.</p> <p>Thus, my question is this: Is there a nice, preferably geometric intuition for the notion of limit? If anyone can recommend a book on category theory that they think would appeal to someone like me, that would also be appreciated.</p>
Buschi Sergio
6,262
<p>Consider a truncated cylinder as a family of circles (say of radius 1): $(C_i)_{i\in [0,1]}$, and view this as a functor from the set (discrete category) $[0,1]$ to $Set$ (the category of sets). Then the limit is the class of (not necessarily continuous) curves that are graphs of functions $(f, g): [0, 1]\to \mathbb{R}^2$ contained in the cylinder (i.e. with $f(t)^2+ g(t)^2\leq 1$ for $0\leq t\leq 1$). If $I$ is a poset, then a diagram $(C_i)_{i\in I}$ consists of circles connected by the diagram's functions. A limit is then like a path (connected iff $I$ is connected) from circle to circle that "follows" these functions and has one and only one crossing with each circle. </p> <p>For the colimit, consider the (unordered) trees made of points of some circles connected by some diagram functions; the colimit is the set of these (maximal connected) trees. </p> <p>(Of course, the choice of "circles" is only for pedagogical reasons.)</p>
3,909,005
<p>I would like to ask what the first and second derivatives of the &quot;log star&quot; function are: <span class="math-container">$f(n) = \log^*(n)$</span>.</p> <p>I want to calculate a limit and use L'Hôpital's rule, which is why I need the derivative of &quot;log star&quot;: <span class="math-container">$$\lim_{n \to \infty}\frac{\log_{2}^*(n)}{\log_{2}(n)}$$</span></p> <p>More about this function: <a href="https://en.wikipedia.org/wiki/Iterated_logarithm" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Iterated_logarithm</a></p>
Steven Stadnicki
785
<p>Hint: take <span class="math-container">$n=2^m$</span> in your limit (and make sure you understand why you can do this!). Then (using <span class="math-container">$\lg(x)$</span> for <span class="math-container">$\log_2(x)$</span>, which is a common convention) <span class="math-container">$\lg^*(n)=\lg^*(m)+1$</span>, whereas <span class="math-container">$\lg(n)=m$</span>. So your limit is <span class="math-container">$\lim\limits_{m\to\infty}\dfrac{\lg^*(m)+1}{m}$</span>. But now we can take <span class="math-container">$m=2^r$</span>, similarly, and get <span class="math-container">$\lim\limits_{r\to\infty}\dfrac{\lg^*(r)+2}{2^r}$</span>. And since <span class="math-container">$\lg^*(n)\lt n$</span> for all <span class="math-container">$n\gt 1$</span> (this should be easy to prove) this gives you the limit you're after.</p>
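To see concretely how slowly $\lg^* n$ grows (a sketch; `log_star` here counts iterated base-2 logarithms until the value drops to $1$ or below):

```python
import math

def log_star(n):
    """Number of times log2 must be applied before the value is <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

# 65536 -> 16 -> 4 -> 2 -> 1, so lg*(65536) = 4 while lg(65536) = 16
assert log_star(65536) == 4
# the ratio lg*(n)/lg(n) is already tiny for moderate n
ratio = log_star(2 ** 64) / 64
```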
2,921,927
<p>We have the following function : <span class="math-container">$$f(z)=\frac{z^2}{1-\cos z}$$</span> where <span class="math-container">$z_0=0$</span> is a removable singularity since the limit as <span class="math-container">$z$</span> goes to <span class="math-container">$0$</span> is <span class="math-container">$2$</span>. </p> <p>In such cases, in order to find the residue I proceed by trying to find the Laurent series around the singularity and checking the coefficient of the <span class="math-container">$z^{-1}$</span> term. However, the cosine is in the denominator and I can't properly find the first negative power of <span class="math-container">$z$</span>. Will I have to resort to a polynomial division to get my negative <span class="math-container">$z$</span> powers?</p>
Qi Zhu
470,938
<p>Mark's answer is, of course, the best way to approach this. Supposing it were not a removable singularity, what I'll show will still sometimes work. Specifically, you'll get the Laurent series.</p> <p>Recall the geometric series (applicable here since $|\cos x|&lt;1$ for $0&lt;|x|&lt;\pi$): $$ \frac{x^2}{1-\cos{x}} = x^2 (1+\cos(x)+\cos(x)^2+\dots) = x^2 \sum_{n=0}^\infty \left( \sum_{k=0}^\infty (-1)^k \frac{x^{2k}}{(2k)!} \right)^n $$ Collecting powers of $x$ (where that rearrangement can be justified) yields the Laurent series.</p>
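For this particular function the expansion can also be obtained directly (a sketch, assuming sympy), confirming that the residue at $0$ is $0$:

```python
import sympy as sp

z = sp.symbols('z')
f = z**2 / (1 - sp.cos(z))
assert sp.limit(f, z, 0) == 2              # removable singularity, value 2
assert sp.residue(f, z, 0) == 0            # no 1/z term in the expansion
series = sp.series(f, z, 0, 5).removeO()   # 2 + z**2/6 + z**4/120
```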
4,386,087
<p>There is an elevator which starts off containing $p$ passengers.</p> <p>There are $F$ floors.</p> <p><em>$\forall i: P_i = P(\text{passenger } i \text{ exits on any given floor})$</em> $= 1/F$.</p> <p>The passengers exit independently.</p> <p>What's the probability that the elevator door opens on all floors?</p> <p><em>Reasonable assumption: the elevator stops on at least one of the floors</em>.</p> <p>Number of all configurations: $F^{p}$.</p> <p>This follows because each passenger can exit on any of the floors.</p> <p>The number of favorable configurations will be the difference between the number of all configurations and the number of ways in which at least one of the floors is skipped.</p> <p>Therefore the probability, in accordance with the principle of inclusion-exclusion, is:</p> <p>$\frac{F^{p} - \binom{F}{1}\cdot \left( F - 1\right)^{p} + \binom{F}{2}\cdot \left( F - 2\right)^{p} - \cdots + (-1)^{F-1}\binom{F}{F - 1}\cdot 1^{p}}{F^{p}}$.</p> <p>Now how do I proceed from this point?</p>
leonbloy
312
<p>The problem is equivalent to the following (slightly neater) formulation: we place <span class="math-container">$p$</span> balls in <span class="math-container">$F$</span> urns, with uniform probability; what is the probability that all urns are occupied?</p> <p>Your approach and your solution are fine.</p> <p>The result can be written as</p> <p><span class="math-container">$$ P = \frac{F! \, S (p,F)}{F^p},$$</span></p> <p>where <span class="math-container">$S(p,F)$</span> denotes the <a href="https://math.stackexchange.com/questions/26528/m-balls-n-boxes-probability-problem">Stirling numbers of the second kind</a>.</p>
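<p>The formula can be sanity-checked for small values against brute-force enumeration and the Stirling-number recurrence (a throwaway check of my own, not part of the answer):</p>

```python
from fractions import Fraction
from itertools import product
from math import comb, factorial

def p_brute(p, F):
    # enumerate all F^p exit patterns and count those hitting every floor
    hits = sum(1 for w in product(range(F), repeat=p) if len(set(w)) == F)
    return Fraction(hits, F ** p)

def p_inclusion_exclusion(p, F):
    # sum_{k=0}^{F-1} (-1)^k C(F,k) (F-k)^p counts the surjections
    num = sum((-1) ** k * comb(F, k) * (F - k) ** p for k in range(F))
    return Fraction(num, F ** p)

def stirling2(n, k):
    # recurrence S(n, k) = k*S(n-1, k) + S(n-1, k-1)
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def p_stirling(p, F):
    return Fraction(factorial(F) * stirling2(p, F), F ** p)
```

All three routes agree, e.g. for $p=5$, $F=3$ each gives $150/243 = 50/81$.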
204,842
<p>A probability measure defined on a sample space $\Omega$ has the following properties:</p> <ol> <li>For each $E \subset \Omega$, $0 \le P(E) \le 1$</li> <li>$P(\Omega) = 1$</li> <li>If $E_1$ and $E_2$ are disjoint subsets, then $P(E_1 \cup E_2) = P(E_1) + P(E_2)$</li> </ol> <p>The above definition defines a measure that is finitely additive (by induction) but not necessarily countably additive.</p> <p>What is a probability measure that would be finitely additive but not countably additive (for a countable sample space $\Omega$)?</p> <p>The example that I have seen most commonly on forums (this and elsewhere) is to set $P(E) = 0$ if $E$ is finite and $P(E) = 1$ if $E$ is co-finite. But that is <strong>not</strong> a probability measure as defined above, since it is not defined on every subset of $\Omega$. </p> <p>So what is an example of such a probability measure, or what is the reasoning showing that a finitely additive probability measure need not be countably additive?</p>
Anonymous
57,088
<p>In the following <a href="https://lirias.kuleuven.be/bitstream/123456789/267264/1/DPS1010.pdf">note</a> the author shows that a finitely additive diffused measure on $\mathcal{P}(\omega)$ can be used to define a non Ramsey family. Combining this with a result of Mathias, it follows that it is consistent with $ZFC$ that there is no (ordinal) definable finitely additive diffused total measure on $\mathcal{P}(\omega)$.</p>
4,280,424
<p>The PDE: <span class="math-container">$$\frac1D C_t-Q=\frac2rC_r+C_{rr}$$</span></p> <p>on the domain <span class="math-container">$r \in [0,\bar{R}]$</span> and <span class="math-container">$t \in [0,+\infty)$</span> and where <span class="math-container">$D$</span> and <span class="math-container">$Q$</span> are real constants. We're looking for a function <span class="math-container">$C(r,t)$</span>.</p> <p>The BC: <span class="math-container">$$C(0,t)=f(t)$$</span> <span class="math-container">$$C_r(\bar{R},t)=0$$</span> The IC: <span class="math-container">$$C(r,0)=C_0$$</span> If <span class="math-container">$f(t)=0$</span> then I know the solution. Assume: <span class="math-container">$$C(r,t)=C_E(r)+v(r,t)$$</span> <span class="math-container">$$-Q=\frac2rC_r+C_{rr}$$</span> <span class="math-container">$$-Q=\frac2rC_E'(r)+C_E''(r)$$</span> <span class="math-container">$$rC_E''+2C_E'+Qr=0$$</span> where <span class="math-container">$C_E(r)$</span> is the steady-state solution (<span class="math-container">$t \to \infty$</span>). 
<span class="math-container">$$C_E(r)=-\frac{Qr^2}{6}+\frac{c_1}{r}+c_2$$</span> <span class="math-container">$$\text{because } C \text{ must stay bounded as } r\to 0, \text{ the term } \tfrac{c_1}{r} \text{ forces } c_1=0$$</span> But since <span class="math-container">$f(t) \neq 0$</span>, <span class="math-container">$c_2$</span> cannot be determined.</p> <p>All help will be appreciated.</p> <hr> <p><strong>Edit.</strong> In the case of <span class="math-container">$f(t)=0$</span> the solution, summarised, becomes:</p> <p><span class="math-container">$$c_2=0$$</span> <span class="math-container">$$C_E(r)=-\frac{Qr^2}{6}$$</span> <span class="math-container">$$C(r,t)=-\frac{Qr^2}{6}+v(r,t)$$</span> Compute partial derivatives: <span class="math-container">$$C_t=v_t$$</span> <span class="math-container">$$C_r=-\frac{Qr}{3}+v_r$$</span> <span class="math-container">$$C_{rr}=-\frac{Q}{3}+v_{rr}$$</span> Inserting in the PDE then gives the homogeneous PDE in <span class="math-container">$v(r,t)$</span>: <span class="math-container">$$\frac1D v_t=\frac2r v_r+v_{rr}$$</span> Ansatz: <span class="math-container">$v(r,t)=R(r)T(t)$</span>, then separation of variables yields the ODE solutions, with <span class="math-container">$-m^2$</span> a separation constant: <span class="math-container">$$T(t)=c_3\exp(-m^2 D t)$$</span> <span class="math-container">$$R(r)=c_4\frac{\sin mr}{r}$$</span> BCs: <span class="math-container">$$R(0)=0$$</span> <span class="math-container">$$R'(\bar{R})=0$$</span> <span class="math-container">$$R'=c_4\frac{mr\cos mr-\sin mr}{r^2}$$</span> <span class="math-container">$$R'(\bar{R})=c_4\frac{m\bar{R}\cos m\bar{R}-\sin m\bar{R}}{\bar{R}^2}=0$$</span> The eigenvalues <span class="math-container">$m_i$</span> are the solutions to the transcendental equation: <span class="math-container">$$m_i\bar{R}=\tan m_i\bar{R}$$</span> So we have: <span class="math-container">$$v(r,t)=\sum_{i=1}^\infty A_i\exp(-m_i^2 D t)\frac{\sin m_ir}{r}$$</span> Determine the <span class="math-container">$A_i$</span> the 
usual way with the IC and the Fourier series.</p> <p>So we have:</p> <p><span class="math-container">$$C(r,t)=-\frac{Qr^2}{6}+\sum_{i=1}^\infty A_i\exp(-m_i^2 D t)\frac{\sin m_ir}{r}$$</span></p>
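<p>The transcendental eigenvalue equation <span class="math-container">$m_i\bar{R}=\tan m_i\bar{R}$</span> can be solved numerically. Below is a small sketch of my own (not part of the original derivation) that brackets the first few nonzero roots of <span class="math-container">$\tan u = u$</span>, with <span class="math-container">$u = m\bar{R}$</span>, using the pole-free form <span class="math-container">$\sin u - u\cos u = 0$</span>:</p>

```python
import math

def g(u):
    # tan(u) = u  is equivalent to  sin(u) - u*cos(u) = 0,
    # a form with no poles, so plain bisection is safe
    return math.sin(u) - u * math.cos(u)

def bisect(lo, hi, tol=1e-12):
    glo = g(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (g(mid) > 0) == (glo > 0):
            lo, glo = mid, g(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# the k-th nonzero root lies in (k*pi, k*pi + pi/2), where g changes sign
roots = [bisect(k * math.pi, k * math.pi + math.pi / 2) for k in (1, 2, 3)]
```

The first roots are approximately $4.4934$, $7.7253$, $10.9041$; the corresponding eigenvalues are $m_i = u_i/\bar{R}$.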
Atticus Stonestrom
663,661
<p>Yes, it is the case that <span class="math-container">$\operatorname{cl}(A)=\mathbb{R}$</span>. Here is an argument that works in more general contexts: by definition, a subset <span class="math-container">$X\subseteq\mathbb{R}$</span> is closed if and only if either <span class="math-container">$X$</span> is countable or <span class="math-container">$X=\mathbb{R}$</span>. Now, <span class="math-container">$\operatorname{cl}(A)$</span> is a closed set containing <span class="math-container">$A$</span>; since <span class="math-container">$A$</span> is uncountable, this leaves only one option for what <span class="math-container">$\operatorname{cl}(A)$</span> can be: namely, <span class="math-container">$\mathbb{R}$</span> itself. More generally, this same argument shows that <span class="math-container">$\operatorname{cl}(B)=\mathbb{R}$</span> for any uncountable subset <span class="math-container">$B\subseteq\mathbb{R}$</span>.</p>
4,280,424
<p>The PDE: <span class="math-container">$$\frac1D C_t-Q=\frac2rC_r+C_{rr}$$</span></p> <p>on the domain <span class="math-container">$r \in [0,\bar{R}]$</span> and <span class="math-container">$t \in [0,+\infty)$</span> and where <span class="math-container">$D$</span> and <span class="math-container">$Q$</span> are real constants. We're looking for a function <span class="math-container">$C(r,t)$</span>.</p> <p>The BC: <span class="math-container">$$C(0,t)=f(t)$$</span> <span class="math-container">$$C_r(\bar{R},t)=0$$</span> The IC: <span class="math-container">$$C(r,0)=C_0$$</span> If <span class="math-container">$f(t)=0$</span> then I know the solution. Assume: <span class="math-container">$$C(r,t)=C_E(r)+v(r,t)$$</span> <span class="math-container">$$-Q=\frac2rC_r+C_{rr}$$</span> <span class="math-container">$$-Q=\frac2rC_E'(r)+C_E''(r)$$</span> <span class="math-container">$$rC_E''+2C_E'+Qr=0$$</span> where <span class="math-container">$C_E(r)$</span> is the steady-state solution (<span class="math-container">$t \to \infty$</span>). 
<span class="math-container">$$C_E(r)=-\frac{Qr^2}{6}+\frac{c_1}{r}+c_2$$</span> <span class="math-container">$$\text{because } C \text{ must stay bounded as } r\to 0, \text{ the term } \tfrac{c_1}{r} \text{ forces } c_1=0$$</span> But since <span class="math-container">$f(t) \neq 0$</span>, <span class="math-container">$c_2$</span> cannot be determined.</p> <p>All help will be appreciated.</p> <hr> <p><strong>Edit.</strong> In the case of <span class="math-container">$f(t)=0$</span> the solution, summarised, becomes:</p> <p><span class="math-container">$$c_2=0$$</span> <span class="math-container">$$C_E(r)=-\frac{Qr^2}{6}$$</span> <span class="math-container">$$C(r,t)=-\frac{Qr^2}{6}+v(r,t)$$</span> Compute partial derivatives: <span class="math-container">$$C_t=v_t$$</span> <span class="math-container">$$C_r=-\frac{Qr}{3}+v_r$$</span> <span class="math-container">$$C_{rr}=-\frac{Q}{3}+v_{rr}$$</span> Inserting in the PDE then gives the homogeneous PDE in <span class="math-container">$v(r,t)$</span>: <span class="math-container">$$\frac1D v_t=\frac2r v_r+v_{rr}$$</span> Ansatz: <span class="math-container">$v(r,t)=R(r)T(t)$</span>, then separation of variables yields the ODE solutions, with <span class="math-container">$-m^2$</span> a separation constant: <span class="math-container">$$T(t)=c_3\exp(-m^2 D t)$$</span> <span class="math-container">$$R(r)=c_4\frac{\sin mr}{r}$$</span> BCs: <span class="math-container">$$R(0)=0$$</span> <span class="math-container">$$R'(\bar{R})=0$$</span> <span class="math-container">$$R'=c_4\frac{mr\cos mr-\sin mr}{r^2}$$</span> <span class="math-container">$$R'(\bar{R})=c_4\frac{m\bar{R}\cos m\bar{R}-\sin m\bar{R}}{\bar{R}^2}=0$$</span> The eigenvalues <span class="math-container">$m_i$</span> are the solutions to the transcendental equation: <span class="math-container">$$m_i\bar{R}=\tan m_i\bar{R}$$</span> So we have: <span class="math-container">$$v(r,t)=\sum_{i=1}^\infty A_i\exp(-m_i^2 D t)\frac{\sin m_ir}{r}$$</span> Determine the <span class="math-container">$A_i$</span> the 
usual way with the IC and the Fourier series.</p> <p>So we have:</p> <p><span class="math-container">$$C(r,t)=-\frac{Qr^2}{6}+\sum_{i=1}^\infty A_i\exp(-m_i^2 D t)\frac{\sin m_ir}{r}$$</span></p>
Henno Brandsma
4,280
<p>The closed sets are all sets that are at most countable, or <span class="math-container">$\Bbb R$</span>. So the <em>only</em> closed superset of an uncountable set <span class="math-container">$A$</span> is <span class="math-container">$\Bbb R$</span> so</p> <p><span class="math-container">$$A \text{ uncountable } \implies \overline{A}=\Bbb R$$</span></p> <p>in this topology....</p>
4,393,193
<p>I am looking at the function <span class="math-container">$$f(x) = \begin{cases} \dfrac{x^2-1}{x^2-x} &amp; x \ne 0,1\\ 0 &amp; x=0\\ 2 &amp;x=1 \end{cases}$$</span> and am trying to show that <span class="math-container">$\lim_{x \to 0} f(x)$</span> DNE. This makes sense to me because <span class="math-container">$f$</span> goes towards <span class="math-container">$-\infty$</span> from the left and <span class="math-container">$+\infty$</span> from the right. However, in my analysis class we do not have a definition for when a limit does not exist. My first instinct was to negate the definition of the limit, but the negation only rules out one particular candidate value <span class="math-container">$L$</span> at a time, and I want to prove that the limit does not equal any number.</p> <p>I don't understand why there needs to be cases for this type of problem as detailed <a href="https://math.libretexts.org/Courses/Monroe_Community_College/MTH_210_Calculus_I_(Professor_Dean)/Chapter_2_Limits/2.7%3A_The_Precise_Definition_of_a_Limit" rel="nofollow noreferrer">here</a>.</p> <p>How can I go about proving this rigorously?</p>
Lubin
17,760
<p>Another general equation is <span class="math-container">$$ Ax^2+Bxy+Cy^2+Dx+Ey+F=0\,, $$</span> in which the graph is an ellipse if <span class="math-container">$B^2-4AC&lt;0$</span> and the graph has at least two points (it could be empty or a singleton, as you see in the cases <span class="math-container">$x^2+y^2+F=0$</span> when <span class="math-container">$F$</span> is positive or zero, respectively).</p>
2,363,733
<p>I saw this notation $V= V_1\otimes V_2$ in a survey on universal algebra, where $V$ was a variety, but the survey in question didn't define this notation. Could anyone explain what it means?</p>
Keith Kearnes
310,334
<p>The tensor product notation, $V_1\otimes V_2$, for some kind of product of varieties is used in (at least) two different ways.</p> <p><strong>Way 1.</strong></p> <p>For the Kronecker product, or tensor product. ($V_1\otimes V_2$ is the variety of $V_1$ models in $V_2$, and conversely.) You can find it used this way here</p> <p><em>Freyd, P. Algebra valued functors in general and tensor products in particular. Colloq. Math. 14 1966 89-106.</em> </p> <p>in the language of algebraic theories, and here</p> <p><em>Neumann, Walter D. Malʹcev conditions, spectra and Kronecker product. J. Austral. Math. Soc. Ser. A 25 (1978), no. 1, 103-117.</em> </p> <p>in the language of varieties. The notation $V_1\otimes V_2$ is still used to denote the tensor product of varieties.</p> <p><strong>Way 2.</strong></p> <p>$V_1\otimes V_2$ has been used to denote the categorical product in the category of varieties and clone morphisms. This product of the varieties $V_1$ and $V_2$ is the variety whose clone is the product of the clone of $V_1$ with the clone of $V_2$. I have seen it denoted by $V_1\otimes V_2$, or by $V_1\widehat{\times}V_2$, or by $V_1\times V_2$ in, for example, </p> <p><em>García, O. C.; Taylor, W. The lattice of interpretability types of varieties. Mem. Amer. Math. Soc. 50 (1984), no. 305, v+125.</em></p> <p><em>McKenzie, Ralph A new product of algebras and a type reduction theorem. Algebra Universalis 18 (1984), no. 1, 29-69.</em> </p> <p><em>Grätzer, G.; Lakser, H.; Płonka, J. Joins and direct products of equational classes. Canad. Math. Bull. 12 1969 741-744.</em> </p> <p>Fortunately, the use of $V_1\otimes V_2$ in the sense of Way 2 seems to be dying out, and instead $V_1\times V_2$ is being used for the categorical product.</p>
3,965,834
<p>Does this sum converge or diverge?</p> <p><span class="math-container">$$ \sum_{n=0}^{\infty}\frac{\sin(n)\cdot(n^2+3)}{2^n} $$</span></p> <p>To solve this I would use <span class="math-container">$$ \sin(z) = \sum \limits_{n=0}^{\infty}(-1)^n\frac{z^{2n+1}}{(2n+1)!} $$</span></p> <p>and turn it into <span class="math-container">$$\sum \limits_{n=0}^{\infty}\sin(n)\cdot\frac{(n^2+3)}{2^n} = \sum \limits_{n=0}^{\infty}(-1)^n\frac{n^{2n+1}}{(2n+1)!} \cdot \sum \limits_{n=0}^{\infty}\frac{(n^2+3)}{2^n} $$</span></p> <p>and since <span class="math-container">$$\sum \limits_{n=0}^{\infty}\frac{(n^2+3)}{2^n} \text{ and } \sum \limits_{n=0}^{\infty}(-1)^n\frac{n^{2n+1}}{(2n+1)!} $$</span></p> <p>both converge, <span class="math-container">$$ \sum \limits_{n=0}^{\infty}\frac{\sin(n)\cdot(n^2+3)}{2^n} $$</span> would also converge.</p> <p>Is my assumption true? I'm also a bit hesitant to use this, since I got the $\sin(z)$ series from a source outside the material my professor gave us.</p>
José Carlos Santos
446,262
<p>By the same argument, since both series<span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^n}{\sqrt n}\quad\text{and}\quad\sum_{n=1}^\infty\frac{(-1)^n}{\sqrt n}$$</span>are convergent (yes, they're equal), then the series<span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^n}{\sqrt n}\times\frac{(-1)^n}{\sqrt n}\left(=\sum_{n=1}^\infty\frac1n\right)$$</span>converges too. But it doesn't, right?</p> <p>You can prove that your series converges using the comparison test or Dirichlet's test:</p> <ul> <li>the series <span class="math-container">$\sum_{n=1}^\infty\sin(n)$</span> has bounded partial sums;</li> <li>the sequence <span class="math-container">$\left(\frac{n^2+3}{2^n}\right)_{n\in\Bbb N}$</span> is monotonic;</li> <li><span class="math-container">$\lim_{n\to\infty}\frac{n^2+3}{2^n}=0$</span>.</li> </ul>
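<p>As a quick numerical illustration (my own addition, assuming nothing beyond the series itself), the partial sums settle down rapidly, consistent with the comparison argument, since the tail is dominated by <span class="math-container">$\sum (n^2+3)/2^n = 12$</span>:</p>

```python
import math

def partial_sum(N):
    # partial sum of sin(n) * (n^2 + 3) / 2^n for n = 0 .. N-1
    return sum(math.sin(n) * (n * n + 3) / 2.0 ** n for n in range(N))

s50, s200 = partial_sum(50), partial_sum(200)
# beyond n = 50 the terms are below 1e-11, so s50 and s200 agree closely
```
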
4,201,477
<blockquote> <p>Integrate <span class="math-container">$$\int \frac{\cos 2x}{(\sin x+\cos x)^2}\mathrm dx$$</span></p> </blockquote> <p>I tried integrating my own way.</p> <p><span class="math-container">$$\int \frac{\cos 2x}{\sin^2x+2\sin x\cos x+\cos^2x}\mathrm dx$$</span> <span class="math-container">$$\int \cot 2x \mathrm dx$$</span> <span class="math-container">$$\frac{1}{2}\ln |\sin2x|+c$$</span></p> <p>I don't think I made a mistake, but my book derives something else.</p> <p><span class="math-container">$$\int \frac{\cos 2x \mathrm dx}{1+\sin 2x}$$</span> Taking <span class="math-container">$1+\sin 2x=z$</span> and differentiating, <span class="math-container">$\cos 2x \mathrm dx=\frac{1}{2}\mathrm dz$</span>. Substituting into the integral, <span class="math-container">$$\frac{1}{2}\int \frac{1}{z}\mathrm dz$$</span> <span class="math-container">$$\frac{1}{2}\ln|z|+c=\frac{1}{2}\ln |1+\sin 2x|+c$$</span></p> <p>Why are the two answers different? I don't think it's possible to derive one from the other.</p>
xxxx036
850,363
<p>Since <span class="math-container">$\sin^2x+\cos^2x=1\neq0$</span>, the denominator is <span class="math-container">$1+\sin 2x$</span>, not <span class="math-container">$\sin 2x$</span>; there's a mistake in the first solution.</p>
31,158
<p>To generate a 3D mesh, <a href="http://reference.wolfram.com/mathematica/TetGenLink/tutorial/UsingTetGenLink.html#167310445" rel="nofollow noreferrer">TetGen</a> can easily be used. Are there similar functions (or a way to use TetGen) to generate a 2D mesh? I know that such functionality can be <a href="https://mathematica.stackexchange.com/questions/22244/creating-a-2d-meshing-algorithm-in-mathematica">easily implemented</a>, but I would like to use a function provided by Mathematica, as I need to experiment with the number of nodes in elements and so on. I just want to solve a PDE using FEM, not play around with mesh generation.</p>
dwa
136
<p>Another approach is to use the Imtek package, which handles both 2D and 3D meshing with interfaces to Shewchuk's Triangle and TetGen respectively.</p> <p>Imtek can be obtained from <a href="http://portal.uni-freiburg.de/imteksimulation/downloads/ims" rel="nofollow">the University of Freiburg</a>. The documentation is extensive.</p>
1,130,487
<p>Jessica is playing a game where there are 4 blue markers and 6 red markers in a box. She is going to pick 3 markers without replacement. If she picks all 3 red markers, she will win a total of 500 dollars. If the first marker she picks is red but not all 3 markers are red, she will win a total of 100 dollars. Under any other outcome, she will win 0 dollars. </p> <p><strong>Solution</strong> The probability of Jessica picking 3 consecutive red markers is: $\left(\frac16\right)$</p> <p>The probability of Jessica's first marker being red, but not picking 3 consecutive red markers is:<br/>$\left(\frac35\right)-\left(\frac16\right)=\left(\frac{13}{30}\right)$ <br/> So I am a bit stuck here.<br/></p> <p><strong>What I think:</strong> it shouldn't be that complex; it should be as simple as the chance of Jessica's first marker being red, i.e. $P(\text{first marker red})=\left(\frac{6}{10}\right)$. Can anyone explain to me why the probability of Jessica's first marker being red (but not all three being red) is $\left(\frac{13}{30}\right)$?</p>
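<p>Both numbers in the quoted solution can be verified by brute force over all ordered draws (this check is my own addition):</p>

```python
from fractions import Fraction
from itertools import permutations

markers = ['R'] * 6 + ['B'] * 4               # 6 red, 4 blue
draws = list(permutations(range(10), 3))      # all 720 ordered draws w/o replacement

all_red = sum(1 for d in draws if all(markers[i] == 'R' for i in d))
first_red = sum(1 for d in draws if markers[d[0]] == 'R')

p_all_red = Fraction(all_red, len(draws))                        # 1/6
p_first_red = Fraction(first_red, len(draws))                    # 3/5
p_first_red_not_all = Fraction(first_red - all_red, len(draws))  # 13/30
```

So $3/5$ is the probability the first marker is red, and $13/30 = 3/5 - 1/6$ is the probability the first is red but not all three are; the two events are different, which is why the two numbers differ.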
vikas meena
210,698
<p>Just add a constant term and also notice that $\ln2$ is also a constant.</p>
2,825,522
<p>I have this problem:</p> <p><a href="https://i.stack.imgur.com/blD6N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/blD6N.png" alt="enter image description here"></a></p> <p>I have not managed to solve the exercise, but this is my progress so far:</p> <p><a href="https://i.stack.imgur.com/0dTdO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0dTdO.jpg" alt="enter image description here"></a></p> <p>How can I continue from here?</p>
Kenny Lau
328,173
<p>$$x = 360^\circ - 90^\circ - 120^\circ - 60^\circ = 90^\circ$$</p> <p><img src="https://i.stack.imgur.com/xyTFx.png" alt=""></p>
2,268,345
<p>Find the value of $$S=\sum_{n=1}^{\infty}\left(\frac{2}{n}-\frac{4}{2n+1}\right)$$ </p> <p>My Try:we have</p> <p>$$S=2\sum_{n=1}^{\infty}\left(\frac{1}{n}-\frac{2}{2n+1}\right)$$ </p> <p>$$S=2\left(1-\frac{2}{3}+\frac{1}{2}-\frac{2}{5}+\frac{1}{3}-\frac{2}{7}+\cdots\right)$$ so</p> <p>$$S=2\left(1+\frac{1}{2}-\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\cdots\right)$$ But we know</p> <p>$$\ln2=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots$$ So</p> <p>$$S=2(2-\ln 2)$$</p> <p>Is this correct?</p>
Felix Marin
85,343
<p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} S &amp; \equiv \sum_{n = 1}^{\infty}\pars{{2 \over n} - {4 \over 2n + 1}} = \sum_{n = 1}^{\infty}\pars{{4 \over 2n} - {4 \over 2n + 1}} = 4\sum_{n = 2}^{\infty}{\pars{-1}^{n} \over n} = 4\bracks{1 + \sum_{n = 1}^{\infty}{\pars{-1}^{n} \over n}} \\[5mm] &amp; = 4\pars{\vphantom{\LARGE A}1 + \braces{\vphantom{\Large A}-\ln\pars{\vphantom{\large A}1 - \bracks{-1}}}} = \bbx{4\bracks{\vphantom{\large A}1 - \ln\pars{2}}} \end{align}</p> <blockquote> <p>Indeed, it's quite close to <a href="https://math.stackexchange.com/a/2268350/85343">$\texttt{@Simply Beautiful Art}$ fine answer</a>.</p> </blockquote>
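<p>A numerical check (added here purely for illustration) confirms the closed form $4\left[1-\ln(2)\right] \approx 1.2274$ rather than the asker's $2(2-\ln 2) \approx 2.6137$:</p>

```python
import math

# partial sum of 2/n - 4/(2n+1); each term equals 2/(n(2n+1)) ~ 1/n^2,
# so truncating at N = 200000 leaves a tail of order 1/N
S_partial = sum(2.0 / n - 4.0 / (2 * n + 1) for n in range(1, 200001))

claim_marin = 4.0 * (1.0 - math.log(2.0))   # 4(1 - ln 2)
claim_asker = 2.0 * (2.0 - math.log(2.0))   # 2(2 - ln 2)
```
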
269,655
<p>I am trying to find a nonlinear model from the data.</p> <p><a href="https://i.stack.imgur.com/W6JEI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W6JEI.png" alt="enter image description here" /></a></p> <p>My code is below:</p> <pre><code>data = {{0.0, 0.0}, {0.05, 0.87}, {0.1, 0.99}, {0.15, 0.98}, {0.2, 0.91}, {0.25, 0.81}, {0.3, 0.71}, {0.35, 0.62}, {0.4, 0.51}, {0.45, 0.31}, {0.5, 0.31}, {0.55, 0.23}, {0.6, 0.18}, {0.65, 0.14}, {0.7, 0.08}, {0.75, 0.05}, {0.8, 0.03}, {0.85, 0.02}, {0.9, 0.01}, {0.95, 0.002}, {1, 0}}; model=((1 - x)/(1 - a))^((0.5 (1 - a))/ a) (x/a)^0.5; (* fit model*) NonlinearModelFit[data, model, a, x] </code></pre> <p><strong>NonlinearModelFit</strong> doesn't work for this model, i.e.</p> <p><a href="https://i.stack.imgur.com/lNsMy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lNsMy.png" alt="enter image description here" /></a></p> <p>Are there any other ways to solve this problem?</p> <p>Thanks in advance!</p> <p>Update:</p> <p>If I try:</p> <pre><code>NonlinearModelFit[data, {model, {a &gt; 0.000001}}, a, x] </code></pre> <p>Errors:</p> <p><a href="https://i.stack.imgur.com/SJLzN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SJLzN.png" alt="enter image description here" /></a></p>
Alex Trounev
58,388
<p>We can use <code>NMinimize</code> to solve this problem as follows</p> <pre><code>data = {{0.0, 0.0}, {0.05, 0.87}, {0.1, 0.99}, {0.2, 0.98}, {0.2, 0.91}, {0.25, 0.81}, {0.3, 0.71}, {0.35, 0.62}, {0.4, 0.51}, {0.45, 0.31}, {0.5, 0.31}, {0.55, 0.23}, {0.6, 0.18}, {0.65, 0.14}, {0.7, 0.08}, {0.75, 0.05}, {0.8, 0.03}, {0.85, 0.02}, {0.9, 0.01}, {0.95, 0.002}, {1, 0}}; f[a_, x_] := ((1 - x)/(1 - a))^((0.5 (1 - a))/a) (x/a)^0.5; vec[a_] = Table[data[[i, 2]] - f[a, data[[i, 1]]], {i, Length[data]}]; sol = NMinimize[{vec[a] . vec[a], 0 &lt; a &lt; 1}, {a}] (*Out[]= {0.0152585, {a -&gt; 0.127671}}*) </code></pre> <p>Visualization</p> <pre><code>Show[Plot[f[a, x] /. sol[[2]], {x, 0, 1}], ListPlot[data, PlotStyle -&gt; Red]] </code></pre> <p><a href="https://i.stack.imgur.com/Gp2TN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Gp2TN.png" alt="Figure 1" /></a></p>
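<p>As a cross-check outside Mathematica (entirely my own sketch, using the question's data and a plain grid search instead of <code>NMinimize</code>), the least-squares optimum for the single parameter lands in the same neighborhood:</p>

```python
data = [(0.0, 0.0), (0.05, 0.87), (0.1, 0.99), (0.15, 0.98), (0.2, 0.91),
        (0.25, 0.81), (0.3, 0.71), (0.35, 0.62), (0.4, 0.51), (0.45, 0.31),
        (0.5, 0.31), (0.55, 0.23), (0.6, 0.18), (0.65, 0.14), (0.7, 0.08),
        (0.75, 0.05), (0.8, 0.03), (0.85, 0.02), (0.9, 0.01), (0.95, 0.002),
        (1.0, 0.0)]

def model(a, x):
    # ((1-x)/(1-a))^(0.5(1-a)/a) * (x/a)^0.5, defined for 0 < a < 1, 0 <= x <= 1
    return ((1.0 - x) / (1.0 - a)) ** (0.5 * (1.0 - a) / a) * (x / a) ** 0.5

def sse(a):
    # sum of squared residuals over the data
    return sum((y - model(a, x)) ** 2 for x, y in data)

best_a = min((i / 1000.0 for i in range(1, 1000)), key=sse)
```

A one-parameter grid search is crude but enough here, since `sse` is cheap to evaluate and the feasible interval is just $(0,1)$.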
269,655
<p>I am trying to find a nonlinear model from the data.</p> <p><a href="https://i.stack.imgur.com/W6JEI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W6JEI.png" alt="enter image description here" /></a></p> <p>My code is below:</p> <pre><code>data = {{0.0, 0.0}, {0.05, 0.87}, {0.1, 0.99}, {0.15, 0.98}, {0.2, 0.91}, {0.25, 0.81}, {0.3, 0.71}, {0.35, 0.62}, {0.4, 0.51}, {0.45, 0.31}, {0.5, 0.31}, {0.55, 0.23}, {0.6, 0.18}, {0.65, 0.14}, {0.7, 0.08}, {0.75, 0.05}, {0.8, 0.03}, {0.85, 0.02}, {0.9, 0.01}, {0.95, 0.002}, {1, 0}}; model=((1 - x)/(1 - a))^((0.5 (1 - a))/ a) (x/a)^0.5; (* fit model*) NonlinearModelFit[data, model, a, x] </code></pre> <p><strong>NonlinearModelFit</strong> doesn't work for this model, i.e.</p> <p><a href="https://i.stack.imgur.com/lNsMy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lNsMy.png" alt="enter image description here" /></a></p> <p>Are there any other ways to solve this problem?</p> <p>Thanks in advance!</p> <p>Update:</p> <p>If I try:</p> <pre><code>NonlinearModelFit[data, {model, {a &gt; 0.000001}}, a, x] </code></pre> <p>Errors:</p> <p><a href="https://i.stack.imgur.com/SJLzN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SJLzN.png" alt="enter image description here" /></a></p>
xzczd
1,871
<pre><code>nlm = NonlinearModelFit[data, {model, 0 &lt; a &lt; 1}, a, x, Method -&gt; NMinimize] Plot[nlm[x], {x, 0, 1}]~Show~ListPlot@data </code></pre> <p><a href="https://i.stack.imgur.com/6GOzX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6GOzX.png" alt="enter image description here" /></a></p>
3,982,937
<p>To avoid typos, please see my screen captures below, and the red underline. The question says <span class="math-container">$h \rightarrow 0$</span>, so why is there a <span class="math-container">$|h|$</span> in the solution? Mustn't that <span class="math-container">$|h|$</span> be <span class="math-container">$h$</span>?</p> <p><img src="https://i.stack.imgur.com/rweRh.jpg" alt="enter image description here" /></p> <p>Spivak, <em>Calculus</em>, 4th edn., 2008. <a href="https://mathpop.com/" rel="nofollow noreferrer">His website's errata</a> lists no errata for these pages.</p>
Ethan Bolker
72,858
<p>Writing <span class="math-container">$$ 0 &lt; |h| &lt; \delta $$</span> is easier than writing <span class="math-container">$$ -\delta &lt; h &lt; \delta \text{ and } h \ne 0 . $$</span></p>
2,005,604
<p>Solving $\sqrt a + \sqrt {\cos(\sin a)} = 2$</p> <p>I've attempted various manipulations (multiplying by one, squaring, etc.) but cannot find a way to solve for $a$. Anyone have an idea how I can approach this problem? Thanks. </p>
Simply Beautiful Art
272,831
<p>It isn't actually possible to solve for $a$, but we can do some simple fixed-point iteration:</p> <p>$$\sqrt a=2-\sqrt{\cos(\sin a)}\implies a=\left(2-\sqrt{\cos(\sin a)}\right)^2$$</p> <p>We rewrite this as</p> <p>$$a_{n+1}=\left(2-\sqrt{\cos(\sin a_n)}\right)^2$$</p> <p>And start off with a guess $a_0=1$.</p> <p>$$a_1=\left(2-\sqrt{\cos(\sin a_0)}\right)^2=1.401115158$$</p> <p>$$a_2=\left(2-\sqrt{\cos(\sin a_1)}\right)^2=1.579572306$$</p> <p>$$a_3=\left(2-\sqrt{\cos(\sin a_2)}\right)^2=1.600036196$$</p> <p>$$a_4=\left(2-\sqrt{\cos(\sin a_3)}\right)^2=1.599473216$$</p> <p>And this sequence will approach the solution. I would personally use this method if you lack an understanding of derivatives. If you did have an understanding of derivatives, Newton's method would most likely work much better.</p>
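<p>For completeness, here is that iteration in code (my own sketch, not part of the original answer); it reproduces the iterates above and converges to $a \approx 1.5995$:</p>

```python
import math

def step(a):
    # a_{n+1} = (2 - sqrt(cos(sin(a_n))))^2
    return (2.0 - math.sqrt(math.cos(math.sin(a)))) ** 2

a = 1.0
iterates = [a]
for _ in range(50):
    a = step(a)
    iterates.append(a)
```

At the fixed point the original equation $\sqrt a + \sqrt{\cos(\sin a)} = 2$ is satisfied to machine precision.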
605,277
<p>I have an electronics project where I sample two sine waves. I would like to know what the amplitude (peak) and difference in phase is. Actually I just need to know the average product of the two waves.</p> <p>A caveat I have is that the two sine waves have been rectified. (negatives cut off) Here is what I expect the samples to look like:</p> <p><img src="https://i.stack.imgur.com/oUnLb.png" alt="Samples of two rectified sine waves out of phase"></p> <p>I don't have much experience with signal processing. Can you recommend any reading or topics to research?</p>
cactus314
4,997
<p>You don't even know the frequency, because of <a href="https://en.wikipedia.org/wiki/Aliasing" rel="nofollow noreferrer">aliasing</a>. </p> <p>Here a sine wave gets undersampled two different ways. </p> <p>The red curve could be made a perfect fit with <a href="https://en.wikipedia.org/wiki/Fast_Fourier_transform" rel="nofollow noreferrer">Fast Fourier Transform</a>.</p> <p><a href="https://i.stack.imgur.com/80Yg8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/80Yg8.png" alt="enter image description here"></a></p> <hr> <p>I was worried I couldn't reproduce the sine wave with the long stretch of zeros, but <a href="https://en.wikipedia.org/wiki/Discrete_Fourier_transform" rel="nofollow noreferrer">discrete Fourier transform</a> to the rescue!</p> <p><a href="https://i.stack.imgur.com/fonX4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fonX4.png" alt="enter image description here"></a></p>
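<p>A tiny illustration of the ambiguity (my own example, with arbitrary illustrative rates): when sampling at rate $f_s$, a sine at frequency $f$ and one at $f+f_s$ produce identical samples, so the frequency cannot be recovered from the samples alone:</p>

```python
import math

fs = 10.0        # sampling rate in Hz (illustrative choice)
f1 = 3.0         # a 3 Hz sine ...
f2 = f1 + fs     # ... and a 13 Hz sine: an alias of the 3 Hz one at this rate

t = [n / fs for n in range(100)]
s1 = [math.sin(2 * math.pi * f1 * tn) for tn in t]
s2 = [math.sin(2 * math.pi * f2 * tn) for tn in t]
max_gap = max(abs(a - b) for a, b in zip(s1, s2))
```

The two sample sequences agree to floating-point precision.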
2,134,167
<p>So we finished studying chapter 5 of Rudin on differentiation (Mean value theorem, Taylor's theorem etc) and this was given as a homework problem:</p> <p>Let $ f(x) $ be continuously differentiable on $ [0, \infty) $ such that $ f $ satisfies $ f'(x) = \cos(x^2)f(x) $ for all $ x \geq 0 $, with $ f(0) = 1 $. Prove that $ e^{-x} \leq f(x) \leq e^x $ for all $ x \geq 0 $.</p> <p>Clearly, $ x = 0$ then the result is trivial. I tried to use Taylor's theorem to note that if $ x &gt; 0 $, then there exists $ x_1 \in (0,x) $ such that $ f(x) = 1 + xf'(x_1) = 1 + x \cos(x_1^2)f(x_1) $. This is where I'm stuck, since I don't know what to do with the cosine function. Any hint/help/comment is greatly appreciated.</p>
Tsemo Aristide
280,301
<p>I assume $f\geq 0$. Hint: write $h(x)=e^{-x}f(x)$, so that $h'(x)=e^{-x}f(x)(\cos(x^2)-1)\leq 0$,</p> <p>and $l(x)=e^{x}f(x)$, so that $l'(x)=e^{x}f(x)(\cos(x^2)+1)\geq 0$. Since $f\geq 0$, $h$ decreases and $l$ increases, so $h(x)\leq h(0)=1$ and $l(x)\geq l(0)=1$.</p>
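<p>Following this hint, a numerical integration (my own illustration, not part of the hint) shows the bounds holding along the solution of $f' = \cos(x^2)f$, $f(0)=1$:</p>

```python
import math

def solve(x_end, h=1e-3):
    # classical RK4 for f'(x) = cos(x^2) * f(x), f(0) = 1
    deriv = lambda x, f: math.cos(x * x) * f
    x, f = 0.0, 1.0
    pts = [(x, f)]
    for _ in range(int(round(x_end / h))):
        k1 = deriv(x, f)
        k2 = deriv(x + h / 2, f + h * k1 / 2)
        k3 = deriv(x + h / 2, f + h * k2 / 2)
        k4 = deriv(x + h, f + h * k3)
        f += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
        pts.append((x, f))
    return pts

pts = solve(10.0)
# check e^{-x} <= f(x) <= e^x along the trajectory (with a roundoff cushion)
bounds_hold = all(math.exp(-x) - 1e-9 <= f <= math.exp(x) + 1e-9 for x, f in pts)
```
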
2,697,729
<p>Suppose that the probability that you will drop a penny on the ground is 1/5, and the probability that you will find a penny on the ground today is 1/4. If the two events are independent, what is the probability that at least one of the two events will occur?</p> <p>First I tried simply $1/5+1/4$, which was incorrect. Then I tried $( 1/5 \cdot 100)+ (1/4 \cdot 100)$, which was incorrect as well. The correct answer is $2/5$ or $40\%$. </p>
Bensstats
286,966
<p><strong>Note:</strong> $$P(\text{drop or find penny})=P(\text{drop a penny})+P(\text{find a penny})-P(\text{drop and find a penny})$$</p> <p>$$\iff P(D\cup F)=P(D)+P(F)-P(D\cap F)$$ Thus, $$P(D\cup F)=\frac{1}{4}+\frac{1}{5}-\frac{1}{4}\cdot\frac{1}{5}=\frac{2}{5}$$</p> <p>Hope this is helpful!</p>
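<p>The same computation in exact arithmetic (a trivial check of my own), including the equivalent complement route $1-(1-\frac15)(1-\frac14)$:</p>

```python
from fractions import Fraction

p_drop = Fraction(1, 5)
p_find = Fraction(1, 4)

# inclusion-exclusion, using independence for P(D and F)
p_union = p_drop + p_find - p_drop * p_find

# equivalent complement route: 1 - P(neither event happens)
p_union_alt = 1 - (1 - p_drop) * (1 - p_find)
```
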
4,400,261
<p>If <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are <a href="https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Standard_normal_random_vector" rel="nofollow noreferrer">bivariate normal PDFs</a> having correlation coefficients <span class="math-container">$ρ_f$</span> and <span class="math-container">$ρ_g$</span> respectively, what is the correlation coefficient of the bivariate normal distribution <span class="math-container">$h=f*g$</span>, where <span class="math-container">$*$</span> denotes the convolution operator? I've tried searching for the answer but come up dry.</p>
Community
-1
<p>Suppose <span class="math-container">$X \sim \mathcal{N}(\mu_1, \Sigma_1)$</span> has pdf <span class="math-container">$f$</span> and <span class="math-container">$Y \sim \mathcal{N}(\mu_2, \Sigma_2)$</span> has pdf <span class="math-container">$g$</span>. Then if <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent, <span class="math-container">$X + Y \sim \mathcal{N}(\mu_1 + \mu_2, \Sigma_1 + \Sigma_2)$</span> has pdf <span class="math-container">$h$</span>.</p> <p>See <a href="https://math.stackexchange.com/questions/1471656/help-for-convolution-of-two-multivariate-gaussian-pdfs">Help for convolution of two Multivariate Gaussian PDFs</a></p>
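<p>Since the covariance matrices of independent summands add, the question itself has a closed answer (this remark and the sketch below are my own additions): in general <span class="math-container">$\rho_h = \dfrac{\rho_f\sigma_{f,1}\sigma_{f,2} + \rho_g\sigma_{g,1}\sigma_{g,2}}{\sqrt{(\sigma_{f,1}^2+\sigma_{g,1}^2)(\sigma_{f,2}^2+\sigma_{g,2}^2)}}$</span>, and in the unit-variance (standard) case this is simply <span class="math-container">$\rho_h=(\rho_f+\rho_g)/2$</span>. A quick simulation with arbitrary illustrative parameters:</p>

```python
import math
import random

random.seed(42)

def correlated_pair(rho):
    # standard bivariate normal sample with correlation rho
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return z1, rho * z1 + math.sqrt(1 - rho * rho) * z2

rho_f, rho_g = 0.6, -0.2
n = 200_000
u, v = [], []
for _ in range(n):
    x1, x2 = correlated_pair(rho_f)   # X ~ f
    y1, y2 = correlated_pair(rho_g)   # Y ~ g, independent of X
    u.append(x1 + y1)
    v.append(x2 + y2)

mu, mv = sum(u) / n, sum(v) / n
cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n
sdu = math.sqrt(sum((a - mu) ** 2 for a in u) / n)
sdv = math.sqrt(sum((b - mv) ** 2 for b in v) / n)
rho_h = cov / (sdu * sdv)
```

The empirical correlation of the sum agrees with $(\rho_f+\rho_g)/2$ up to sampling noise.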
16,749
<p>I want to remove the <code>Ticks</code> in my code, but when I try to remove the <code>Ticks</code> the numbers are gone as well. I need the numbers without tick marks; <code>Ticks</code> and <code>GridLines</code> should be automatic, and I don't want to use <code>PlotRange</code>.</p> <pre><code>BarChart[{{1, 2, 3}, {4, 5, 6}}, ImageSize -&gt; 400, BarOrigin -&gt; Left, ChartLayout -&gt; "Stacked", ImageSize -&gt; {500, 300}, GridLines -&gt; {Automatic, None}, Ticks -&gt; {{Automatic}, None} , LabelStyle -&gt; Directive[Opacity[1]], TicksStyle -&gt; Directive[Opacity[.3]] , Axes -&gt; {True, False}, AxesStyle -&gt; Opacity[.0], ChartStyle -&gt; {RGBColor[.06, .29, .66], RGBColor[.01, .56, .61], RGBColor[1, .58, 0]}, ChartBaseStyle -&gt; EdgeForm[GrayLevel[.6]]] </code></pre> <p><img src="https://i.stack.imgur.com/s28b2.png" alt="enter image description here"></p>
Chris Degnen
363
<p>Due to <code>AbsoluteOptions</code> reporting <code>Ticks</code> and <code>GridLines</code> in an unuseable fashion, I've had to resort to <code>Rasterizing</code> to find out how many grid lines are automatically being produced.</p> <pre><code>data = {{1, 2, 3}, {4, 5, 6}}; bc = BarChart[data, ImageSize -&gt; 400, BarOrigin -&gt; Left, ChartLayout -&gt; "Stacked", ImageSize -&gt; {500, 300}, GridLines -&gt; {Automatic, None}, Ticks -&gt; {{Automatic}, None}, LabelStyle -&gt; Directive[Bold, Opacity[1], Thick, Black, FontFamily -&gt; "Helvetica"], TicksStyle -&gt; Directive[20, Opacity[.3]], Axes -&gt; {True, False}, AxesStyle -&gt; Opacity[.0], ChartStyle -&gt; {RGBColor[.06, .29, .66], RGBColor[.01, .56, .61], RGBColor[1, .58, 0]}, ChartBaseStyle -&gt; EdgeForm[GrayLevel[.6]]]; rbc = Rasterize[bc]; lines = Min[Last /@ Tally[rbc[[1, 1, -1]](* Top row of pixels *)]]; max = Max[Total /@ data]; interval = Quotient[max, (lines - 1)]; maxline = (lines - 1)*interval; ticks = {Table[{i,(* Frame for padding *)Framed[ToString[i], FrameStyle -&gt; None], 0}, {i, 0, maxline, interval}], None}; BarChart[data, ImageSize -&gt; 400, BarOrigin -&gt; Left, ChartLayout -&gt; "Stacked", ImageSize -&gt; {500, 300}, GridLines -&gt; {Automatic, None}, Ticks -&gt; ticks, LabelStyle -&gt; Directive[Bold, Opacity[1], Thick, Black, FontFamily -&gt; "Helvetica"], TicksStyle -&gt; Directive[20, Opacity[.3]], Axes -&gt; {True, False}, AxesStyle -&gt; Opacity[.0], ChartStyle -&gt; {RGBColor[.06, .29, .66], RGBColor[.01, .56, .61], RGBColor[1, .58, 0]}, ChartBaseStyle -&gt; EdgeForm[GrayLevel[.6]]] </code></pre> <p><img src="https://i.stack.imgur.com/uHSGl.png" alt="enter image description here"></p>
131,322
<p>A knot in $S^3$ is small if its complement does not contain a closed incompressible surface. Is this a generic property for knots, meaning that among all knots with fewer than $n$ crossings, the proportion of small knots goes to $1$ as $n$ goes to infinity?</p>
Robert Bruner
6,872
<p>There are several issues to address here, but let me first point out that the formula for $B_n$ at the top of p. 71 of our Memoir has a typo: it should say $B_n = (Z/p^{j+1})^s \oplus (Z/p^j)^{p^2-p-s} $ where $0 &lt; s \leq p^2-p$ and $n = 2j(p^2-p)+2s+2p-3$. The point is that there should be $(p^2-1) - (p-1) = p^2-p$ summands: there are $p^2-1$ in all, and $A_n$ has already accounted for $p-1$ of them.</p> <p>Second, the image of $Z[x]/(1+x+x^2+x^3)$ in $Z[x]/(1+x) \times Z[x]/(1+x^2)$ has index 2 under the map you describe: $a+bx+cx^2$ maps to $(a-b+c, a-c+bx)$, and $a-b+c \equiv a-c+b$ mod 2. The Chinese Remainder theorem is a theorem about PIDs and $Z[x]$ is not a PID, so a bit more care is needed.</p> <p>Third, the groups $ku_{2i-1}BC_4$ are (writing a+b for $Z/a \oplus Z/b$, etc.): 4, 8+2, 16+2+2, 32+4+2, 64+4+4, etc. For $BC_9$ you'd get 9, 9+9, 27+9+3, 27+27+3+3, 81+27+3+3+3, 81+81+3+3+3+3, 243+81+3+3+3+3+3, 243+243+3+3+3+3+3+3, 729+243+9+3+...+3, etc. The formulas for A_n and B_n on p.71 of the Memoir are just saying this in general.</p> <p>Fourth, as a sanity check, $ku_1BC_n = Z[x]/(1+x+\cdots+x^{n-1},1-x)$ does give $ku_1BC_n = Z/n$.</p> <p>Fifth, I don't recall how I worked this out (it has been 10 years!) but it is an instance of an interesting general question about Smith Normal Form: what invariants of an integer matrix are needed to predict the Smith Normal Form of powers of that matrix? (Any PID will do here, not just the integers.) The work on Horn's inequalities tells us the possible values of SNF(AB) in terms of SNF(A) and SNF(B). The closer det(A) and det(B) are to being relatively prime, the narrower the range of possibilities, until SNF(AB) = SNF(A)SNF(B) in the case where they are relatively prime. (This is a triviality: an extension whose subgroup and quotient group have relatively prime orders must split.) With SNF(A^i), we are at the opposite extreme. 
Nonetheless, someone who understands the work on Horn's inequalities might be able to give a precise answer. It'd be interesting to see.</p>
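<p>The index-2 computation in the second point is easy to check numerically: writing the image of <span class="math-container">$a+bx+cx^2$</span> as <span class="math-container">$(a-b+c,\; a-c+bx)$</span>, the integer <span class="math-container">$a-b+c$</span> and the coefficient sum <span class="math-container">$(a-c)+b$</span> always share parity, since their difference is <span class="math-container">$2(c-b)$</span>. A small sketch (the function name is mine):</p>

```python
def image(a, b, c):
    """Image of a + b*x + c*x^2 under the map into
    Z[x]/(1+x) x Z[x]/(1+x^2), namely (a-b+c, a-c + b*x).
    The second component is returned as (constant, x-coefficient)."""
    return a - b + c, (a - c, b)
```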
1,490,219
<blockquote> <p>Given $S=\displaystyle \bigcap^{\infty}_{k=1}\left(1-\frac{1}{k}, 1+\frac{1}{k}\right)$, what is $\sup(S)$ and $\max(S)$?</p> </blockquote> <p>I reasoned that this is empty, since as $k$ goes to infinity, $\frac{1}{k}$ goes to $0$. So ultimately, the intersection of all the intervals in $S$ is $(1,1)$, which should be empty. But the solution says the supremum and maximum do exist and both equal $1$. Why is this?</p>
Community
-1
<p>It is easy to observe that $S=\{1\}$: for every $k$ we have $1-\frac{1}{k} &lt; 1 &lt; 1+\frac{1}{k}$, so $1 \in S$; while if $x \neq 1$, then $\frac{1}{k} &lt; |x-1|$ for some $k$, so $x \notin S$. Hence $\inf S = \sup S = \max S = 1.$</p>
2,957,315
<p><span class="math-container">$j_{1,1}$</span> denotes the first zero of the first Bessel function of the first kind. (That's a lot of firsts!) It's approximately equal to <span class="math-container">$3.83$</span>. My question is, is there any closed form expression for its value? Even a infinite series or infinite product that yields it would be good.</p> <p>I ask because this value is used in physics, in the context of diffraction of light through a circular aperture, and students often make the mistake of thinking that the number just pops out of nowhere.</p>
mathstackuser12
361,383
<p>I like Claude's approach and approximation. However for variety (and possibly interest) here is another approach which I pinched from Watson who pinched from Rayleigh who pinched from Euler. And while Claude's approximation converges from above, here we'll approach from below. First we can write the Bessel function as an infinite product with the form <span class="math-container">$${{J}_{1}}\left( z \right)=\frac{1}{2}z\prod\limits_{n=1}^{\infty }{\left( 1-\frac{{{z}^{2}}}{j_{1,n}^{2}} \right)}$$</span> Taking the logarithmic derivative, we have then <span class="math-container">$$\frac{1}{{{J}_{1}}\left( z \right)}{{J}_{1}}'\left( z \right)=\frac{1}{z}-2z\sum\limits_{n=1}^{\infty }{\frac{1}{j_{1,n}^{2}}\frac{1}{1-{{z}^{2}}/j_{1,n}^{2}}}$$</span> Converting this into a series we find <span class="math-container">$$\frac{1}{{{J}_{1}}\left( z \right)}{{J}_{1}}'\left( z \right)=\frac{1}{z}-2z\sum\limits_{n=1}^{\infty }{\frac{1}{j_{1,n}^{2}}\sum\limits_{m=0}^{\infty }{\frac{{{z}^{2m}}}{j_{1,n}^{2m}}}}$$</span> And so we recover the definition of Rayleigh’s function <span class="math-container">$$\frac{1}{{{J}_{1}}\left( z \right)}{{J}_{1}}'\left( z \right)=\frac{1}{z}-2z\sum\limits_{m=0}^{\infty }{{{z}^{2m}}{{\sigma }_{2m+2}}}$$</span> where</p> <p><span class="math-container">$${{\sigma }_{2m+2}}=\sum\limits_{n=1}^{\infty }{\frac{1}{j_{1,n}^{2m+2}}}$$</span> Using well known recurrence relationships we can simplify this to <span class="math-container">$${{J}_{2}}\left( z \right)=2{{J}_{1}}\left( z \right)\sum\limits_{m=0}^{\infty }{{{z}^{2m+1}}{{\sigma }_{2m+2}}}$$</span> Substituting the series representation for the Bessel functions we have <span class="math-container">$$\sum\limits_{k=0}^{\infty }{\frac{{{\left( -1 \right)}^{k}}{{z}^{2k}}}{{{2}^{2k+2}}k!\left( k+2 \right)!}}=\sum\limits_{n=0}^{\infty }{\frac{{{\left( -1 \right)}^{n}}}{{{2}^{2n}}n!\left( n+1 \right)!}}\sum\limits_{m=0}^{\infty }{{{\sigma }_{2m+2}}{{z}^{2m+2n}}}$$</span> And reversing the
summation, i.e. <span class="math-container">$\sum\limits_{k=0}^{\infty }{\sum\limits_{j=0}^{\infty }{{{a}_{k,j}}}}=\sum\limits_{j=0}^{\infty }{\sum\limits_{k=0}^{j}{{{a}_{k,j-k}}}}$</span> one finds <span class="math-container">$$\sum\limits_{k=0}^{\infty }{\frac{{{\left( -1 \right)}^{k}}{{z}^{2k}}}{{{2}^{2k+2}}k!\left( k+2 \right)!}}=\sum\limits_{m=0}^{\infty }{{{z}^{2m}}}\sum\limits_{n=0}^{m}{\frac{{{\left( -1 \right)}^{n}}}{{{2}^{2n}}n!\left( n+1 \right)!}{{\sigma }_{2m-2n+2}}}$$</span> Equating coefficients <span class="math-container">$$\sum\limits_{n=0}^{k}{\frac{{{\left( -1 \right)}^{n}}{{\sigma }_{2k-2n+2}}}{{{2}^{2n}}n!\left( n+1 \right)!}}=\frac{{{\left( -1 \right)}^{k}}}{{{2}^{2k+2}}k!\left( k+2 \right)!}$$</span> It’s very easy to create the sigma numbers in Mathematica (and even by hand I suppose), so explicitly we have <span class="math-container">$${{\sigma }_{2}}=\frac{1}{8}, {{\sigma }_{4}}=\frac{1}{192}, {{\sigma }_{6}}=\frac{1}{3072}, {{\sigma }_{8}}=\frac{1}{46080},…,{{\sigma }_{20}}=\frac{777013}{361634098839552000}$$</span> Now since <span class="math-container">$0&lt;{{j}_{1,1}}&lt;{{j}_{1,2}}&lt;{{j}_{1,3}}...$</span> then <span class="math-container">${{\sigma }_{M}}=\sum\limits_{n=1}^{\infty }{\frac{1}{j_{1,n}^{M}}}&gt;\frac{1}{j_{1,1}^{M}}$</span> and so <span class="math-container">$j_{1,1}^{{}}&gt;{{\left( {{\sigma }_{M}} \right)}^{-1/M}}$</span>. You can bound this the other way as well (see Watson) but what we have is enough to get good approximations. So for a somewhat reasonable example (about 2 decimal places when rounded)</p> <p><span class="math-container">$${{j}_{1,1}}\simeq {{2}^{5/4}}{{3}^{1/4}}{{5}^{1/8}}$$</span> </p> <p>a more unreasonable example that yields about 6 places, <span class="math-container">$${{j}_{1,1}}\simeq 2{{\left( \frac{1}{777013} \right)}^{1/20}}{{2}^{7/10}}{{3}^{7/20}}{{5}^{3/20}}{{7}^{1/20}}{{11}^{1/20}}$$</span> </p>
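<p>The bound <span class="math-container">$j_{1,1}&gt;\sigma_M^{-1/M}$</span> is easy to play with numerically. A sketch using the <span class="math-container">$\sigma$</span> values listed above (the names are mine):</p>

```python
# Rayleigh sigma numbers from above: sigma_M is the sum of j_{1,n}^(-M)
# over the positive zeros of J_1.
sigma = {2: 1 / 8, 4: 1 / 192, 6: 1 / 3072, 8: 1 / 46080,
         20: 777013 / 361634098839552000}

def lower_bound(M):
    """j_{1,1} > sigma_M^(-1/M); the bound tightens as M grows."""
    return sigma[M] ** (-1.0 / M)
```

For <span class="math-container">$M=8$</span> this is exactly <span class="math-container">$2^{5/4}3^{1/4}5^{1/8}\approx 3.8276$</span>, since <span class="math-container">$46080=2^{10}\cdot 3^{2}\cdot 5$</span>.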
4,413,641
<p>I'm trying to understand a proof that given a vector space <span class="math-container">$V$</span> over the field <span class="math-container">$F$</span> and <span class="math-container">$n$</span> vectors <span class="math-container">$v_1, \ldots, v_n$</span>, <span class="math-container">$\mathrm{span}(v_1, \ldots, v_n)$</span> is the smallest subspace containing them.</p> <p>I'm fine with most of the proof, including how it is proved in Axler, but the lecture notes I'm working through include a comment I don't fully understand.</p> <p>I'm ok with showing that <span class="math-container">$\mathrm{span}(v_1, \ldots, v_n)$</span> is a subspace, that it contains <span class="math-container">$v_1, \ldots, v_n$</span>, and that if <span class="math-container">$S$</span> is some other subspace of <span class="math-container">$V$</span> containing <span class="math-container">$v_1, \ldots, v_n$</span>, then <span class="math-container">$\mathrm{span}(v_1, \ldots, v_n) \subset S$</span>.</p> <p>The comment made in the lecture notes is: how do we know that this smallest subspace containing <span class="math-container">$v_1, \ldots, v_n$</span> actually exists? Axler does not provide a proof of this fact, suggesting that it isn't necessary because by definition, &quot;smallest&quot; means that any other subspace containing those vectors contains their span as a subset.</p> <p>Is this something I need to show for a complete proof? If so, I cannot figure out how exactly I should go about proving this. The idea, I think, is taking the intersection of all subspaces containing <span class="math-container">$v_1, \ldots, v_n$</span>, and I know that the arbitrary intersection of subspaces is a subspace, but is it necessary to show this?</p>
Mr.Gandalf Sauron
683,801
<p>The confusion you are having is that you are checking the equivalence of one definition with itself.</p> <p>If you define <span class="math-container">$span\{v_{1},v_{2},...,v_{n}\}$</span> as the smallest subspace containing <span class="math-container">$v_{1},...v_{n}$</span>, then by definition it is the intersection of all subspaces containing <span class="math-container">$v_{1},...v_{n}$</span>. You already say that you can prove that an arbitrary intersection of subspaces is a subspace, so this intersection is itself a subspace containing the <span class="math-container">$v_{i}$</span>, and it is contained in every subspace containing them. Hence <span class="math-container">$\displaystyle\text{span}\{v_{1},v_{2},...,v_{n}\}=\bigcap\{\text{S is a subspace of V containing}\,v_{1},...v_{n}\}$</span>.</p> <p>Now if you define the span as the set of all linear combinations of <span class="math-container">$\{v_{i}\}$</span>, then you can show that this set is actually the smallest subspace containing <span class="math-container">$\{v_{1},v_{2},...,v_{n}\}$</span> and hence equals the intersection.</p> <p>That is, you need to check the equivalence of two definitions:</p> <ol> <li><p><span class="math-container">$\displaystyle\text{span}\{v_{1},..,v_{n}\}=\{\sum_{i=1}^{n}c_{i}v_{i}\,\,,c_{i}\in\mathbb{F}\}$</span></p> </li> <li><p><span class="math-container">$span\{v_{1},...v_{n}\}$</span> is the smallest subspace containing <span class="math-container">$v_{1},...v_{n}$</span>.</p> </li> </ol> <p>In fact you can define this for an arbitrary collection of vectors <span class="math-container">$\{v_{i}\}_{i\in I}$</span> where <span class="math-container">$I$</span> is some arbitrary indexing set.</p> <p>Then you can again check the equivalence of two definitions:</p> <ol> <li><p><span class="math-container">$\displaystyle\text{span}\{v_{i}\}_{i\in I}=\{\sum_{i\in I}c_{i}v_{i}\,\,,c_{i}\in\mathbb{F},c_{i}=0\,\text{for all but finitely many}\,i\in I\}=\{\sum_{\text{finite}}c_{i}v_{i}\,\,,c_{i}\in\mathbb{F}\}$</span>.</p> </li> <li><p><span class="math-container">$span\{v_{i}\}_{i\in I}$</span> is the smallest subspace containing <span class="math-container">$\{v_{i}\}_{i\in I}$</span>, which is equal to <span class="math-container">$\displaystyle\bigcap\{\text{S is a subspace of V containing}\,\{v_{i}\}_{i\in I}\}$</span>.</p> </li> </ol> <p>So you see that if you prove <span class="math-container">$1\implies 2$</span> then it solves your &quot;existence&quot; crisis. And if you prove <span class="math-container">$2\implies 1$</span> then it means that if such a space exists then it is unique and is given by <span class="math-container">$1$</span>.</p>
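<p>For a finite field the two definitions can even be compared exhaustively. A small sketch in <span class="math-container">$\mathbb{F}_2^3$</span> (all names are mine, and this is an illustration, not a proof): it computes the span once as the set of linear combinations and once as the intersection of all subspaces containing the given vectors, and the two agree.</p>

```python
from itertools import product, combinations

# Vectors in GF(2)^3 as tuples; vector addition is componentwise mod 2.
V = list(product((0, 1), repeat=3))
ZERO = (0, 0, 0)

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

def is_subspace(S):
    # Over GF(2), containing 0 and closure under addition suffice.
    return ZERO in S and all(add(u, v) in S for u in S for v in S)

def span_combinations(vs):
    """Definition 1: the set of all linear combinations of vs."""
    out = {ZERO}
    for v in vs:
        out |= {add(u, v) for u in out}
    return out

def span_intersection(vs):
    """Definition 2: intersection of all subspaces containing vs."""
    out = set(V)
    for r in range(1, len(V) + 1):
        for S in map(set, combinations(V, r)):
            if is_subspace(S) and set(vs) <= S:
                out &= S
    return out
```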
3,877,652
<p>Can anyone help me with this? There are some examples in my book, but this exercise problem seems to be a little difficult for me to approach:</p> <p>Given a set <span class="math-container">$\{A_k|k\in\mathbb{N}\}$</span>: <span class="math-container">$$A_k=\bigg\{x\in\mathbb{R}\bigg|\space\space 1-\frac{1}{k}&lt;x&lt;1+\frac{1}{k}\bigg\}$$</span></p> <p>Find: <span class="math-container">$$\bigcup_{k\in \mathbb{N}} A_k,\text{and} \bigcap_{k\in \mathbb{N}} A_k$$</span></p> <p><strong>My thoughts:</strong></p> <p>I think that <span class="math-container">$\bigcup_{k\in \mathbb{N}} A_k=(0,2)$</span>, since the range of the inequality gets smaller from <span class="math-container">$(0,2)$</span> to <span class="math-container">$(0.5,1.5), (0.666, 1.3333)$</span> etc.. However I have trouble showing this in a mathematical way. Can anyone provide me with some assistance, as I am having trouble reasoning with set logic?</p> <p>Additionally for <span class="math-container">$\bigcap_{k\in \mathbb{N}} A_k$</span>, eventually as <span class="math-container">$k\rightarrow \infty$</span> the inequality becomes <span class="math-container">$1&lt;x&lt;1$</span>, which does not make sense, and thus the intersection must be <span class="math-container">$\bigcap_{k\in \mathbb{N}} A_k =\emptyset$</span>?</p>
Greg Martin
16,078
<p>If you want to show that <span class="math-container">$\bigcup_{k\in\Bbb N} A_k = (0,2)$</span>, then you have to prove two things:</p> <ul> <li>If <span class="math-container">$x\in \bigcup_{k\in\Bbb N} A_k$</span>, then <span class="math-container">$x\in (0,2)$</span>. In other words, if <span class="math-container">$x\in A_k$</span> for some <span class="math-container">$k\in\Bbb N$</span>, then <span class="math-container">$x\in (0,2)$</span>.</li> <li>If <span class="math-container">$x\in (0,2)$</span>, then <span class="math-container">$x\in \bigcup_{k\in\Bbb N} A_k$</span>. In other words, if <span class="math-container">$x\in (0,2)$</span>, then <span class="math-container">$x\in A_k$</span> for some <span class="math-container">$k\in\Bbb N$</span>.</li> </ul> <p>The first item should be easy, and the second item not too bad.</p> <p>A similar structure would hold for <span class="math-container">$\bigcap_{k\in\Bbb N} A_k$</span> (where each &quot;for some <span class="math-container">$k\in\Bbb N$</span>&quot; would be replaced by &quot;for all <span class="math-container">$k\in\Bbb N$</span>&quot;). But be warned: <span class="math-container">$\bigcap_{k\in\Bbb N} A_k$</span> is not empty!</p>
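<p>Both directions are easy to explore numerically, since the intervals are nested: <span class="math-container">$A_1=(0,2)$</span> already contains every <span class="math-container">$A_k$</span>, so <span class="math-container">$k=1$</span> is a witness for the union, while only <span class="math-container">$x=1$</span> belongs to every <span class="math-container">$A_k$</span>. A quick sketch (an illustration, not a proof):</p>

```python
def in_A(x, k):
    """Is x in A_k = (1 - 1/k, 1 + 1/k)?"""
    return 1 - 1 / k < x < 1 + 1 / k
```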
2,882,696
<p>$a,b,x$ are elements of a group .</p> <p>$x$ is the inverse of $a$.</p> <p>Here is my attempt to prove it :-</p> <p>$a\cdot b = e$</p> <p>$x\cdot (a\cdot b) = x\cdot e$</p> <p>$(x\cdot a)\cdot b = x$</p> <p>$e\cdot b = x$</p> <p>$b = x$</p> <p>Are my steps correct? What I wanted to prove is that if $ab = e$, then $ba = e$</p>
wjmolina
25,134
<p>Let $a,b\in G$ be such that $ab=e$, and let $x$ be the inverse of $a$. Then $ab=e=ax$, so $b=x$.</p> <p><strong>Edit</strong>: If $ab=e$, then</p> <p>$$\begin{align}ba&amp;=ba\\&amp;=bea\\&amp;=b(ab)a\\&amp;=(ba)(ba).\end{align}$$</p> <p>Let $c$ be the inverse of $ba$. Then $ba=(ba)(ba)$ implies that $c(ba)=c(ba)(ba)$, which implies that $e=ba$.</p>
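<p>As a quick sanity check (an illustration, not a proof), one can verify the statement exhaustively in the symmetric group <span class="math-container">$S_3$</span>, which is non-abelian:</p>

```python
from itertools import permutations

# Elements of S_3 as tuples; composition (p∘q)(i) = p(q(i)).
perms = list(permutations(range(3)))
e = (0, 1, 2)  # the identity permutation

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))
```

Whenever <span class="math-container">$a\circ b=e$</span>, also <span class="math-container">$b\circ a=e$</span>, and each of the six elements has exactly one such partner.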
2,882,696
<p>$a,b,x$ are elements of a group .</p> <p>$x$ is the inverse of $a$.</p> <p>Here is my attempt to prove it :-</p> <p>$a\cdot b = e$</p> <p>$x\cdot (a\cdot b) = x\cdot e$</p> <p>$(x\cdot a)\cdot b = x$</p> <p>$e\cdot b = x$</p> <p>$b = x$</p> <p>Are my steps correct? What I wanted to prove is that if $ab = e$, then $ba = e$</p>
Larry B.
364,722
<p>It implies that $x = b$. </p> <p>Your reasoning is sound, and it is this exact reasoning that proves a group element's inverse is unique. You also proved this using only the identity, associativity, and inverse laws. Good job!</p>
2,882,696
<p>$a,b,x$ are elements of a group .</p> <p>$x$ is the inverse of $a$.</p> <p>Here is my attempt to prove it :-</p> <p>$a\cdot b = e$</p> <p>$x\cdot (a\cdot b) = x\cdot e$</p> <p>$(x\cdot a)\cdot b = x$</p> <p>$e\cdot b = x$</p> <p>$b = x$</p> <p>Are my steps correct? What I wanted to prove is that if $ab = e$, then $ba = e$</p>
Community
-1
<p>From $a\cdot b=e$ you draw $b=a^{-1}$, and the inverse is unique.</p>
1,707,929
<p>How do I solve for the object distance to each receiver for three radar receivers on the ground, each the same distance from the other, and each receiving echoes, reflected from an object overhead, of a signal pulse from a single transmitter located on the ground at the exact center of the receivers? </p> <p>Transmitter is located at the exact center of an equilateral triangle bounded by the three receivers. The transmitter is t. The three receivers are a, b, c. The overhead object is o.</p> <p>The physical distance between each pair of receivers is known. (a—b, b—c, c—a ) The times it takes the signal to travel from the transmitter to the object and echo back to each receiver are known (t—o—a, t—o—b, t—o—c). These three times will be equal if the object is directly above the transmitter. The three times will be different if the object is at a different distance from each receiver.</p> <p>The signal from transmitter, reflected from object, and received by receivers, always travels at a constant speed 's'.</p> <p>Ground is a plane. Any environmental factor (such as air density) that affects speed of transmission pulse is a constant.</p> <p>=================================================== UPDATE:</p> <p>I implemented the solution in the answer below into 'C' to test it. The x,y,z coordinate solution for case 1 &amp; 2 looks correct. But the x coordinate is incorrectly 0 for case 3 &amp; 4.
It should be &lt; 0 for case 3, and > 0 for case 4.</p> <pre><code>int main() </code></pre> <p>{ printf( "\n\n CASE 1: object directly obove TX:"); generate_3D_vector( 10, 10, 10 ); </p> <pre><code>printf( "\n\n CASE 2: object closer to RX-a:"); generate_3D_vector( 9, 10, 10 ); printf( "\n\n CASE 3: object closer to RX-b:"); generate_3D_vector( 10, 9, 10 ); printf( "\n\n CASE 4: object closer to RX-c:"); generate_3D_vector( 10, 10, 9 ); </code></pre> <p>}</p> <p>void generate_3D_vector( float ra, float rb, float rc )</p> <p>{ float x, y, z;</p> <pre><code>x = sqrt( 3.0 ) * ( rb - rc )*( 15 / 24 - 1 / 3 * ra*( ra - rb - rc ) + 2 / 3 * rb*rc ) / ( ra + rb + rc ); y = ( 1.0/2 *ra*( 1 / 8 - ra*( rb + rc ) + rb*rb + rc*rc ) - 1 / 2 * ( ra - rb - rc ) ) / ( ra + rb + rc ); z = sqrt( ra*ra*( ra*ra - 2 - 4 * ( x*x + y*y - y ) ) + ( pow( 2 * y - 1, 2 ))) / ( 2 * ra ); printf( "\n ra rb rc = %.f %.f %.f ==&gt; x y z1 = %f %f %f ", ra, rb, rc, x, y, z ); </code></pre> <p>}</p> <p>=================================================== UPDATE 4/18/2016:</p> <p>I implemented the latest x,y,z solution from the formulae, but it didn't work. Next, I copy and pasted, as is, the three AWK equations for x,y,z and got the results below, with problems: x looks ok. Y looks ok except for the 9,10,10 case in which y at -8 seems incorrect. 
z in each case crashes.</p> <p>Here is my 'C' source code:</p> <pre><code>int main() </code></pre> <p>{</p> <pre><code>printf( "\n\n CASE 1: object directly obove TX:" ); generate_3D_vector( 10, 10, 10 ); printf( "\n\n CASE 2: object closer to RX-a:" ); generate_3D_vector( 9, 10, 10 ); printf( "\n\n CASE 3: object closer to RX-b:" ); generate_3D_vector( 10, 9, 10 ); printf( "\n\n CASE 4: object closer to RX-c:" ); generate_3D_vector( 10, 10, 9 ); </code></pre> <p>}</p> <p>void generate_3D_vector( float ra, float rb, float rc ) { float x, y, z; float ra2 = ra*ra; float ra4 = pow( ra, 4 );</p> <pre><code>float C1 = -sqrt( 3.0 ) / 6.0; float C2 = -0.5; float r_a = ra; float r_b = rb; float r_c = rc; x = C1 * ( r_b - r_c ) * ( r_a*( r_a - r_b - r_c ) + 2.0 * r_b*r_c + 1 ) / ( r_a - r_b - r_c ); y = C2 * ( r_a*( r_a*( r_b + r_c ) - r_b*r_b - r_c*r_c + 2.0 ) - r_b - r_c ) / ( r_a - r_b - r_c ); z = sqrt( r_a*r_a * ( r_a*r_a - 4.0 * ( x*x + y*y + y ) - 2 ) + 4 * ( y*y + y ) + 1 ) / ( 2 * r_a ); printf( "\n ra rb rc = %.f %.f %.f ==&gt; x y z1 = %f %f %f ", ra, rb, rc, x, y, z ); </code></pre> <p>}</p> <p>AND HERE IS MY 'C' OUTPUT........................</p> <p>CASE 1: object directly obove TX: ra rb rc = 10 10 10 ==> x y z1 = 0.000000 0.000000 4.950000</p> <p>CASE 2: object closer to RX-a: ra rb rc = 9 10 10 ==> x y z1 = 0.000000 -8.272727 -nan(ind) &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt; I would expect y to be greater (closer to 0).</p> <p>CASE 3: object closer to RX-b: ra rb rc = 10 9 10 ==> x y z1 = -2.918826 5.055555 -nan(ind)</p> <p>CASE 4: object closer to RX-c: ra rb rc = 10 10 9 ==> x y z1 = 2.918826 5.055555 -nan(ind)</p>
Nominal Animal
318,422
<p>Rewritten on 2016-04-19. The situation is as follows:</p> <p><a href="https://i.stack.imgur.com/I3fXe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I3fXe.png" alt="Three radars"></a><br> <sub>(source: <a href="http://www.nominal-animal.net/answers/radars.png" rel="nofollow noreferrer">nominal-animal.net</a>)</sub> </p> <p>Let us use a coordinate system where the transmitter <span class="math-container">$t$</span> is at origin, and each receiver is at unit distance from the transmitter, <span class="math-container">$d_t = 1$</span>: <span class="math-container">$$\begin{align} d_t &amp;= 1 \\ d_e &amp;= \sqrt{3} \\ \vec{a} = (x_a, y_a, z_a) &amp;= (0, 1, 0) \\ \vec{b} = (x_b, y_b, z_b) &amp;= (-\sqrt{3/4}, -1/2, 0) \approx (-0.8660254, -0.5, 0 ) \\ \vec{c} = (x_c, y_c, z_c) &amp;= (\sqrt{3/4}, -1/2, 0) \approx (0.8660254, -0.5, 0 ) \end{align}$$</span> If <span class="math-container">$\tau$</span> is the duration a signal takes from the transmitter to any of the receivers (when there are no reflections; i.e. the signal travels the unit distance), we can derive the signal path lengths <span class="math-container">$r$</span> from the time it takes for a signal to reflect from the object and reach a receiver: <span class="math-container">$$\begin{align} r_a &amp;= \frac{d_a}{\tau} \ge 1 \\ r_b &amp;= \frac{d_b}{\tau} \ge 1 \\ r_c &amp;= \frac{d_c}{\tau} \ge 1 \end{align}$$</span> They must all be at least one, since there is no shorter path from the transmitter to each receiver. We shall use these signal path lengths to determine the location of the object reflecting the radar signal.</p> <p>The signal path lengths can be computed using <a href="https://en.wikipedia.org/wiki/Pythagorean_theorem" rel="nofollow noreferrer">Pythagorean theorem</a>. The object is at <span class="math-container">$(x, y, z)$</span>, and the radar signal must first travel from the transmitter to the object. From the object, the signal must travel to each receiver. 
Thus, we have a set of three equations: <span class="math-container">$$\left \lbrace \begin{align} r_a &amp;= \sqrt{x^2 + y^2 + z^2} + \sqrt{(x - x_a)^2 + (y - y_a)^2 + (z - z_a)^2 } \\ r_b &amp;= \sqrt{x^2 + y^2 + z^2} + \sqrt{(x - x_b)^2 + (y - y_b)^2 + (z - z_b)^2 } \\ r_c &amp;= \sqrt{x^2 + y^2 + z^2} + \sqrt{(x - x_c)^2 + (y - y_c)^2 + (z - z_c)^2 } \end{align} \right.$$</span> Using Maple to solve the above for <span class="math-container">$(x, y, z)$</span> we get two solutions, which only differ by the sign of <span class="math-container">$z$</span>. Since our transmitter and receivers are on the same plane, we can only detect objects in one half-space (either nonnegative <span class="math-container">$z$</span> or nonpositive <span class="math-container">$z$</span>). For simplicity, we select <span class="math-container">$z \ge 0$</span>: <span class="math-container">$$\left \lbrace \begin{align} x &amp;= \frac{ -\sqrt{1/12} \, (r_b - r_c) \left( r_a (r_a - r_b - r_c) - 2 r_b r_c - 3 \right)}{r_a + r_b + r_c} \\ y &amp;= \frac{ -\tfrac{1}{2} \left( r_a \left( r_a (r_b + r_c) - r_b^2 - r_c^2 + 2 \right) - (r_b + r_c) \right)}{r_a + r_b + r_c} \\ z &amp;= \sqrt{ \tfrac{1}{4} r_a^2 - x^2 - y (y - 1) + \left( y (y - 1) + \tfrac{1}{4} \right) / r_a^2 - \tfrac{1}{2} } \end{align} \right.$$</span> Note that when the object is very close to the ground, the argument of the square root above may be negative due to numerical errors. If that occurs, just use <span class="math-container">$z = 0$</span> instead.
(If it is negative but not close to zero, there must be something wrong with the assumed signal path lengths, perhaps an incorrect signal.)</p> <p>To be exact, the solution Maple finds for <span class="math-container">$z$</span> is <span class="math-container">$$z = \frac{\sqrt{terms \; / \; 12}}{r_a + r_b + r_c}$$</span> where <span class="math-container">$$terms = - 4 r_a^4 r_b^2 - 4 r_a^4 r_b r_c - 4 r_a^4 r_c^2 + 8 r_a^3 r_b^3 + 4 r_a^3 r_b^2 r_c + 4 r_a^3 r_b r_c^2 + 8 r_a^3 r_c^3 - 4 r_a^2 r_b^4 + 4 r_a^2 r_b^3 r_c - 12 r_a^2 r_b^2 r_c^2 + 4 r_a^2 r_b r_c^3 - 4 r_a^2 r_c^4 - 4 r_a r_b^4 r_c + 4 r_a r_b^3 r_c^2 + 4 r_a r_b^2 r_c^3 - 4 r_a r_b r_c^4 - 4 r_b^4 r_c^2 + 8 r_b^3 r_c^3 - 4 r_b^2 r_c^4 + 3 r_a^4 - 12 r_a^3 r_b - 12 r_a^3 r_c + 30 r_a^2 r_b^2 + 30 r_a^2 r_c^2 - 12 r_a r_b^3 - 12 r_a r_c^3 + 3 r_b^4 - 12 r_b^3 r_c + 30 r_b^2 r_c^2 - 12 r_b r_c^3 + 3 r_c^4 - 30 r_a^2 + 12 r_a r_b + 12 r_a r_c - 30 r_b^2 + 12 r_b r_c - 30 r_c^2 + 27$$</span> Substituting earlier <span class="math-container">$x$</span> and <span class="math-container">$y$</span> into the earlier formula for <span class="math-container">$z$</span> yields this same expression, so the earlier formula for <span class="math-container">$z$</span> is mathematically correct, too. (Note that I moved all terms inside the square root in the earlier formula, including the divisor. This makes the case where the argument of the square root is negative much easier to manage, in cases where you cannot fully trust the radar signals. In other words, if the argument is negative and not close to zero, there was something wrong with the radar signal measurements.)</p> <p>It might be possible to simplify and reorder this latter expression, minimizing numerical errors like <a href="https://en.wikipedia.org/wiki/Loss_of_significance" rel="nofollow noreferrer">cancellation</a>, but I did not find a way to do so in Maple. 
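<p>As a cross-check of the closed form, here is a Python transcription of the same formulas (the names are mine; the forward model is just the Pythagorean path-length computation from earlier), which can be round-tripped against arbitrary object positions:</p>

```python
import math

# Receivers at unit distance from the transmitter (at the origin).
SQ34 = math.sqrt(0.75)
RECEIVERS = ((0.0, 1.0, 0.0), (-SQ34, -0.5, 0.0), (SQ34, -0.5, 0.0))

def forward(obj):
    """Signal path lengths transmitter -> object -> receiver, unit geometry."""
    x, y, z = obj
    d_t = math.sqrt(x * x + y * y + z * z)
    return tuple(d_t + math.sqrt((x - rx) ** 2 + (y - ry) ** 2 + (z - rz) ** 2)
                 for rx, ry, rz in RECEIVERS)

def locate(r_a, r_b, r_c):
    """Closed-form (x, y, z >= 0) from the three signal path lengths."""
    s_abc = r_a + r_b + r_c
    s_bc = r_b + r_c
    x = -math.sqrt(1.0 / 12.0) * (r_b - r_c) \
        * (r_a * (r_a - s_bc) - 2.0 * r_b * r_c - 3.0) / s_abc
    y = -0.5 * (r_a * (r_a * s_bc - r_b * r_b - r_c * r_c + 2.0) - s_bc) / s_abc
    t1 = r_a * r_a
    t2 = y * (y - 1.0)
    s = 0.25 * t1 - x * x - t2 + (t2 + 0.25) / t1 - 0.5
    z = math.sqrt(s) if s > 0.0 else 0.0  # clamp tiny negatives near the ground
    return (x, y, z)
```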
Testing indicates it is not that important, either: numerical errors in <span class="math-container">$z$</span> are larger than in <span class="math-container">$x$</span> or <span class="math-container">$y$</span>, but quite acceptable.</p> <hr> <p>Here are three awk scripts that can be used for testing.</p> <p>To create random points, <strong>random.awk</strong>:</p> <pre><code>#!/usr/bin/awk -f BEGIN { if (ARGV[1] == "-h" || ARGV[1] == "--help" || ARGV[1] == "") { printf "\n" &gt; "/dev/stderr" printf "Usage: %s [ -h | --help ]\n", ARGV[0] &gt; "/dev/stderr" printf " %s POINTS [ RANGE [ SEED ]]\n", ARGV[0] &gt; "/dev/stderr" printf "\n" &gt; "/dev/stderr" printf "Default range is 10, and seed is automatically randomized.\n" &gt; "/dev/stderr" printf "\n" &gt; "/dev/stderr" exit(0) } N = int(ARGV[1]) if (N &lt; 1) { printf "%s: Invalid number of points.\n", ARGV[1] &gt; "/dev/stderr" exit(1) } if (length("" ARGV[2]) &gt; 0) { range = 1.0 * ARGV[2] if (range &lt;= 0) { printf "%s: Invalid range.\n", ARGV[2] &gt; "/dev/stderr" exit(1) } } else range = 10.0 if (length("" ARGV[3]) &gt; 0) srand(ARGV[3]); else srand(); for (i = 0; i &lt; N; i++) printf "%12.6f %12.6f %12.6f\n", range*(2*rand()-1), range*(2*rand()-1), range*rand() exit(0) } </code></pre> <p>To add the radar signal paths for each point, <strong>blip.awk</strong>:</p> <pre><code>#!/usr/bin/awk -f BEGIN { x_a = 0.0 y_a = 1.0 z_a = 0.0 x_b = -sqrt(0.75) y_b = -0.5 z_b = 0.0 x_c = sqrt(0.75) y_c = -0.5 z_c = 0.0 } NF &gt;= 3 { x = <span class="math-container">$1 * 1.0 y = $</span>2 * 1.0 z = $3 * 1.0 r_a = sqrt(x*x + y*y + z*z) + sqrt( (x - x_a)*(x - x_a) + (y - y_a)*(y - y_a) + (z - z_a)*(z - z_a) ) r_b = sqrt(x*x + y*y + z*z) + sqrt( (x - x_b)*(x - x_b) + (y - y_b)*(y - y_b) + (z - z_b)*(z - z_b) ) r_c = sqrt(x*x + y*y + z*z) + sqrt( (x - x_c)*(x - x_c) + (y - y_c)*(y - y_c) + (z - z_c)*(z - z_c) ) printf "%12.6f %12.6f %12.6f %12.6f %12.6f %12.6f\n", x, y, z, r_a, r_b, r_c } </code></pre> <p>To estimate 
the object position based on the radar signal paths, and to calculate the error ranges, <strong>radars.awk</strong>:</p>
<pre><code>#!/usr/bin/awk -f
BEGIN {
    C1 = -sqrt(1.0/12)
    x_a = 0.0
    y_a = 1.0
    z_a = 0.0
    x_b = -sqrt(0.75)
    y_b = -0.5
    z_b = 0.0
    x_c = sqrt(0.75)
    y_c = -0.5
    z_c = 0.0
}
NF &gt;= 6 {
    r_a = $4 * 1.0
    r_b = $5 * 1.0
    r_c = $6 * 1.0
    r_abc = r_a + r_b + r_c
    r_bc = r_b + r_c
    x = C1 * (r_b - r_c) * (r_a * (r_a - r_bc) - 2.0 * r_b * r_c - 3.0) / r_abc;
    y = -0.5 * (r_a * (r_a * r_bc - r_b*r_b - r_c*r_c + 2.0) - r_bc) / r_abc;
    t1 = r_a * r_a
    t2 = y * (y - 1.0)
    s = 0.25 * t1 - x*x - t2 + (t2 + 0.25) / t1 - 0.5
    if (s &gt; 0.0)
        z = sqrt(s)
    else
        z = 0
    err_x = $1 - x
    err_y = $2 - y
    err_z = $3 - z
    if (minerr_x &gt; err_x) minerr_x = err_x
    if (minerr_y &gt; err_y) minerr_y = err_y
    if (minerr_z &gt; err_z) minerr_z = err_z
    if (maxerr_x &lt; err_x) maxerr_x = err_x
    if (maxerr_y &lt; err_y) maxerr_y = err_y
    if (maxerr_z &lt; err_z) maxerr_z = err_z
    err_a = sqrt(x*x + y*y + z*z) + sqrt( (x - x_a)*(x - x_a) + (y - y_a)*(y - y_a) + (z - z_a)*(z - z_a) ) - r_a
    err_b = sqrt(x*x + y*y + z*z) + sqrt( (x - x_b)*(x - x_b) + (y - y_b)*(y - y_b) + (z - z_b)*(z - z_b) ) - r_b
    err_c = sqrt(x*x + y*y + z*z) + sqrt( (x - x_c)*(x - x_c) + (y - y_c)*(y - y_c) + (z - z_c)*(z - z_c) ) - r_c
    if (minerr_a &gt; err_a) minerr_a = err_a
    if (minerr_b &gt; err_b) minerr_b = err_b
    if (minerr_c &gt; err_c) minerr_c = err_c
    if (maxerr_a &lt; err_a) maxerr_a = err_a
    if (maxerr_b &lt; err_b) maxerr_b = err_b
    if (maxerr_c &lt; err_c) maxerr_c = err_c
    printf "%12.6f %12.6f %12.6f %12.6f %12.6f %12.6f %12.6f %12.6f %12.6f %+.6f %+.6f %+.6f %+.6f %+.6f %+.6f\n", $1, $2, $3, r_a, r_b, r_c, x, y, z, err_x, err_y, err_z, err_a, err_b, err_c
}
END {
    printf "Errors in object coordinates:\n" &gt; "/dev/stderr"
    printf "  x: %12.6f .. %12.6f\n", minerr_x, maxerr_x &gt; "/dev/stderr"
    printf "  y: %12.6f .. %12.6f\n", minerr_y, maxerr_y &gt; "/dev/stderr"
    printf "  z: %12.6f .. %12.6f\n", minerr_z, maxerr_z &gt; "/dev/stderr"
    printf "Resulting errors in radar signal path lengths:\n" &gt; "/dev/stderr"
    printf "  Radar 1: %12.6f .. %12.6f\n", minerr_a, maxerr_a &gt; "/dev/stderr"
    printf "  Radar 2: %12.6f .. %12.6f\n", minerr_b, maxerr_b &gt; "/dev/stderr"
    printf "  Radar 3: %12.6f .. %12.6f\n", minerr_c, maxerr_c &gt; "/dev/stderr"
}
</code></pre>
<p>These can be chained to run a set of tests. For example, to generate 20 test points within -500..500, -500..500, 0..500, run</p>
<pre><code>./random.awk 20 500 | ./blip.awk | ./radars.awk
</code></pre>
<p>which outputs something like</p>
<pre><code>-42.470214 -460.120727 181.664547 993.936046 992.472463 992.620758 -42.470204 -460.120604 181.664860 -0.000010 -0.000123 -0.000313 +0.000000 +0.000000 +0.000000
-217.450030 -2.192033 96.677738 475.977300 475.170850 476.753487 -217.450114 -2.192003 96.677549 +0.000084 -0.000030 +0.000189 +0.000000 +0.000000 +0.000000
-468.288971 -111.369017 210.325364 1050.802046 1049.711293 1051.255690 -468.288923 -111.369260 210.325343 -0.000048 +0.000243 +0.000021 -0.000000 -0.000000 -0.000000
276.991607 482.728213 344.960062 1308.840645 1310.312838 1309.580551 276.991665 482.728308 344.959883 -0.000058 -0.000095 +0.000179 +0.000000 +0.000000 +0.000000
-202.699862 -150.804663 180.859238 621.902211 620.608527 621.739364 -202.699938 -150.804740 180.859089 +0.000076 +0.000077 +0.000149 +0.000000 +0.000000 +0.000000
-167.340633 -114.491254 28.985180 410.200868 408.653227 410.070261 -167.340621 -114.491243 28.985295 -0.000012 -0.000011 -0.000115 +0.000000 +0.000000 +0.000000
-321.876870 435.167162 392.943801 1337.079259 1337.638721 1338.471828 -321.877012 435.167208 392.943634 +0.000142 -0.000046 +0.000167 +0.000000 +0.000000 -0.000000
29.890078 79.345123 265.040198 556.260754 556.781522 556.595574 29.889988 79.345131 265.040206 +0.000090 -0.000008 -0.000008 -0.000000 -0.000000 -0.000000
-250.865895 230.421345 453.245799 1133.542210 1133.768749 1134.534845 -250.866017 230.421209 453.245800 +0.000122 +0.000136 -0.000001 +0.000000 -0.000000 +0.000000
190.106943 -261.430075 302.619440 886.175382 885.661968 884.917840 190.107006 -261.429917 302.619537 -0.000063 -0.000158 -0.000097 +0.000000 +0.000000 +0.000000
10.765294 -241.908998 255.620615 704.897282 703.893734 703.840727 10.765200 -241.909083 255.620539 +0.000094 +0.000085 +0.000076 -0.000000 +0.000000 +0.000000
152.144771 -485.665604 403.917929 1300.238177 1299.320168 1298.914357 152.144690 -485.665479 403.918110 +0.000081 -0.000125 -0.000181 +0.000000 +0.000000 +0.000000
54.557318 -490.596258 340.339875 1199.972247 1198.824162 1198.666450 54.557322 -490.595937 340.340337 -0.000004 -0.000321 -0.000462 +0.000000 +0.000000 +0.000000
-165.436771 283.870165 372.324676 992.559940 993.129242 993.705963 -165.436869 283.870154 372.324641 +0.000098 +0.000011 +0.000035 +0.000000 +0.000000 +0.000000
402.780470 490.265358 162.968235 1309.444109 1311.098958 1310.034629 402.780615 490.265466 162.967552 -0.000145 -0.000108 +0.000683 +0.000000 +0.000000 -0.000000
246.386135 475.243112 130.961682 1101.341321 1103.022096 1102.248338 246.386223 475.243160 130.961342 -0.000088 -0.000048 +0.000340 +0.000000 -0.000000 -0.000000
467.384508 -385.128184 299.503320 1351.828069 1351.572289 1350.373591 467.384503 -385.128265 299.503224 +0.000005 +0.000081 +0.000096 +0.000000 -0.000000 +0.000000
237.992954 -186.075437 391.296435 989.068560 988.920971 988.086793 237.992979 -186.075157 391.296553 -0.000025 -0.000280 -0.000118 +0.000000 +0.000000 +0.000000
337.241040 -420.753256 295.082717 1230.056531 1229.505282 1228.554482 337.240889 -420.753160 295.083027 +0.000151 -0.000096 -0.000310 +0.000000 +0.000000 +0.000000
117.893074 138.442193 85.819730 401.457263 402.997179 401.983383 117.893083 138.442251 85.819625 -0.000009 -0.000058 +0.000105 +0.000000 -0.000000 +0.000000
Errors in object coordinates:
  x: -0.000145 .. 0.000151
  y: -0.000321 .. 0.000243
  z: -0.000462 .. 0.000683
Resulting errors in radar signal path lengths:
  Radar 1: -0.000000 .. 0.000000
  Radar 2: -0.000000 .. 0.000000
  Radar 3: -0.000000 .. 0.000000
</code></pre>
<p>To only look at the errors, testing say a million points, you can run</p>
<pre><code>./random.awk 1e6 500 | ./blip.awk | ./radars.awk &gt;/dev/null
</code></pre>
<p>which outputs something like</p>
<pre><code>Errors in object coordinates:
  x: -0.000470 .. 0.000476
  y: -0.000497 .. 0.000509
  z: -0.549315 .. 0.544149
Resulting errors in radar signal path lengths:
  Radar 1: -0.000000 .. 0.000584
  Radar 2: -0.000000 .. 0.000584
  Radar 3: -0.000000 .. 0.000583
</code></pre>
<p>As you can see, the <span class="math-container">$z$</span> coordinate has larger errors than <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. The small errors in the radar signal path lengths show that the calculated coordinates do yield the correct signal path lengths; i.e., that the results should be correct.</p>
<p>In C, the function to compute the object position given the radar signal path lengths is for example</p>
<pre><code>typedef struct {
    double x;
    double y;
    double z;
} vec3d;

#define SQRT1OF12 0.2886751345948128822545743902509787278239

vec3d detect(const double r_a, const double r_b, const double r_c)
{
    const double r_abc = r_a + r_b + r_c;
    const double t1 = r_b + r_c;
    vec3d p;

    if (r_a &lt; 1.0 || r_b &lt; 1.0 || r_c &lt; 1.0) {
        /* Impossible signal path lengths! */
        p.x = 0.0;
        p.y = 0.0;
        p.z = -1.0;
        return p;
    }

    p.x = -SQRT1OF12 * (r_b - r_c) * (r_a * (r_a - t1) - 2.0 * r_b * r_c - 3.0) / r_abc;
    p.y = -0.5 * (r_a * (r_a * t1 - r_b*r_b - r_c*r_c + 2.0) - t1) / r_abc;

    {
        const double t2 = r_a * r_a;
        const double t3 = p.y * (p.y - 1.0);
        const double s = 0.25 * t2 - p.x * p.x - t3 + (t3 + 0.25) / t2 - 0.5;
        if (s &gt; 0.0)
            p.z = sqrt(s);
        else
            p.z = 0.0;
    }

    return p;
}
</code></pre>
<p>I am quite puzzled by the differences from earlier versions of this answer with regard to the actual solution formula. I'm quite sure I used the same Maple recipe, but on other days it yields a quite different solution, one with <span class="math-container">$r_a - r_b - r_c$</span> in the divisor for <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, instead of <span class="math-container">$r_a + r_b + r_c$</span> as here.</p>
<p>These results have been tested for several million random points <span class="math-container">$(-1000\dots1000, -1000\dots1000, 0\dots1000)$</span>, and the errors in both the estimated object coordinates <span class="math-container">$(-0.001\dots0.001, -0.001\dots0.001, -1\dots1)$</span> and in the signal path lengths computed from the estimated object coordinates (<span class="math-container">$0\dots0.001$</span>) seem acceptable.</p>
3,430,066
<p><strong>Question:</strong></p> <p>Calculate the integral </p> <p><span class="math-container">$$\int_0^1 \frac{dx}{e^x-e^{-2x}+2}$$</span></p> <p><strong>Attempted solution:</strong></p> <p>I initially had two approaches. The first was recognizing that the denominator looks like a quadratic equation. Perhaps we can factor it:</p> <p><span class="math-container">$$\int_0^1 \frac{dx}{e^x-e^{-2x}+2} = \int_0^1 \frac{dx}{e^{-2x}(e^x+1)(e^{2x}+e^x-1)}$$</span></p> <p>To me, this does not appear productive. I also tried factoring out <span class="math-container">$e^x$</span>, with a similarly unproductive result.</p> <p>The second was trying to turn it into partial fractions. To get to a place where this can efficiently be done, I need a variable substitution:</p> <p><span class="math-container">$$\int_0^1 \frac{dx}{e^x-e^{-2x}+2} = \Big[ u = e^x;\ du = e^x \, dx\Big] = \int_1^e \frac{u}{u^3+2u^2 - 1} \, du$$</span></p> <p>This looks like partial fractions might work. However, the question is from a single-variable calculus book, and the only partial fraction cases covered are denominators of the types <span class="math-container">$(x+a), (x+a)^n, (ax^2+bx +c), (ax^2+bx +c)^n$</span>; cubic denominators are not covered at all. Thus, it appears to be a "too difficult" approach.</p> <p>A third approach might be to factor the new denominator before doing partial fractions:</p> <p><span class="math-container">$$\int_1^e \frac{u}{u^3+2u^2 - 1} \, du = \int_1^e \frac{u}{u(u^2+2u - \frac{1}{u})} \, du$$</span></p> <p>However, this does not produce a denominator suitable for partial fractions either, since the factor <span class="math-container">$u^2+2u-\frac{1}{u}$</span> is not a polynomial.</p> <p>What are some productive approaches that can get me to the end without resorting to partial fractions with denominators of degree higher than <span class="math-container">$2$</span>?</p>
hamam_Abdallah
369,188
<p><strong>Hint</strong></p> <p>If you put <span class="math-container">$u=e^x $</span>, the integral becomes</p> <p><span class="math-container">$$\int_1^e\frac{u\,du}{u^3+2u^2-1}$$</span> but</p> <p><span class="math-container">$$u^3+2u^2-1=(u+1)(u^2+au+b)$$</span> with <span class="math-container">$$1+a=2$$</span> <span class="math-container">$$b=-1$$</span> (the remaining coefficient checks out as well, since <span class="math-container">$a+b=0$</span> matches the absent <span class="math-container">$u$</span> term), hence <span class="math-container">$$u^3+2u^2-1=(u+1)(u^2+u-1)$$</span> <span class="math-container">$$=(u+1)(u-\frac{-1-\sqrt{5}}{2})(u-\frac{-1+\sqrt{5}}{2})$$</span></p> <p>Now use partial fraction decomposition.</p>
1,592,224
<p>I need to understand how to find $a$ and $b$ such that $a \times b = 72$ and $a + b = -17$. I am also fine with any other example, even the general form $a \times b = c$ and $a + b = d$: how does one find $a$ and $b$?</p> <p>Thanks!</p>
seeker
267,945
<p>Since $a+b=-17$, we have $a=-17-b$. Substituting this into the other equation gives</p> <p>$(-17-b)b=72\implies b^2+17b+72=0,$ which is a quadratic equation in $b$. It factors as $(b+8)(b+9)=0$, so $b=-9$ or $b=-8$, giving $a=-8$ or $a=-9$ respectively.</p>