qid        int64         values 1 … 4.65M
question   large_string  lengths 27 … 36.3k
author     large_string  lengths 3 … 36
author_id  int64         values -1 … 1.16M
answer     large_string  lengths 18 … 63k
136,453
<p>For every $k\in\mathbb{N}$, let $$ x_k=\sum_{n=1}^{\infty}\frac{1}{n^2}\left(1-\frac{1}{2n}+\frac{1}{4n^2}\right)^{2k}. $$ Calculate the limit $\displaystyle\lim_{k\rightarrow\infty}x_k$.</p>
Thomas Klimpel
12,490
<p>The prime numbers have connections to pseudo random numbers. They might also have connections to "true randomness", but I'm not aware if there has been much progress on the conjectures which point in this direction. I wonder whether the Riemann Hypothesis is equivalent to a statement about the relation of the prime numbers to "true randomness", or whether it would at least imply such a statement.</p>
497,015
<p>So this is an exercise. Does anyone have a hint? </p> <p><strong>If $A$ is both orthogonal and an orthogonal projector, what can you then conclude about $A$?</strong></p> <p>I know that an $n\times n$ matrix $P$ is an orthogonal projector if it is both idempotent ($P^2 = P$) and symmetric ($P = P^T$ ). Such a matrix projects any given $n$-vector orthogonally onto a subspace (namely, the column space of $P$) but leaves unchanged any vector that is already in that subspace. </p> <p>Furthermore, due to orthogonality: $A^TA=I$</p>
Berci
41,488
<p>Very good.</p> <p>So, if $P$ is in addition orthogonal, then $P=P^2=P^TP=I$.</p>
497,015
<p>So this is an exercise. Does anyone have a hint? </p> <p><strong>If $A$ is both orthogonal and an orthogonal projector, what can you then conclude about $A$?</strong></p> <p>I know that an $n\times n$ matrix $P$ is an orthogonal projector if it is both idempotent ($P^2 = P$) and symmetric ($P = P^T$ ). Such a matrix projects any given $n$-vector orthogonally onto a subspace (namely, the column space of $P$) but leaves unchanged any vector that is already in that subspace. </p> <p>Furthermore, due to orthogonality: $A^TA=I$</p>
user1551
1,551
<p>It actually suffices to assume that $P$ is idempotent (instead of being an <em>orthogonal</em> projector) and $P$ is an orthogonal matrix: $P=P^TP^2=P^TP=I$.</p>
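A quick numerical illustration of the two answers above: among orthogonal projectors, only the full-rank one (the identity) is also an orthogonal matrix. This is a sketch assuming NumPy; the way the random projectors are built is my own choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
results = []
for rank in range(1, n + 1):
    # random orthogonal projector of the given rank: P = V V^T, V with orthonormal columns
    V, _ = np.linalg.qr(rng.standard_normal((n, rank)))
    P = V @ V.T
    assert np.allclose(P @ P, P) and np.allclose(P, P.T)   # idempotent and symmetric
    results.append(bool(np.allclose(P.T @ P, np.eye(n))))  # also an orthogonal matrix?
print(results)  # only the full-rank projector, i.e. the identity, passes
```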
1,731,382
<p>Notice that the parabola, defined by certain properties, is also the trajectory of a cannon ball. Does the same sort of thing hold for the catenary? That is, is the catenary, defined by certain properties, also the trajectory of something?</p>
Oscar Lanzi
248,217
<p>A freely suspended chain or string forms a <a href="https://en.wikipedia.org/wiki/Catenary" rel="nofollow">catenary</a>.</p>
123,202
<blockquote> <p>Let $X,Y$ be vectors in $\mathbb{C}^n$, and assume that $X\ne0$. Prove that there is a symmetric matrix $B$ such that $BX=Y$.</p> </blockquote> <p>This is an exercise from a chapter about bilinear forms, so the intended solution should be somehow related to them.</p> <p>Pre-multiplying both sides by $Y^t$, we get $Y^tBX=Y^tY$. The left hand side is a bilinear form $\langle Y,X\rangle $ with $B$ as the matrix of the form with respect to the standard basis. Am I correct here?</p> <p>If so, then it suffices to find a bilinear form $\langle\cdot,\cdot\rangle\colon\mathbb{C}^n\times\mathbb{C}^n\rightarrow\mathbb{C}$ such that $\langle Y,X\rangle=Y^tY$. If $Y=0$, any bilinear form will do, because $\langle0,X\rangle=0\langle 0,X\rangle =0$ by linearity in the first variable. If $Y\ne0$, it suffices to find a bilinear form such that $\langle Y,X\rangle$ is nonzero, since then we can multiply by the appropriate factor. This should be very near to a complete solution, but I can't figure out the rest.</p> <p><strong>Edit</strong>: Okay, my approach seems to be completely wrong. Using Phira's hint, I think I managed to make a complete proof.</p> <p>Choose an orthonormal basis $(v_1,\ldots,v_n)$ such that $v_1=\frac{X}{\|X\|}$, which can be done by the Gram-Schmidt process. Let $P$ be the $n\times n$ matrix whose $i$-th column is the vector $v_i$. Then $P$ is orthogonal. Let $P^{-1}Y=(a_1,\ldots,a_n)^t$. Let $M$ be the $n\times n$ matrix whose first column and first row are the vector $\frac1{\|X\|}(a_1,\ldots,a_n)$, with $0$ everywhere else. Clearly $M$ is symmetric, and it's easy to check that $(PMP^{-1})X=Y$. So the desired matrix is $B=PMP^{-1}$, which is symmetric because $P$ is orthogonal. $\Box$</p> <p>However, this solution does not make use of bilinear forms. So there might be a simpler way.</p>
Phira
9,325
<p>I propose that you choose a basis containing $X$ and think about what the equation tells you about $B$ in that basis. </p> <p>You can very easily find a symmetric $B$ in that basis. </p> <p>Now, you just have to think about what kind of basis change does not destroy symmetry, and choose your basis accordingly.</p> <p>A basis change takes $B$ to $TBT^{-1}$; if $T$ is orthogonal, this is also $TBT^{t}$, which is easily seen to be symmetric.</p> <p>You can ensure that the basis change matrix is orthogonal by choosing the original basis as orthonormal. (The basis change matrix between the standard basis and an orthonormal basis is orthogonal.)</p>
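The orthonormal-basis construction described above can be checked numerically. A sketch over the reals for concreteness (assuming NumPy; the QR call stands in for Gram-Schmidt, and the sign fix is needed because QR may negate the first column):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
X = rng.standard_normal(n)
Y = rng.standard_normal(n)

# Orthonormal basis whose first vector is X/||X|| (Gram-Schmidt via QR)
A = np.column_stack([X, rng.standard_normal((n, n - 1))])
P, _ = np.linalg.qr(A)
if P[:, 0] @ X < 0:            # QR may flip the sign of the first column
    P[:, 0] = -P[:, 0]

a = P.T @ Y                    # coordinates of Y in the new basis (P^{-1} = P^T)
M = np.zeros((n, n))
M[0, :] = a / np.linalg.norm(X)
M[:, 0] = a / np.linalg.norm(X)
B = P @ M @ P.T                # symmetric, since M is symmetric and P orthogonal

assert np.allclose(B @ X, Y)
assert np.allclose(B, B.T)
```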
1,279,564
<p>I try to be rational and keep my questions as impersonal as I can in order to comply with the community guidelines. But this one is making me <strong>mad</strong>. Here it goes. Consider the uniform distribution on $[0, \theta]$. The likelihood function, using a random sample of size $n$, is $\frac{1}{\theta^{n}}$.<br> Now $1/\theta^n$ is decreasing in $\theta$ over the range of positive values. Hence it will be maximized by choosing $\theta$ as small as possible while still satisfying $0 \leq x_i \leq \theta$. The textbook says 'That is, we choose $\theta$ equal to $X_{(n)}$, or $Y_n$, the largest order statistic'. But if we want to <strong>minimize</strong> $\theta$ to maximize the likelihood, why do we choose the biggest $x$? Suppose we had real numbers for $x$ like $X_{1} = 2, X_{2} = 4, X_{3} = 8$. If we choose $8$, that yields $\frac{1}{8^{3}}=0.001953125$. If we choose $2$, that yields $\frac{1}{2^{3}}=0.125$. So why do we want the maximum in this case, $X_{(n)}$, and not $X_{1}$, since we've just seen with real numbers that the smaller the $x$ the bigger the likelihood? Thanks!</p>
Antoni Parellada
152,225
<p>$\theta$ is the parameter to estimate, which corresponds to the upper bound of the $U(0,\theta)$ distribution. The observed samples are $x_1=2,\,x_2=4$, and $x_3=8$. The likelihood function to maximize is $\mathcal{L}(\theta|X) = \frac{1}{\theta^n}$ with $X$ corresponding to the observed values (a vector, really), $\theta$ the upper margin of the interval on which this uniform distribution is defined, and $n$ the number of samples.</p> <p>In order to "see" this intuitively, it's important to realize that $\theta$ has to be at least as big as your largest sampled value ($8$) to avoid leaving samples out of the interval for which your probability density function is defined.</p> <p>Picking $8$ would render the value of the $pdf$ at any point equal to $\frac{1}{8}$, and the joint probability of the three observations (you are sampling a vector), $\frac{1}{8^3}$. That is certainly the maximum of $\mathcal{L}(\theta|X)$ over admissible values of $\theta$: choosing $\theta$ equal to $2$ or $4$ would leave the sample $8$ outside the support and make the likelihood zero, while choosing $\theta$ bigger than $8$ gives a larger denominator $\theta^3$ and hence a smaller likelihood.</p>
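The likelihood surface described above can be sketched directly (Python with NumPy assumed; the grid is my own choice, the sample values are the ones from the question):

```python
import numpy as np

x = np.array([2.0, 4.0, 8.0])   # the observed sample from the question
n = len(x)

def likelihood(theta):
    # density 1/theta on [0, theta]; a sample outside the support makes it zero
    return 0.0 if theta < x.max() else theta ** (-n)

thetas = np.linspace(0.5, 20.0, 4000)
L = np.array([likelihood(t) for t in thetas])
best = thetas[np.argmax(L)]      # close to 8, the largest observation
```

The maximizer sits at the largest observation: every smaller candidate has likelihood exactly zero, and every larger one is strictly worse.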
370,599
<p>If A is an invertible $nxn$ matrix prove that:$ adj(adjA)=(A)(detA)^{n-2}$ I have done this but it somewhere went wrong: $ adj(adjA)=adj(A^{-1} detA)=(A^{-1}detA)^{-1} det(A^{-1}detA)=AdetA det(A^{-1}detA)= Adet(AA^{-1}detA)=A (detA)^n $ </p>
anon
11,763
<p>Let $X$ and $Y$ be subsets of the naturals. For any given $y\in Y$, the number of pairs $a,b\in X$ such that $a+b=y$ is at most equal to the number of $a\in X$ such that $a\le y$. Thus the number of pairs of primes adding to $m$ grows asymptotically no more than $\frac{m}{\log m}$ by the prime number theorem; at any rate all we need to accept is that primes grow sublinearly, i.e. $o(m)$.</p> <p>The number of ordered pairs of positive odd numbers that add up to an even number $m$ may be computed exactly as $\frac{m}{2}$, which grows linearly in the value $m$. Whether or not $m$ is restricted to perfect cubes or other special sets is of no meaningful significance. Similarly, if we impose further modular-arithmetic conditions on $a,b$ besides being odd numbers, and force them to always be distinct, and view the pairs as unordered, their count will still grow linearly with $m$ all the same.</p> <p>It is at face value impossible for a linearly growing collection to fit snugly inside another collection (prime pairs) that grows sublinearly. This is the conceptual explanation of why one would immediately expect a conjecture of this form to be false simply on statistical grounds.</p>
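The counting argument can be made concrete for small even $m$ (a Python sketch; the helper names are mine):

```python
def sieve(limit):
    """Boolean primality table for 0..limit."""
    is_p = [False, False] + [True] * (limit - 1)
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = [False] * len(is_p[i * i :: i])
    return is_p

def prime_pairs(m, is_p):
    """Ordered pairs of primes (a, b) with a + b = m."""
    return sum(1 for a in range(2, m - 1) if is_p[a] and is_p[m - a])

def odd_pairs(m):
    """Ordered pairs of positive odd numbers summing to m; exactly m/2 for even m."""
    return sum(1 for a in range(1, m, 2) if (m - a) % 2 == 1)

is_p = sieve(1000)
for m in [100, 1000]:
    print(m, prime_pairs(m, is_p), odd_pairs(m))
```

The odd-pair count grows linearly while the prime-pair count lags behind, exactly as the answer argues.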
31,414
<p>I'm experimenting with different algorithms that approximate pi via iteration and comparing the result to pi. I want to both visualise and perhaps know the function (if any) that describes the increasing trend in accuracy as the number of iterations rises. </p> <p>For example, 1 iteration might give me 3.0, 10 iterations might give me 3.12, 50 iterations might give me 3.1409 etc. The more iterations, the better. Sometimes it might be a case of diminishing returns, running 1000 iterations to get an additional decimal point of correctness - is it worth it? Knowing the sweet spot at which to quit would be nice to know, for each pi approximation algorithm type.</p> <p>I'm a mature programmer who is rekindling an interest in math, and this project idea interests me. Trouble is, I don't know what the correct math and statistical terminology is to describe what I want to do. Could someone please locate my problem within the history of math and possibly illustrate with mathematica?</p> <p><strong>UPDATE</strong>: To be more specific. My pi generation algorithm produces the following pairs of numbers, where the first number is the number of iterations it looped and the second number is the abs(result - pi) i.e. how much the result differs from pi. The more iterations, the smaller the second number. </p> <pre><code>data = {{10, 0.19809}, {50, 0.039984}, {100, 0.019998}, {500, 0.00399998}, {1000, 0.002}, {20000, 0.0001}, {100000, 0.00002}, {500000, 4.*10^-6}} </code></pre> <p>I want to visualize what is going on with the data and find a function that describes what is going on. 
I'd like to also know what is possible in this area with this sort of data.</p> <p>What I've tried:</p> <pre><code>ListPlot[data, Joined -&gt; True] </code></pre> <p>gives me an almost unreadable graph due to the small numbers involved.</p> <pre><code>GeneralizedLinearModelFit[data,{x},{x}] </code></pre> <p>gives me FittedModel[0.0405405 -&lt;&lt;22>> x] which when plotted with Show[ListPlot[data,PlotStyle->Red],Plot[%36[x],{x,10,500000}]] gives me a straight line. Not helpful, as it doesn't seem to tell me anything about the trend of gradually converging on pi.</p> <p>I think there would be a curve of some sort that would better fit the data, but functions like FindFit seem to require you to supply an expression e.g. Log and various parameters a, b etc - but how do I know what expression and what parameters to supply?</p> <p>Should I be looking at Interpolation instead? </p> <p><strong>UPDATE 2</strong>:</p> <p>Thanks for the answers so far. What's holding me back from accepting an answer is the fact that I'm expecting a curved graph - not a straight line. Because the whole point of more and more iterations is that it gets closer to pi, and so the answer must be a curve. And that curve must have a formula, which it would be nice to have defined.</p> <p>So I did some experimenting in Mathematica and came up with a curve after all. Here's how I did it.</p> <p>My data came from a crude Wells / Gregory / Leibniz algorithm (see <a href="http://mathworld.wolfram.com/GregorySeries.html" rel="nofollow noreferrer">Gregory Series</a>) where I get pi by alternately adding and subtracting 1/n where n is 3, 5, 7... for example </p> <pre><code>N[(1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + 1/13 - 1/15 + 1/17) * 4] </code></pre> <p>which gives me 3.25237 - not bad for 9 numbers in the series, or 9 iterations if it's a program that's generating the result. So I went on to define a Mathematica function that takes the number of iterations you want and generates pi. 
The following function also happens to subtract from real pi and return the discrepancy difference. The higher the iterations <em>it_</em> passed to this function, the smaller the difference from pi and the smaller the result coming out of this function:</p> <pre><code>f[it_]:=Abs[N[Sum[(-1)^(n-1)/(2n-1),{n,it}]]*4-\[Pi]] </code></pre> <p>So my original data points were for 10, 50, 100, 1000, 20000 and 100000 iterations. I thought those would be representative. Thus I can now generate my original data in Mathematica with</p> <pre><code>data = {{10,f[10]},{50,f[50]},{100,f[100]},{500,f[500]},{1000,f[1000]},{20000,f[20000]},{100000,f[100000]}} output: {{10,0.099753},{50,0.019998},{100,0.00999975},{500,0.002},{1000,0.001},{20000,0.00005},{100000,0.00001}} </code></pre> <p>then plot it using the recommended techniques offered as comments and answers on this page</p> <pre><code>ListLogLogPlot[data,Joined-&gt;True,GridLines-&gt;Automatic,PlotStyle-&gt;Thick] </code></pre> <p>and I of course get a straight line like everybody else. </p> <p>Hmmm - remember I am after a curve. My instinct tells me it should be a curve.</p> <p>So then I discover that if I generate a more gradual list of points like this:</p> <pre><code>data = Table[f[x],{x,20}] ListPlot[data] </code></pre> <p>then I finally get my curve.</p> <p><img src="https://i.imgur.com/XkKxiHF.png" alt="curved graph"></p> <p>Yey! It looks exactly like I imagined it would - curving gradually towards perfection. And if I change the iterations from 20 to 1000 or any higher number, I get the same curve, only smoother. So this is the function I want a precise definition of.</p> <p>Interestingly, if I use the ListLogLogPlot everybody is recommending on my more gradually spaced data</p> <pre><code>ListLogLogPlot[data, Joined -&gt; True, GridLines -&gt; Automatic, PlotStyle -&gt; Thick] </code></pre> <p>I get a straight line again! 
Grrr.</p> <p><img src="https://i.imgur.com/5tPbiig.png" alt="straight line again"></p> <p>What I suspect is going on here is that there <em>was</em> probably a curve in the normal ListPlot of my original data but we couldn't see it due to the scaling, which is why we went for the ListLogLogPlot. But that probably turns curves into straight lines (hence the log in the name?) - which disappointed me because I was expecting a visual curve.</p> <p>And I don't think my original choice of x values (representing the number of iterations being fed into my pi calculation function) and how those x values are spaced makes a difference. My x values of 10, 50, 100, 1000, 20000 and 100000 certainly made it hard to see the curve in the graph, but the curve was still probably there. My second, gentler set of x data points (from the same pi generation function) of 1..20 makes the curve ridiculously easy to see.</p> <p>So what counts as an answer to this question? </p> <ul> <li>An exact definition of the curve being seen here. </li> <li>And perhaps some commentary on what functions describing curves typically arise when attempting to visualise pi using iterative techniques.</li> </ul> <p>I hope I haven't changed the goal posts in this question. I think I have always asked for an exact definition of the curve in my data plus some commentary on what might be typical / best practice in this sort of visualisation/graphing territory, where there is a known value we are iterating towards.</p>
Arnoud Buzing
105
<p>There are many examples on the Wolfram Demonstrations web site on this topic:</p> <p><a href="http://demonstrations.wolfram.com/search.html?query=pi%20approximations">http://demonstrations.wolfram.com/search.html?query=pi%20approximations</a></p> <p>These examples come with the code that generated them and allow you to make your own variations.</p>
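For the specific series in the question, the error after $n$ terms behaves like $1/n$, which is exactly why the log-log plot is a straight line of slope $-1$ and why the linearly spaced plot shows a hyperbola-like curve. This matches the data pairs listed in the question. A quick check in Python rather than Mathematica (the function name is mine):

```python
import math

def leibniz_error(n):
    """|4*S_n - pi| for the n-term Gregory-Leibniz series."""
    s = sum((-1) ** (k - 1) / (2 * k - 1) for k in range(1, n + 1))
    return abs(4 * s - math.pi)

# The alternating-series error is asymptotically 1/n, so log(error) against
# log(n) is a straight line of slope -1 -- the ListLogLogPlot behaviour.
for n in [10, 100, 1000]:
    print(n, leibniz_error(n), 1 / n)
```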
1,548,667
<p>Consider the following steady state problem</p> <p>$$\Delta T = 0,\,\,\,\, (x,y) \in \Omega, \space \space 0 \leq x \leq 4 ,\space \space \space\space 0 \leq y \leq 2 $$</p> <p>$$ T(0,y) = 300, \space \space T(4,y) = 600$$</p> <p>$$ \frac{\partial T}{\partial y}(x,0) = 0, \space \space \frac{\partial T}{\partial y}(x,2) = 0$$</p> <p>I want to derive the analytical solution to this problem.</p> <p>1) Use separation of variables.</p> <p>$$\frac{X^{''}}{X}= -\frac{Y^{''}}{Y} = -\lambda $$ $$X^{''} + \lambda X = 0 \tag{1}$$ $$Y^{''} - \lambda Y = 0 \tag{2}$$</p> <p>The solution to $(2)$ is</p> <p>$Y(y) = C_1 \cos(\alpha y)+C_2 \sin(\alpha y)$</p> <p>From $Y^{'}(0)=0$ we find that $$C_2 = 0$$ and $$Y^{'}(2) = -C_1\alpha \sin(2\alpha) = 0 \tag{3}$$</p> <p>with $(3)$ giving $\alpha = n\frac{\pi}{2}$,</p> <p>so $Y$ is given by</p> <p>$$Y(y) = C_1\cos\left(\frac{n\pi}{2}y\right)$$</p> <p>The solution to $(1)$ is</p> <p>$$X(x) = Ae^{\alpha x}+Be^{-\alpha x}$$</p> <p>where $\alpha$ is given by $\alpha = n\frac{\pi}{2}$ </p> <p>So the solution is:</p> <p>$$u(x,y) = X(x)Y(y) = C_1\cos(\alpha y)\left(Ae^{\alpha x}+Be^{-\alpha x}\right) \tag{4}$$</p> <p>where $\alpha$ is given by $\alpha = n\frac{\pi}{2}$ </p> <p>Inserting the B.C. in $(4)$ gives:</p> <p>$$u(0,y) \implies E_n = \frac{300}{\cos(\alpha y)}$$</p> <p>$$u(4,y) \implies F_n = \frac{600}{G \cos(\alpha y)Ae^{4 \alpha }+H\cos(\alpha y)e^{-4 \alpha }}$$</p> <p>This is how far I have come. How do I continue?</p>
Hosein Rahnama
267,844
<p><strong>Hints</strong></p> <p>1) Start from here </p> <p>$$Y^{''} - \lambda Y = 0\\ Y^{'}(0)=Y^{'}(2)=0$$</p> <p>find the eigenvalues $\lambda_i$ and the eigenfunctions $Y_i(y)$.</p> <p>2) Find the $X_i(x)$ </p> <p>$$X^{''} + \lambda X = 0$$</p> <p>3) Form the infinite sum</p> <p>$$T(x,y) = \sum\limits_{i = 1}^\infty {{X_i}(x){Y_i}(y)} $$</p> <p>4) Apply boundary conditions</p> <p>$$\eqalign{ &amp; \sum\limits_{i = 1}^\infty {{X_i}(0){Y_i}(y)} = 300 \cr &amp; \sum\limits_{i = 1}^\infty {{X_i}(4){Y_i}(y)} = 600 \cr} $$</p> <p>5) Find unknown constants using the orthogonality property of $Y_{i}(y)$</p> <p>$$\int\limits_0^2 {{Y_i}(y){Y_j}(y)dy} = {\delta _{ij}}\int\limits_0^2 {Y_i^2(y)dy} $$</p>
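A finite-difference sanity check of the boundary-value problem (scheme, grid sizes, and iteration count are my own choices, assuming NumPy). With insulated top and bottom and $y$-independent Dirichlet data on the sides, the solution is independent of $y$ and linear in $x$, namely $T = 300 + 75x$, so the separation-of-variables series must collapse to that:

```python
import numpy as np

# Jacobi iteration for the 5-point Laplacian on [0,4] x [0,2]:
# T(0,y)=300, T(4,y)=600, insulated (Neumann) top and bottom.
nx, ny = 41, 21                      # 41 points in x, 21 in y
T = np.full((ny, nx), 450.0)
T[:, 0], T[:, -1] = 300.0, 600.0     # Dirichlet sides
for _ in range(20000):
    T[1:-1, 1:-1] = 0.25 * (T[1:-1, :-2] + T[1:-1, 2:]
                            + T[:-2, 1:-1] + T[2:, 1:-1])
    T[0, 1:-1] = T[1, 1:-1]          # dT/dy = 0 at y = 0
    T[-1, 1:-1] = T[-2, 1:-1]        # dT/dy = 0 at y = 2
x = np.linspace(0.0, 4.0, nx)
# the iteration settles on T = 300 + 75 x, independent of y
```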
1,341,385
<p>I want to be a mathematician or computer scientist. I'm going to be a junior in high school, and I skipped precalc/trig to go straight to AP Calc since I've studied a lot of analysis and stuff on my own. My dad wants me to memorize about 30 trig identities (though some of them are very similar) since I'm missing trig. I've gone through and proved all of them, but memorizing them seems like a waste of effort. My dad is a physicist, so he is good at math, but I think he may be wrong here. Can't one just use deMoivre's theorem to get around memorizing the identities?</p>
David K
139,123
<p>In favor of rote memorization: unless all your exams are open-notes, you will find that it is very awkward when you have $n$ minutes left to complete $m$ problems and you are busy re-discovering each trig identity that you need to use as you go along. Working out the identity to use in one part of the problem can easily take longer than the time allotted to solve the entire problem.</p> <p>For other purposes, such as solving real-life math problems (where usually you are in a work environment that provides access to reference books, or even better, to the Internet), completely memorizing all of those formulas by rote is probably unnecessary. What is useful is to be <em>aware</em> that there are identities to deal with certain trigonometric expressions that you may encounter, as a prompt for you either to derive them or to look them up.</p> <p>If you encounter certain identities very frequently, the time you might spend deriving them or even looking them up on every occasion is wasteful, but in those cases it is likely you will find that you have memorized the formula after using it some number of times without consciously trying to memorize it.</p> <p>Putting the effort in to know these identities (being able to write them out from memory, but also knowing how they can be derived) is a good investment at this time. But there are options in exactly <em>how</em> you remember the identities; if you cannot simply write out the formula by rote, but you can doodle a small diagram in a couple of seconds (perhaps illustrating part of the derivation of the formula), then write out that formula with that aid, I think that is good enough.</p>
1,531,646
<p>Find the following limit</p> <p>$$ \lim_{x\to0}\left(\frac{1+x2^x}{1+x3^x}\right)^\frac1{x^2} $$</p> <p>I have used the natural logarithm to get</p> <p>$$ \exp\lim_{x\to0}\frac1{x^2}\ln\left(\frac{1+x2^x}{1+x3^x}\right) $$</p> <p>After this, I have tried l'Hôpital's rule but I was unable to get it to a simplified form.</p> <p>How should I proceed from here? Any help is much appreciated!</p>
marty cohen
13,079
<p>$\lim_{x\to0} f(x)\\ \text{where } f(x) =\left(\frac{1+xa^x}{1+xb^x}\right)^\frac1{x^2} $</p> <p>Let $g(x, a) =x a^x $. For small $x$, $g(x, a) =xe^{x \ln a} = x(1+x \ln a + x^2 \ln^2a/2 + O(x^3)) = x(1+x \ln a + O(x^2)) $ so</p> <p>$\begin{array}\\ \frac{1+xa^x}{1+xb^x} &amp;\approx \frac{1+x(1+x \ln a + O(x^2))}{1+x(1+x \ln b + O(x^2))}\\ &amp;\approx (1+x(1+x \ln a + O(x^2)))(1-(x(1+x \ln b + O(x^2)))+x^2+O(x^3))\\ &amp;= (1+x+x^2 \ln a + O(x^3))(1-x-x^2 (\ln b-1) + O(x^3))\\ &amp;=1+x^2(\ln a-\ln b)+O(x^3)\\ &amp;=1+x^2\ln(a/b)+O(x^3)\\ \end{array} $</p> <p>so, since $(1+ax)^{1/x} \to e^{a} $ as $x \to 0$,</p> <p>$f(x) \approx (1+x^2\ln(a/b)+O(x^3))^{1/x^2} \to e^{\ln(a/b)} =\frac{a}{b} $.</p>
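A numerical check of this conclusion for $a=2$, $b=3$ (plain Python; the function name is mine):

```python
def f(x, a=2.0, b=3.0):
    """The expression from the question; its limit as x -> 0 is a/b."""
    return ((1 + x * a ** x) / (1 + x * b ** x)) ** (1.0 / x ** 2)

# the values approach a/b = 2/3 as x shrinks
for x in [0.1, 0.01, 0.001]:
    print(x, f(x))
```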
2,895,382
<p>Let $J_n=\{1,\dots,n\}$. How do I show that the set of all functions $J_n\to \mathbb N$ is countable? Any function is given by specifying the images of $1,\dots,n$. There are $|\mathbb N|$ options for the image of each $i=1, \dots, n$. So intuitively, the set of such functions is the product of $n$ copies of $\mathbb N$, hence countable. But how to formalize it?</p>
mvw
86,776
<p>Each function is given by an $n$-tuple $(f_1, \dotsc, f_n) \in \mathbb{N}^n$ of natural numbers, where $f_i = f(i)$.</p> <p>You can use the <a href="https://en.wikipedia.org/wiki/Pairing_function" rel="nofollow noreferrer">Cantor pairing function</a>, $$ \pi(x, y) := y + \sum_{i=0}^{x+y} i = y+\frac{1}{2} (x + y) (x + y + 1) $$ which is a bijection between $\mathbb{N}$ and $\mathbb{N}^2$, iteratively to encode an $n$-tuple as a natural number.</p> <p>E.g.</p> <ul> <li>$\langle f_1, f_2 \rangle = \pi(f_1, f_2)$</li> <li>$\langle f_1, f_2, f_3 \rangle = \langle f_1, \langle f_2, f_3 \rangle \rangle$</li> <li>$\langle f_1, f_2, f_3, f_4 \rangle = \langle f_1, \langle f_2, f_3, f_4 \rangle \rangle = \langle f_1, \langle f_2, \langle f_3, f_4 \rangle \rangle\rangle$</li> </ul> <p>and so on. This is called a Cantor tuple function.</p>
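The pairing function and the iterated tuple encoding are a few lines of code (Python; the function names are mine):

```python
def pair(x, y):
    """Cantor pairing pi(x, y) = y + (x + y)(x + y + 1)/2, a bijection N^2 -> N."""
    return y + (x + y) * (x + y + 1) // 2

def encode(t):
    """Right-fold the pairing function to encode an n-tuple as one natural number."""
    if len(t) == 1:
        return t[0]
    return pair(t[0], encode(t[1:]))

# distinct pairs get distinct codes on a small box
codes = {pair(x, y) for x in range(50) for y in range(50)}
```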
2,895,382
<p>Let $J_n=\{1,\dots,n\}$. How do I show that the set of all functions $J_n\to \mathbb N$ is countable? Any function is given by specifying the images of $1,\dots,n$. There are $|\mathbb N|$ options for the image of each $i=1, \dots, n$. So intuitively, the set of such functions is the product of $n$ copies of $\mathbb N$, hence countable. But how to formalize it?</p>
qualcuno
362,866
<p>Let $\mathcal{J_k} = \{f : J_k \to \mathbb{N} \}$ be the set of functions from $J_k$ to $\mathbb{N}$. Now, the mapping </p> <p>$$ \begin{align} &amp; \mathcal{J_k} \ \longrightarrow \ \mathbb{N}^k \\ &amp; \quad f \mapsto (f(1), \dots, f(k) ) \end{align} $$ </p> <p>is bijective, so it suffices to show that $\mathbb{N}^k$ is countable. Take distinct primes $p_1, \dots, p_k$ and consider the function</p> <p>$$ \Gamma: \mathbb{N^k} \to \mathbb{N} \\ (n_1, \dots, n_k) \mapsto p_1^{n_1}\cdots p_k^{n_k} $$ </p> <p>By the fundamental theorem of arithmetic, $\Gamma$ is injective, and so $|\mathbb{N}^k| \leq |\mathbb{N}| = \aleph_0$ as desired.</p> <p>If you want to have an explicit injection, you can compose both mappings, i.e. you can take the function $f \mapsto p_1^{f(1)}\cdots p_k^{f(k)}$.</p>
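The prime-power injection $\Gamma$ is easy to test on small boxes (Python sketch; helper names are mine):

```python
def first_primes(k):
    """The first k primes, by trial division."""
    out, n = [], 2
    while len(out) < k:
        if all(n % p for p in out):
            out.append(n)
        n += 1
    return out

def gamma(t):
    """Gamma(n1, ..., nk) = p1^n1 * ... * pk^nk; injective by unique factorization."""
    result = 1
    for p, e in zip(first_primes(len(t)), t):
        result *= p ** e
    return result

# injectivity on a small box: all 8^3 codes are distinct
codes = {gamma((a, b, c)) for a in range(8) for b in range(8) for c in range(8)}
```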
2,660,316
<p>$$\frac{\mathrm{d}}{\mathrm{d}y}\left(\frac 2{\sqrt{2\pi}}\int_0^{\sqrt y} \exp\left(-{\frac{x^2}{2}}\right) \,\mathrm{d}x\right).$$</p> <p>I try to integrate first and then do the differentiation but it's not easy. I want to know other way to do it. Thank you.</p>
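No antiderivative is needed here: by the fundamental theorem of calculus and the chain rule, the derivative is $\frac{2}{\sqrt{2\pi}}\,e^{-(\sqrt y)^2/2}\cdot\frac{1}{2\sqrt y}=\frac{e^{-y/2}}{\sqrt{2\pi y}}$. A numerical confirmation (plain Python; the midpoint-rule helper is my own):

```python
import math

def F(y, n=20000):
    """Midpoint-rule approximation of (2/sqrt(2*pi)) * integral_0^sqrt(y) exp(-x^2/2) dx."""
    b = math.sqrt(y)
    h = b / n
    s = sum(math.exp(-(((i + 0.5) * h) ** 2) / 2) for i in range(n)) * h
    return 2.0 / math.sqrt(2.0 * math.pi) * s

def dF(y):
    """FTC + chain rule: F'(y) = (2/sqrt(2*pi)) * exp(-y/2) / (2*sqrt(y))."""
    return 2.0 / math.sqrt(2.0 * math.pi) * math.exp(-y / 2) / (2.0 * math.sqrt(y))

y = 1.5
numeric = (F(y + 1e-5) - F(y - 1e-5)) / 2e-5   # central difference quotient
```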
Community
-1
<blockquote> <p><strong>Theorem (<a href="https://en.wikipedia.org/w/index.php?title=Legendre%27s_formula&amp;oldid=813253390" rel="nofollow noreferrer">Legendre's formula</a>):</strong> For any prime $p$ and natural number $n$, the $p$-adic valuation of $n!$ is given by $$ v_p(n!) = \sum_{i=1}^{\infty} \left\lfloor \frac{n}{p^i} \right\rfloor &lt; \frac{n}{p-1}$$</p> <p><strong>Corollary:</strong> For any $x \in \mathbf{C}_p$, if $|x|_p &lt; p^{-1/(p-1)}$, then $$\lim_{n \to \infty} \frac{x^n}{n!} = 0 $$</p> <p>In particular, if $p &gt; 2$, then $|p|_p = p^{-1} &lt; p^{-1/(p-1)}$.</p> </blockquote> <p>An infinite sum over the $p$-adics converges if and only if its terms converge $p$-adically to zero (prove this!), so this can be used to show convergence.</p> <hr> <p>To actually find the value of the sum, it turns out that the series you are considering has a closed form:</p> <p>$$ \sqrt{1 + x} = 1 + \sum_{n=1}^{\infty} \binom{1/2}{n} x^n $$</p> <p>Given this, all you need to do is to determine how many digits of precision you need to distinguish between $4$ and $-4$, and then determine at which point in the summation all of the remaining terms are insignificant.</p>
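Legendre's formula from the quoted theorem is a few lines of code, and the corollary's convergence statement ($v_p(p^n/n!) = n - v_p(n!) \to \infty$) can be observed directly (Python sketch; names are mine):

```python
def vp_factorial(n, p):
    """Legendre's formula: v_p(n!) = sum over i >= 1 of floor(n / p^i)."""
    v, q = 0, p
    while q <= n:
        v += n // q
        q *= p
    return v

p = 7
# the bound v_p(n!) < n/(p-1), and v_p(p^n / n!) = n - v_p(n!), which tends
# to infinity: this is the p-adic convergence statement of the corollary
valuations = [n - vp_factorial(n, p) for n in range(1, 200)]
```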
15,033
<p>I have noticed in <a href="https://meta.mathoverflow.net/questions/833/who-are-the-mathoverflow-moderators">this post</a> that at MO they have e-mail address <code>moderators@mathoverflow.net</code>, which can be used to contact moderators.</p> <p>Is there a similar address for moderators of this site? If not, would creating such e-mail (and putting contact information at some visible place) be useful?</p>
Willie Wong
1,543
<p>There is one significant difference between MathOverflow and us which makes it much, much more useful for them to have a moderators e-mail account. And that is the fact that the MathOverflow brand is <em>not owned by StackExchange</em> and that there is a <a href="https://meta.mathoverflow.net/questions/969/who-owns-mathoverflow/970#970">kill switch in the agreement</a> between MO and SE for hosting that allows MO to pack-up everything and leave if they so choose. The directors for the non-profit that runs MathOverflow <a href="https://meta.mathoverflow.net/a/935/3948">include the original moderators</a>. These all add up to it being very useful for them to have a channel of discussion, which is permanently archived, which is accessible by e-mail, and which is not through the StackExchange platform. </p> <p>In contrast, for Math.SE there is less call for having such a platform (in fact, I cannot think of a reason why we need to have a communications platform outside of the SE system). Currently, for discussion between moderators, we heavily rely on the Moderators only chatroom. This automatically archives all discussions, is somewhat searchable, and has the advantage that any and all newly elected moderators will immediately gain access to past discussions. (Compare this with an e-mail platform where the archive of discussion would largely reside within our own inboxes.) And for individual contact with users we have now come to almost exclusively rely upon the "Contact User" form in the SE network. Again, this method has the advantage of automatically archiving all communications and additionally increasing transparency (as all moderator messages will result in notifications being sent to all moderators). 
</p> <p>From personal experience having served as a moderator during the era <em>before</em> chat.SE came into being, when all moderator decisions were coordinated via e-mail, I have to say that </p> <ol> <li>The current set of tools given to us by SE has a small learning curve and does take a little getting used to...</li> <li>... but after the initial period I definitely much prefer the current system. </li> </ol> <hr> <p>That said, if you really must contact the moderators in private outside of the SE network: several of the moderators (yours truly included) have an e-mail address listed in the About Me section of their profiles. But note that we may <a href="http://meta.math.stackexchange.com/q/6820/1543">move the discussion to an SE-archived venue</a> if the situation calls for it. </p>
600,097
<p>I am stuck on the following problem from an exercise in my analysis book: </p> <blockquote> <p>Show that $$\int_0^4 x \mathrm d(x-[x])=-2$$ where $[x]$ is the greatest integer not exceeding $x$. </p> </blockquote> <p>I think I have to partition the interval $[0,4]$ into some suitable subintervals, and here I see $x-[x] \ge 0$. </p> <p>But I am not sure how to tackle it, as no similar types of problems have been discussed in the book. Can someone explain? Thanks and regards to all.</p>
Ross Millikan
1,827
<p>You can define $u=x-[x]$, which will make the $d(x-[x])$ much more friendly. Then $x=u+[x]$, and it seems natural to split the integral into $[0,1],[1,2],[2,3],[3,4]$. On each interval you can see what to do with the $[x]$. That said, I am not getting $\frac 32$ for an answer.</p>
600,097
<p>I am stuck on the following problem from an exercise in my analysis book: </p> <blockquote> <p>Show that $$\int_0^4 x \mathrm d(x-[x])=-2$$ where $[x]$ is the greatest integer not exceeding $x$. </p> </blockquote> <p>I think I have to partition the interval $[0,4]$ into some suitable subintervals, and here I see $x-[x] \ge 0$. </p> <p>But I am not sure how to tackle it, as no similar types of problems have been discussed in the book. Can someone explain? Thanks and regards to all.</p>
copper.hat
27,978
<p>If we let $f(x) = x -\lfloor x \rfloor$, then the Lebesgue Stieltjes measure corresponding to $f$ can be written as $\mu_f = m -\sum_n \delta_n$, where $m$ is the Lebesgue measure and $\delta_n$ is the Dirac measure concentrated at $n$.</p> <p>Then $\int_0^4 x d \mu_f = \int_0^4 xdx - \int_0^4 x d(\sum_n \delta_n) = 8-(0+1+2+3+4) = -2$.</p>
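The value $-2$ can also be sanity-checked with a plain Riemann-Stieltjes sum against the integrator $g(x)=x-\lfloor x\rfloor$ (Python; the discretization choices are mine):

```python
import math

def g(x):
    """The integrator x - floor(x): slope 1 with a jump of -1 at each integer."""
    return x - math.floor(x)

def stieltjes_sum(f, g, a, b, n):
    """Left-tagged Riemann-Stieltjes sum: sum of f(t_i) * (g(t_{i+1}) - g(t_i))."""
    h = (b - a) / n
    return sum(f(a + i * h) * (g(a + (i + 1) * h) - g(a + i * h)) for i in range(n))

# continuous part contributes 8, the jumps at 1,2,3,4 contribute -(1+2+3+4)
approx = stieltjes_sum(lambda x: x, g, 0.0, 4.0, 40000)   # close to -2
```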
4,007,987
<p>So define a polynomial <span class="math-container">$P(x) = 4x^3 + 4x - 5$</span>, whose roots are <span class="math-container">$a, b $</span> and <span class="math-container">$c$</span>. Evaluate the value of <span class="math-container">$(b+c-3a)(a+b-3c)(c+a-3b)$</span>.</p> <p>Now I tried this in two ways (both failed because it was far too messy):</p> <ol> <li><p>Expand everything out (knew this was definitely not the required answer but I can't think of the quick method myself)</p> </li> <li><p>Using sum and product (which is still quite lengthy but better)</p> </li> </ol> <p>Of course the above methods relied on the use of Vieta's sum/product of roots.</p> <p>Does anyone have an amazing concise solution for me? I know you guys are full of tricks, and I enjoy reading them.</p>
Z Ahmed
671,540
<p>We have <span class="math-container">$a+b+c=0,\ ab+bc+ca=1,\ abc=5/4$</span>. Let <span class="math-container">$y=a+b-3c=a+b+c-4c=-4c$</span>; likewise each factor is <span class="math-container">$-4$</span> times one root, so set <span class="math-container">$y=-4x$</span>. Let us transform <span class="math-container">$4x^3+4x-5=0$</span> by <span class="math-container">$x=-y/4$</span> to get the <span class="math-container">$y$</span> equation: <span class="math-container">$4(-y/4)^3+4(-y/4)-5=0 \implies y^3+16y+80=0$</span>. The <span class="math-container">$y$</span> equation has the required numbers as its roots, so the product is <span class="math-container">$y_1y_2y_3=-80.$</span></p>
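A numerical check of this answer (assuming NumPy; `np.roots` finds the roots of the cubic from its coefficients):

```python
import numpy as np

a, b, c = np.roots([4, 0, 4, -5])   # roots of 4x^3 + 4x - 5
value = (b + c - 3 * a) * (a + b - 3 * c) * (c + a - 3 * b)
# since a + b + c = 0, each factor equals -4 times one root, so the product
# is (-4)^3 * abc = -64 * (5/4) = -80
```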
2,441,359
<p>It's an example given in my book after the monotone convergence theorem and dominated convergence theorem (without explanation): </p> <p>Find an equivalent of $$\int_0^{\pi/2}\dfrac {dx} {\sqrt{\sin^2(x)+\epsilon \cos^2(x)}}$$</p> <p>when $\epsilon\to 0^{+}$.</p> <p>Inspired by the theorems, I naturally think of a sequence $(\epsilon_n)$ that converges to $0$ monotonically. However, the pointwise limit $\frac{1}{\sin(x)}$ is not integrable on $(0,\pi/2)$ (in the other examples the limit was a finite number...). Would someone give a hint about how to deal with the divergent case?</p>
Rob Arthan
23,171
<p>For a somewhat different example, let $P$ and $Q$ be propositional variables. Then intuitionistic propositional logic does not prove $(P \land \lnot\lnot Q) \to (\lnot\lnot P \land Q)$ (and hence, by symmetry, it does not prove $(Q \land \lnot\lnot P) \to (\lnot\lnot Q \land P)$). To see this note that if intuitionistic logic could prove $(P \land \lnot\lnot Q) \to (\lnot\lnot P \land Q)$, it could also prove $(\top \land \lnot\lnot Q) \to (\lnot\lnot \top \land Q)$, which it can prove equivalent to the law of double negation elimination, $\lnot\lnot Q \to Q$, which is not intuitionistically valid.</p>
108,200
<p>If you are to calculate the hypotenuse of a triangle, the formula is:</p> <p>$h = \sqrt{x^2 + y^2}$</p> <p>If you don't have any units for the numbers, replacing x and y is pretty straightforward: $h = \sqrt{4^2 + 6^2}$</p> <p>But what if the numbers are in meters?<br /> $h = \sqrt{4^2m + 6^2m}$ <em>(wrong, would become $\sqrt{52m}$)</em><br /> $h = \sqrt{4m^2 + 6m^2}$ <em>(wrong, would become $\sqrt{10m^2}$)</em><br /> $h = \sqrt{(4m)^2 + (6m)^2}$ <em>(correct, would become $\sqrt{52m^2}$)</em><br /></p> <p>Or should I just ignore the unit of measurement in these cases?</p>
Robin Stammer
14,037
<p>You just have to sort this out a little. Let's assume we're talking about a closed Riemannian manifold $(M,g)$ of dimension $n$ with its Laplace-Beltrami operator $\Delta_g$. Then you have the heat kernel as the fundamental solution of the heat equation:</p> <p>$$ \mathcal{K} \in C^{\infty}(M \times M \times \mathbb{R}^+)$$ Note that the notation $e^{-t\Delta}$ sometimes refers to both the kernel itself and the heat operator, which is defined as the integral operator acting on functions $f \in C^{\infty}(M)$ via integration against the heat kernel:</p> <p>$$e^{-t\Delta}: C^{\infty}(M) \rightarrow C^{\infty}(M)$$ $$e^{-t\Delta}(f)(x) := \int_M \mathcal{K}(t,x,y)f(y) \mathrm{dy}$$</p> <p>The heat operator can be extended to an operator on $L^2(M)$. This allows us to consider the heat trace, defined as the $L^2$-trace of the heat operator. For the heat trace, you have the identity:</p> <p>$$\text{Tr}_{L^2}(e^{-t\Delta}) = \int_M \mathcal{K}(t,x,x) \mathrm{dx}$$ </p> <p>which relates your objects to each other. Now you're interested in asymptotic expansions as $t \downarrow 0$ for the heat kernel and for the heat trace. On the diagonal one has</p> <p>$$ \mathcal{K}(t,x,x) \sim (4\pi t)^{-n/2}\sum\limits_{k=0}^{\infty} u_k(x,x) t^k $$ for some (very interesting) smooth maps $u_k(x,y)$, and the identity above then gives you the expansion for the trace of the heat operator:</p> <p>$$ \text{Tr}_{L^2}(e^{-t\Delta}) = \int_M \mathcal{K}(t,x,x) \mathrm{dx} \sim (4\pi t)^{-n/2} \int_M \sum\limits_{k=0}^{\infty} u_k(x,x) t^k \, \mathrm{dx} $$</p> <p>I hope this is still useful for you. A great book on this topic is <em>The Laplacian on a Riemannian manifold</em> by S. Rosenberg.</p>
3,739,911
<p><a href="https://i.stack.imgur.com/aSEt6.png" rel="nofollow noreferrer">question</a></p> <p><a href="https://i.stack.imgur.com/G6EHM.png" rel="nofollow noreferrer">options and answers</a></p> <p>In which interval(s) is the function <span class="math-container">$f(x)=\sin(e^x)+\cos(e^x)$</span> increasing?</p> <p>I don't understand how to approach such problems. It would be helpful if you could guide me through the process. I have also shared the options image; the correct answers have a green tick.</p>
zkutch
775,801
<p><span class="math-container">$$f^{'}(x)=e^x(\cos e^x - \sin e^x)&gt;0$$</span> gives (first letter in alphabet)</p>
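To make the hint explicit (a sketch, not part of the original answer): the chain rule gives <span class="math-container">$f'(x)=e^x\cos(e^x)-e^x\sin(e^x)=e^x(\cos e^x-\sin e^x)$</span>, which can be checked against a central difference:

```python
import math

def f(x):
    return math.sin(math.exp(x)) + math.cos(math.exp(x))

def fprime(x):
    # chain rule: (sin(e^x))' = e^x cos(e^x), (cos(e^x))' = -e^x sin(e^x)
    return math.exp(x) * (math.cos(math.exp(x)) - math.sin(math.exp(x)))

# compare the closed form against a numerical derivative at sample points
h = 1e-6
max_err = max(
    abs((f(x + h) - f(x - h)) / (2 * h) - fprime(x))
    for x in (-1.0, 0.0, 0.5, 1.0, 1.5)
)
```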
3,347,391
<blockquote> <p>Find the maximum and minimum values of <span class="math-container">$x^2 + y^2 + z^2$</span> subject to the equality constraints <span class="math-container">$x + y + z = 1$</span> and <span class="math-container">$x y z + 1 = 0$</span></p> </blockquote> <p>My try:</p> <p>Let <span class="math-container">$u=x^2+y^2+z^2$</span> with <span class="math-container">$$x+y+z-1=0$$</span> <span class="math-container">$$xyz+1=0$$</span> <span class="math-container">$$(x\,dx+y\,dy+z\,dz)+m(dx+dy+dz)+n(yz\,dx+xz\,dy+xy\,dz)=0$$</span> <span class="math-container">$$x+m+yzn=0$$</span> <span class="math-container">$$y+m+xzn=0$$</span> <span class="math-container">$$z+m+xyn=0$$</span></p> <p>Multiplying by <span class="math-container">$x, y$</span> and <span class="math-container">$z$</span> respectively and adding the three equations above, using <span class="math-container">$x+y+z=1$</span> and <span class="math-container">$xyz=-1$</span>, I get <span class="math-container">$u+m-3n=0$</span>. What should I do after that? Please help me; thanks in advance.</p>
Dr. Sonnhard Graubner
175,066
<p>Hint: With <span class="math-container">$$z=1-x-y$$</span> we get <span class="math-container">$$x^2+y^2+(1-x-y)^2$$</span> and the equation <span class="math-container">$$xy-x^2y-xy^2+1=0$$</span> Now you can eliminate <span class="math-container">$x$</span> or <span class="math-container">$y$</span>, and you will get a problem in one variable only.</p>
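Following this hint numerically (a sketch with sample points of my own choosing, not part of the original answer): after eliminating <span class="math-container">$z$</span>, the remaining constraint is a quadratic in <span class="math-container">$y$</span> for each fixed <span class="math-container">$x$</span>, so feasible points are easy to sample. This suggests a minimum of <span class="math-container">$3$</span> at permutations of <span class="math-container">$(1,1,-1)$</span> and no finite maximum:

```python
import math

def u(x, y, z):
    return x*x + y*y + z*z

def feasible_points(x):
    """With z = 1 - x - y, the constraint x*y*z + 1 = 0 becomes
    x*y^2 - x*(1 - x)*y - 1 = 0, a quadratic in y (for x != 0)."""
    a, b, c = x, -x * (1 - x), -1.0
    disc = b*b - 4*a*c
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return [(x, y, 1 - x - y) for y in ((-b + r) / (2*a), (-b - r) / (2*a))]

# (1, 1, -1) satisfies both constraints and gives u = 3
candidate = u(1, 1, -1)
# nearby feasible points all give u >= 3, consistent with a local minimum
nearby = [u(*p) for dx in (-0.05, -0.01, 0.01, 0.05) for p in feasible_points(1 + dx)]
# far along the constraint curve, u grows without bound: no finite maximum
far = [u(*p) for p in feasible_points(50.0)]
```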
226,346
<p>I have the three dimensional Laplacian <span class="math-container">$\nabla^2 T(x,y,z)=0$</span> representing temperature distribution in a cuboid shaped wall which is exposed to two fluids flowing perpendicular to each other on either of the <span class="math-container">$z$</span> faces i.e. at <span class="math-container">$z=0$</span> (ABCD) and <span class="math-container">$z=w$</span> (EFGH). Rest all the faces are insulated i.e. <span class="math-container">$x=0,L$</span> and <span class="math-container">$y=0,l$</span>. The following figure depicts the situation.<a href="https://i.stack.imgur.com/T4kKK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T4kKK.png" alt="enter image description here" /></a></p> <p>The boundary conditions on the lateral faces are therefore:</p> <p><span class="math-container">$$-k\frac{\partial T(0,y,z)}{\partial x}=-k\frac{\partial T(L,y,z)}{\partial x}=-k\frac{\partial T(x,0,z)}{\partial y}=-k\frac{\partial T(x,l,z)}{\partial y}=0 \tag 1$$</span></p> <p>The bc(s) on the two z-faces are robin type and of the following form:</p> <p><span class="math-container">$$\frac{\partial T(x,y,0)}{\partial z} = p_c\bigg(T(x,y,0)-e^{-b_c y/l}\left[t_{ci} + \frac{b_c}{l}\int_0^y e^{b_c s/l}T(x,s,0)ds\right]\bigg) \tag 2$$</span></p> <p><span class="math-container">$$\frac{\partial T(x,y,w)}{\partial z} = p_h\bigg(e^{-b_h x/L}\left[t_{hi} + \frac{b_h}{L}\int_0^x e^{b_h s/L}T(x,s,w)ds\right]-T(x,y,w)\bigg) \tag 3$$</span></p> <p><span class="math-container">$t_{hi}, t_{ci}, b_h, b_c, p_h, p_c, k$</span> are all constants <span class="math-container">$&gt;0$</span>.</p> <p>I have two questions:</p> <p><strong>(1)</strong> With the insulated conditions mentioned in <span class="math-container">$(1)$</span> does a solution exist for this system?</p> <p><strong>(2)</strong> Can someone help in solving this analytically ? 
I tried to solve this using the following approach (separation of variables) but encountered the results which I describe below (in short I attain a <em>trivial solution</em>):</p> <p>I will include the codes for help:</p> <pre><code>T[x_, y_, z_] = (C1*E^(γ z) + C2 E^(-γ z))* Cos[n π x/L]*Cos[m π y/l] (*Preliminary T based on homogeneous Neumann x,y faces *) tc[x_, y_] = E^(-bc*y/l)*(tci + (bc/l)* Integrate[E^(bc*s/l)*T[x, s, 0], {s, 0, y}]); bc1 = (D[T[x, y, z], z] /. z -&gt; 0) == pc (T[x, y, 0] - tc[x, y]); ortheq1 = Integrate[(bc1[[1]] - bc1[[2]])*Cos[n π x/L]* Cos[m π y/l], {x, 0, L}, {y, 0, l}, Assumptions -&gt; {L &gt; 0, l &gt; 0, bc &gt; 0, pc &gt; 0, tci &gt; 0, n ∈ Integers &amp;&amp; n &gt; 0, m ∈ Integers &amp;&amp; m &gt; 0}] == 0 // Simplify th[x_, y_] = E^(-bh*x/L)*(thi + (bh/L)* Integrate[E^(bh*s/L)*T[s, y, w], {s, 0, x}]); bc2 = (D[T[x, y, z], z] /. z -&gt; w) == ph (th[x, y] - T[x, y, w]); ortheq2 = Integrate[(bc2[[1]] - bc2[[2]])*Cos[n π x/L]* Cos[m π y/l], {x, 0, L}, {y, 0, l}, Assumptions -&gt; {L &gt; 0, l &gt; 0, bc &gt; 0, pc &gt; 0, tci &gt; 0, n ∈ Integers &amp;&amp; n &gt; 0, m ∈ Integers &amp;&amp; m &gt; 0}] == 0 // Simplify soln = Solve[{ortheq1, ortheq2}, {C1, C2}]; CC1 = C1 /. soln[[1, 1]]; CC2 = C2 /. soln[[1, 2]]; expression1 := CC1; c1[n_, m_, L_, l_, bc_, pc_, tci_, bh_, ph_, thi_, w_] := Evaluate[expression1]; expression2 := CC2; c2[n_, m_, L_, l_, bc_, pc_, tci_, bh_, ph_, thi_, w_] := Evaluate[expression2]; γ1[n_, m_] := Sqrt[(n π/L)^2 + (m π/l)^2]; </code></pre> <p>I have used <code>Cos[n π x/L]*Cos[m π y/l]</code> considering the homogeneous Neumann condition on the lateral faces i.e. 
<span class="math-container">$x$</span> and <span class="math-container">$y$</span> faces.</p> <p>Declaring some constants and then carrying out the summation:</p> <pre><code>m0 = 30; n0 = 30; L = 0.025; l = 0.025; w = 0.003; bh = 0.433; bc = 0.433; ph = 65.24; \ pc = 65.24; thi = 120; tci = 30; Vn = Sum[(c1[n, m, L, l, bc, pc, tci, bh, ph, thi, w]* E^(γ1[n, m]*z) + c2[n, m, L, l, bc, pc, tci, bh, ph, thi, w]* E^(-γ1[n, m]*z))*Cos[n π x/L]*Cos[m π y/l], {n, 1, n0}, {m, 1, m0}]; </code></pre> <p>On executing an plotting at <code>z=0</code> using <code>Plot3D[Vn /. z -&gt; 0, {x, 0, L}, {y, 0, l}]</code> I get the following:</p> <p><a href="https://i.stack.imgur.com/HxIiR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HxIiR.jpg" alt="enter image description here" /></a></p> <p>which is basically 0. On looking further I found that the constants <code>c1, c2</code> evaluate to <code>0</code> for any value of <code>n,m</code>.</p> <p><strong>More specifically I would like to know if some limiting solution could be developed to circumvent the problem of the constants evaluating to zero</strong></p> <hr /> <p><strong>Origins of the b.c.</strong><span class="math-container">$2,3$</span></p> <p>Actual bc(s): <span class="math-container">$$\frac{\partial T(x,y,0)}{\partial z}=p_c (T(x,y,0)-t_c) \tag 4$$</span> <span class="math-container">$$\frac{\partial T(x,y,w)}{\partial z}=p_h (t_h-T(x,y,w))\tag 5$$</span></p> <p>where <span class="math-container">$t_h,t_c$</span> are defined in the equation:</p> <p><span class="math-container">$$\frac{\partial t_c}{\partial y}+\frac{b_c}{l}(t_c-T(x,y,0))=0 \tag 6$$</span> <span class="math-container">$$\frac{\partial t_h}{\partial x}+\frac{b_h}{L}(t_h-T(x,y,0))=0 \tag 7$$</span></p> <p><span class="math-container">$$t_h=e^{-b_h x/L}\bigg(t_{hi} + \frac{b_h}{L}\int_0^x e^{b_h s/L}T(x,s,w)ds\bigg) \tag 8$$</span></p> <p><span class="math-container">$$t_c=e^{-b_c y/l}\bigg(t_{ci} + \frac{b_c}{l}\int_0^y e^{b_c 
s/l}T(x,s,0)ds\bigg) \tag 9$$</span></p> <p>It is known that <span class="math-container">$t_h(x=0)=t_{hi}$</span> and <span class="math-container">$t_c(y=0)=t_{ci}$</span>. I had solved <span class="math-container">$6,7$</span> using the method of integrating factors and used the given conditions to reach <span class="math-container">$8,9$</span> which were then substituted into the original b.c.(s) <span class="math-container">$4,5$</span> to reach <span class="math-container">$2,3$</span>.</p> <hr /> <p><strong>Non-dimensional formulation</strong> The non-dimensional version of the problem can be written as:</p> <p>(In this section <span class="math-container">$x,y,z$</span> are non-dimensional; <span class="math-container">$x=x'/L,y=y'/l,z=z'/w, \theta=\frac{t-t_{ci}}{t_{hi}-t_{ci}}$</span>)</p> <p>Also, <span class="math-container">$\beta_h=h_h (lL)/C_h, \beta_c=h_c (lL)/C_c$</span> (However, this information might not be needed)</p> <p><span class="math-container">$$\lambda_h \frac{\partial^2 \theta_w}{\partial x^2}+\lambda_c \frac{\partial^2 \theta_w}{\partial y^2}+\lambda_z \frac{\partial^2 \theta_w}{\partial z^2}=0 \tag A$$</span></p> <p>In <span class="math-container">$(A)$</span> <span class="math-container">$\lambda_h=1/L^2, \lambda_c=1/l^2, \lambda_z=1/w^2$</span></p> <p><span class="math-container">$$\frac{\partial \theta_h}{\partial x}+\beta_h (\theta_h-\theta_w) = 0 \tag B$$</span></p> <p><span class="math-container">$$\frac{\partial \theta_c}{\partial y} + \beta_c (\theta_c-\theta_w) = 0 \tag C$$</span></p> <p>The z-boundary condition then becomes:</p> <p><span class="math-container">$$\frac{\partial \theta_w(x,y,0)}{\partial z}=r_c (\theta_w(x,y,0)-\theta_c) \tag D$$</span> <span class="math-container">$$\frac{\partial \theta_w(x,y,w)}{\partial z}=r_h (\theta_h-\theta_w(x,y,w))\tag E$$</span></p> <p><span class="math-container">$$\theta_h(0,y)=1, \theta_c(x,0)=0$$</span></p> <p>Here <span class="math-container">$r_h,r_c$</span> are non-dimensional 
quantities (<span class="math-container">$r_c=\frac{h_c w}{k}, r_h=\frac{h_h w}{k}$</span>).</p>
Bill Watts
53,121
<p>This is more of an extended comment than an answer, but it occurred to me that your solution is incomplete. You have a double <span class="math-container">$Cos$</span> series in <span class="math-container">$m$</span> and <span class="math-container">$n$</span>, and unlike <span class="math-container">$Sin$</span> series you should need <span class="math-container">$m=0$</span> and <span class="math-container">$n=0$</span> terms.</p> <p>You have computed your <span class="math-container">$T_{mn}$</span> series for <span class="math-container">$(m, n)$</span> going from <span class="math-container">$1$</span> to <span class="math-container">$\infty$</span> and it came out to be <span class="math-container">$0 $</span>. You need to add a <span class="math-container">$T_{00}$</span> term for <span class="math-container">$(m, n)=0$</span> and two more series.</p> <p>Add a <span class="math-container">$T_{m0}$</span> series for <span class="math-container">$n=0$</span> and <span class="math-container">$m$</span> going from <span class="math-container">$1$</span> to <span class="math-container">$\infty$</span> and a <span class="math-container">$T_{0n}$</span> series for <span class="math-container">$m=0$</span> and n going from <span class="math-container">$1$</span> to <span class="math-container">$\infty$</span>.</p> <p>It takes all four pieces to make a complete solution. I have not tried this on your problem yet, so I don't know if all the pieces will come out to be zero or not, but this will give you something else to try. 
Your solution would not be correct without all four pieces anyway.</p> <p>At the OP's request I will include my code, even though it doesn't work very well.</p> <pre><code>Clear[&quot;Global`*&quot;] $Assumptions = n ∈ Integers &amp;&amp; m ∈ Integers pde = D[T[x, y, z], x, x] + D[T[x, y, z], y, y] + D[T[x, y, z], z, z] == 0 T[x_, y_, z_] = X[x] Y[y] Z[z] pde = pde/T[x, y, z] // Expand x0eq = X''[x]/X[x] == 0 DSolve[x0eq, X[x], x] // Flatten X0 = X[x] /. % /. {C[1] -&gt; c1, C[2] -&gt; c2} xeq = X''[x]/X[x] == -α1^2 DSolve[xeq, X[x], x] // Flatten X1 = X[x] /. % /. {C[1] -&gt; c3, C[2] -&gt; c4} y0eq = Y''[y]/Y[y] == 0 DSolve[y0eq, Y[y], y] // Flatten Y0 = Y[y] /. % /. {C[1] -&gt; c5, C[2] -&gt; c6} yeq = Y''[y]/Y[y] == -β1^2 DSolve[yeq, Y[y], y] // Flatten Y1 = Y[y] /. % /. {C[1] -&gt; c7, C[2] -&gt; c8} z0eq = pde /. X''[x]/X[x] -&gt; 0 /. Y''[y]/Y[y] -&gt; 0 DSolve[z0eq, Z[z], z] // Flatten Z0 = Z[z] /. % /. {C[1] -&gt; c9, C[2] -&gt; c10} zeq = pde /. X''[x]/X[x] -&gt; -α1^2 /. Y''[y]/Y[y] -&gt; -β1^2 DSolve[zeq, Z[z], z] // Flatten Z1 = Z[z] /. % /. {C[1] -&gt; c11, C[2] -&gt; c12} // ExpToTrig // Collect[#, {Cosh[_], Sinh[_]}] &amp; Z1 = % /. {c11 - c12 -&gt; c11, c11 + c12 -&gt; c12} T[x_, y_, z_] = X0 Y0 Z0 + X1 Y1 Z1 (D[T[x, y, z], x] /. x -&gt; 0) == 0 c2 = 0; c4 = 0; T[x, y, z] c1 = 1 c3 = 1 (D[T[x, y, z], x] /. x -&gt; L) == 0 α1 = (n π)/L (D[T[x, y, z], y] /. y -&gt; 0) == 0 c6 = 0 c8 = 0 T[x, y, z] c5 = 1 c7 = 1 (D[T[x, y, z], y] /. y -&gt; l) == 0 β1 = (m π)/l Tmn[x_, y_, z_] = T[x, y, z] /. {c9 -&gt; 0, c10 -&gt; 0} T00[x_, y_, z_] = T[x, y, z] /. n -&gt; 0 /. m -&gt; 0 T00[x_, y_, z_] = % /. c9 -&gt; 0 /. c12 -&gt; c1200 Tm0[x_, y_, z_] = T[x, y, z] /. n -&gt; 0 Tm0[x_, y_, z_] = % /. {c10 -&gt; 0, c9 -&gt; 0, c11 -&gt; c11m0, c12 -&gt; c12m0} // PowerExpand T0n[x_, y_, z_] = T[x, y, z] /. m -&gt; 0 // PowerExpand T0n[x_, y_, z_] = % /. 
{c9 -&gt; 0, c10 -&gt; 0, c11 -&gt; c110n, c12 -&gt; c120n} pdetcmn = D[tcmn[x, y], y] + (bc/l)*(tcmn[x, y] - Tmn[x, y, 0]) == 0 DSolve[pdetcmn, tcmn[x, y], {x, y}] // Flatten tcmn[x_, y_] = tcmn[x, y] /. % /. C[1][x] -&gt; 0 pdetc00 = D[tc00[x, y], y] + (bc/l)*(tc00[x, y] - T00[x, y, 0]) == 0 DSolve[{pdetc00, tc00[x, 0] == tci}, tc00[x, y], {x, y}] // Flatten // Simplify tc00[x_, y_] = tc00[x, y] /. % pdetcm0 = D[tcm0[x, y], y] + (bc/l)*(tcm0[x, y] - Tm0[x, y, 0]) == 0 DSolve[pdetcm0, tcm0[x, y], {x, y}] // Flatten tcm0[x_, y_] = tcm0[x, y] /. % /. C[1][x] -&gt; 0 pdetc0n = D[tc0n[x, y], y] + (bc/l)*(tc0n[x, y] - T0n[x, y, 0]) == 0 DSolve[pdetc0n, tc0n[x, y], {x, y}] // Flatten tc0n[x_, y_] = tc0n[x, y] /. % /. C[1][x] -&gt; 0 pdethmn = D[thmn[x, y], x] + (bh/L)*(thmn[x, y] - Tmn[x, y, 0]) == 0 DSolve[pdethmn, thmn[x, y], {x, y}] // Flatten thmn[x_, y_] = thmn[x, y] /. % /. C[1][y] -&gt; 0 pdeth00 = D[th00[x, y], x] + (bh/L)*(th00[x, y] - T00[x, y, 0]) == 0 DSolve[{pdeth00, th00[0, y] == thi}, th00[x, y], {x, y}] // Flatten th00[x_, y_] = th00[x, y] /. % pdethm0 = D[thm0[x, y], x] + (bh/L)*(thm0[x, y] - Tm0[x, y, 0]) == 0 DSolve[pdethm0, thm0[x, y], {x, y}] // Flatten thm0[x_, y_] = thm0[x, y] /. % /. C[1][y] -&gt; 0 pdeth0n = D[th0n[x, y], x] + (bh/L)*(th0n[x, y] - T0n[x, y, 0]) == 0 DSolve[pdeth0n, th0n[x, y], {x, y}] // Flatten th0n[x_, y_] = th0n[x, y] /. % /. C[1][y] -&gt; 0 bc100 = Simplify[(D[T00[x, y, z], z] /. z -&gt; 0) == pc*(T00[x, y, 0] - tc00[x, y])] orth100 = Integrate[bc100[[1]], {y, 0, l}, {x, 0, L}] == Integrate[bc100[[2]], {y, 0, l}, {x, 0, L}] bc200 = Simplify[(D[T00[x, y, z], z] /. z -&gt; w) == ph*(th00[x, y] - T00[x, y, w])] orth200 = Integrate[bc200[[1]], {y, 0, l}, {x, 0, L}] == Integrate[bc200[[2]], {y, 0, l}, {x, 0, L}] sol00 = Solve[{orth100, orth200}, {c10, c1200}] // Flatten // Simplify c10 = c10 /. sol00 c1200 = c1200 /. sol00 T00[x, y, z] tc00[x, y] th00[x, y] bc1m0 = Simplify[(D[Tm0[x, y, z], z] /. 
z -&gt; 0) == pc*(Tm0[x, y, 0] - tcm0[x, y])] orth1m0 = Integrate[bc1m0[[1]]*Cos[(m*Pi*y)/l], {y, 0, l}, {x, 0, L}] == Integrate[bc1m0[[2]]*Cos[(m*Pi*y)/l], {y, 0, l}, {x, 0, L}] bc2m0 = Simplify[(D[Tm0[x, y, z], z] /. z -&gt; w) == ph*(thm0[x, y] - Tm0[x, y, w])] orth2m0 = Integrate[bc2m0[[1]]*Cos[(m*Pi*y)/l], {y, 0, l}, {x, 0, L}] == Integrate[bc2m0[[2]]*Cos[(m*Pi*y)/l], {y, 0, l}, {x, 0, L}] solm0 = Solve[{orth1m0, orth2m0}, {c11m0, c12m0}] // Flatten // Simplify bc10n = (D[T0n[x, y, z], z] /. z -&gt; 0) == pc*(T0n[x, y, 0] - tc0n[x, y]) orth10n = Integrate[bc10n[[1]]*Cos[(Pi*n*x)/L], {y, 0, l}, {x, 0, L}] == Integrate[bc10n[[2]]*Cos[(Pi*n*x)/L], {y, 0, l}, {x, 0, L}] bc20n = Simplify[(D[T0n[x, y, z], z] /. z -&gt; w) == ph*(th0n[x, y] - T0n[x, y, w])] orth20n = Integrate[bc20n[[1]]*Cos[(Pi*n*x)/L], {y, 0, l}, {x, 0, L}] == Integrate[bc20n[[2]]*Cos[(Pi*n*x)/L], {y, 0, l}, {x, 0, L}] sol0n = Solve[{orth10n, orth20n}, {c110n, c120n}] // Flatten // Simplify bc1mn = (D[Tmn[x, y, z], z] /. z -&gt; 0) == pc*(Tmn[x, y, 0] - tcmn[x, y]) orth1mn = Integrate[bc1mn[[1]]*Cos[(m*Pi*y)/l]*Cos[(Pi*n*x)/L], {y, 0, l}, {x, 0, L}] == Integrate[bc10n[[2]]*Cos[(m*Pi*y)/l]*Cos[(Pi*n*x)/L], {y, 0, l}, {x, 0, L}] bc2mn = Simplify[(D[Tmn[x, y, z], z] /. z -&gt; w) == ph*(thmn[x, y] - Tmn[x, y, w])] orth2mn = Integrate[bc2mn[[1]]*Cos[(m*Pi*y)/l]*Cos[(Pi*n*x)/L], {y, 0, l}, {x, 0, L}] == Integrate[bc2mn[[2]]*Cos[(m*Pi*y)/l]*Cos[(Pi*n*x)/L], {y, 0, l}, {x, 0, L}] solmn = Solve[{orth1mn, orth2mn}, {c11, c12}] // Flatten // Simplify </code></pre> <p>All zeros except T00, and that solution does not satisfy the bc's. 
Have fun</p> <p><strong>Update for new bc's</strong> This is too numerically unstable to get to work, but this is what I did.</p> <pre><code>Clear[&quot;Global`*&quot;] pde = D[T[x, y, z], x, x] + D[T[x, y, z], y, y] + D[T[x, y, z], z, z] == 0 $Assumptions = n ∈ Integers &amp;&amp; m ∈ Integers &amp;&amp; l &gt; 0 &amp;&amp; w &gt; 0 &amp;&amp; L &gt; 0 </code></pre> <p>Case 1</p> <p>x = 0, T = thi</p> <p>x = L, dT/dx = 0</p> <p>y = 0, T = 0</p> <p>y = l, dT/dy = 0 Use exponential in x, sinusoidal in y and z. Start with</p> <pre><code>T[x_, y_, z_] = (c1 + c2 x) (c10 z + c9) (c5 + c6 y) + (c3 Cosh[Sqrt[α1^2 + β1^2] x] + c4 Sinh[Sqrt[α1^2 + β1^2] x]) (c7 Cos[α1 y] + c8 Sin[α1 y]) (c11 Sin[β1 z] + c12 Cos[β1 z]) T[0, y, z] == thi (D[T[x, y, z], x] /. x -&gt; L) == 0 c2 = 0 Solve[(c3 Sqrt[α1^2 + β1^2]Sinh[L Sqrt[α1^2 + β1^2]] + c4 Sqrt[α1^2 + β1^2] Cosh[L Sqrt[α1^2 + β1^2]]) == 0, c4] // Flatten c4 = c4 /. % c3 = 1 c1 = 1 </code></pre> <p>Manually expand the Tanh and incorporate the (constant) common denominator with the other constants</p> <pre><code>Simplify[Cosh[L*Sqrt[α1^2 + β1^2]]*Cosh[x*Sqrt[α1^2 + β1^2]] - Sinh[L*Sqrt[α1^2 + β1^2]]*Sinh[x*Sqrt[α1^2 + β1^2]]] T[x_, y_, z_] = T[x, y, z] /. (Cosh[x Sqrt[α1^2 + β1^2]] - Tanh[L Sqrt[α1^2 + β1^2]] Sinh[ x Sqrt[α1^2 + β1^2]]) -&gt; % T[x, 0, z] == 0 c5 = 0 c7 = 0 c6 = 1 c8 = 1 Simplify[D[T[x, y, z], y] /. 
y -&gt; l] == 0 c10 = 0 c9 = 0 α1 = ((2 n + 1) π)/(2 l) </code></pre> <p>Set</p> <pre><code>β1 = ((2 m + 1) π)/(2 w) T1[x_, y_, z_] = T[x, y, z] </code></pre> <p>Case 2</p> <p>x = 0, T = 0</p> <p>x = L, dT/dx = 0</p> <p>y = 0, T = tci</p> <p>y = l, dT/dy = 0</p> <p>Use exponential in x, sinusoidal in y and z and flip the y and z terms</p> <pre><code>T2[x_, y_, z_] = Sin[(π (2 n + 1) x)/(2 L)] (c112 Sin[(π (2 m + 1) z)/(2 w)] + c122 Cos[(π (2 m + 1) z)/(2 w)]) Cosh[(l - y) Sqrt[(π^2 (2 n + 1)^2)/(4 L^2) + (π^2 (2 m + 1)^2)/(4 w^2)]] T[x_, y_, z_] = T1[x, y, z] + T2[x, y, z] pdeth = D[th[x, y], x] + (bh/L)*(th[x, y] - T[x, y, w]) == 0 DSolve[{pdeth, th[0, y] == thi}, th[x, y], {x, y}] // Flatten // Simplify th[x_, y_] = th[x, y] /. % // Simplify pdetc = Simplify[D[tc[x, y], y] + (bc/l)*(tc[x, y] - T[x, y, 0]) == 0] DSolve[{pdetc, tc[x, 0] == tci}, tc[x, y], {x, y}] // Flatten // Simplify tc[x_, y_] = tc[x, y] /. % bc1 = T[0, y, z] == thi bc2 = T[x, 0, z] == tci bc3 = Simplify[(D[T[x, y, z], z] /. z -&gt; 0) == pc*(T[x, y, 0] - tc[x, y])] bc4 = Simplify[(D[T[x, y, z], z] /. z -&gt; w) == ph*(th[x, y] - T[x, y, w])] bc1eq = Simplify[Integrate[(bc1[[1]] - bc1[[2]])*Sin[(Pi*(2*n + 1)*y)/(2*l)]*Sin[(Pi*(2*m + 1)*z)/(2*w)], {z, 0, w}, {y, 0, l}] == 0] bc2eq = Simplify[Integrate[(bc2[[1]] - bc2[[2]])*Sin[(Pi*(2*n + 1)*x)/(2*L)]*Sin[(Pi*(2*m + 1)*z)/(2*w)], {z, 0, w}, {x, 0, L}] == 0] bc3eq = Integrate[bc3[[1]]*Sin[(Pi*(2*n + 1)*y)/(2*l)]*Sin[(Pi*(2*n + 1)*x)/(2*L)], {y, 0, l}, {x, 0, L}] == 0 bc4eq = Integrate[bc4[[1]]*Sin[(Pi*(2*n + 1)*y)/(2*l)]*Sin[(Pi*(2*n + 1)*x)/(2*L)], {y, 0, l}, {x, 0, L}] == 0 Solve[bc1eq, c12] // Flatten // Simplify c12 = c12 /. % Solve[bc2eq, c122] // Flatten // Simplify c122 = c122 /. % Solve[bc4eq, c112] // Flatten; c112 = c112 /. % Solve[bc3eq, c11] // Flatten; c11 = c11 /. 
% values = {L -&gt; 1/40, l -&gt; 1/40, w -&gt; 3/1000, bh -&gt; 433/1000, bc -&gt; 433/1000, ph -&gt; 6524/100, pc -&gt; 6524/100, thi -&gt; 120, tci -&gt; 30}; C11 = Table[c11 /. values, {m, 0, 10}, {n, 0, 10}] // N[#, 50] &amp; C11 = Re[C11] </code></pre> <p>To get rid of the small imaginary component. <code>Chop</code> wipes out the real part also.</p> <pre><code>C12 = Table[c12 /. values, {m, 0, 11}, {n, 0, 11}] // N[#, 50] &amp; C12 = Re[C12] C112 = Table[c112 /. values, {m, 0, 11}, {n, 0, 11}] // N[#, 50] &amp; C112 = Re[C112] C122 = Table[c122 /. values, {m, 0, 11}, {n, 0, 11}] // N[#, 50] &amp; C122 = Re[C122] </code></pre> <p>Put it all together</p> <pre><code>T[x_, y_, z_] := Sum[Sin[(Pi*(2*n + 1)*y)/(2*l)]*(C11[[m + 1,n + 1]]*Sin[(Pi*(2*m + 1)*z)/(2*w)] + C12[[m + 1,n + 1]]*Cos[(Pi*(2*m + 1)*z)/(2*w)])* Cosh[(L - x)*Sqrt[(Pi^2*(2*n + 1)^2)/(4*l^2) + (Pi^2*(2*m + 1)^2)/(4*w^2)]] + Sin[(Pi*(2*n + 1)*x)/(2*L)]* Cosh[(l - y)*Sqrt[(Pi^2*(2*n + 1)^2)/(4*L^2) + (Pi^2*(2*m + 1)^2)/(4*w^2)]]*(C112[[m + 1,n + 1]]*Sin[(Pi*(2*m + 1)*z)/(2*w)] + C122[[m + 1,n + 1]]*Cos[(Pi*(2*m + 1)*z)/(2*w)]), {m, 0, 10}, {n, 0, 10}] </code></pre> <p>It took my computer days to compute all this and the values are way off. m,n of 10,10 are not enough terms, but I am not going any further. The values are still changing dramatically from m,n 9,10 to 10,10. Maybe the solution is wrong, or 50 decimals places is not enough, or it will take many more terms and many more days to even test the solution properly. Maybe your computer can do it faster, but my computer is 4 Ghz Intel i7 processor with 32 GB ram, so it is not a slow computer. Good luck.</p>
1,595,658
<p>$$ \text{Given the function } f:\mathbb{N}^+ \to \mathbb{N}^+ \text{ where } f(k) = \sum_{i=0}^k 4^i. $$ </p> <p>Examining the prime factorizations of $f(k)$ for $k = 1, \dots, 48$, many factors appear in a regular pattern. </p> <p>QUESTION: </p> <ol> <li>Is there a proof that these patterns continue for larger values of $k$?</li> <li>Is there a known use for these patterns in number theory?</li> </ol>
Slade
33,433
<p>We have $f(k) = \frac{1}{3} (4^{k+1}-1)$.</p> <p>Let $p\neq 2,3$ be prime. By Fermat's Little Theorem, if $p \mid f(k)$, then $p\mid f(k+p-1)$. So there should be lots of patterns among divisors of the $f(k)$.</p> <p>For example, we have $5\mid f(1+4k)$ for $k=0,1,2,\ldots$</p>
1,595,658
<p>$$ \text{Given the function } f:\mathbb{N}^+ \to \mathbb{N}^+ \text{ where } f(k) = \sum_{i=0}^k 4^i. $$ </p> <p>Examining the prime factorizations of $f(k)$ for $k = 1, \dots, 48$, many factors appear in a regular pattern. </p> <p>QUESTION: </p> <ol> <li>Is there a proof that these patterns continue for larger values of $k$?</li> <li>Is there a known use for these patterns in number theory?</li> </ol>
Robert Israel
8,508
<p>$$f(k) = \sum_{i=0}^k 4^i = \dfrac{4^{k+1}-1}{3}$$ If $d$ is coprime to $2$ and $3$, then $d$ divides $f(k)$ if and only if $4^{k+1} \equiv 1 \mod d$, i.e. iff $k+1$ is a multiple of the order of $4$ in the multiplicative group $U_d$ of units in $\mathbb Z/d\mathbb Z$. For example, the order of $4$ in $U_7$ is $3$ (i.e. $4^3 = 64 \equiv 1 \mod 7$ but $4^1$ and $4^2 \not\equiv 1 \mod 7$), so $f(k)$ is divisible by $7$ whenever $k+1$ is divisible by $3$.</p>
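An empirical check of these statements for <span class="math-container">$k = 1,\dots,48$</span> (not part of the original answer):

```python
def f(k):
    return sum(4**i for i in range(k + 1))

ks = range(1, 49)

# closed form f(k) = (4^(k+1) - 1) / 3
closed_form_ok = all(f(k) == (4**(k + 1) - 1) // 3 for k in ks)

# order of 4 mod 7 is 3, so 7 | f(k) exactly when 3 | (k + 1)
sevens = [k for k in ks if f(k) % 7 == 0]

# order of 4 mod 5 is 2, so 5 | f(k) exactly when 2 | (k + 1), i.e. k odd
fives = [k for k in ks if f(k) % 5 == 0]
```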
2,406,107
<p><a href="https://i.stack.imgur.com/ccIr4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ccIr4.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/zCPUz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zCPUz.png" alt="enter image description here"></a></p> <p>What set or relation does y,x belong to?</p> <p>What set or thing does w belong to?</p> <p>What set or thing does z belong to?</p> <p>I have a hard time keeping track what set w,z,y are members of, are part of whose domain and codomain. What are all the codomains and domains here?</p> <p>How do we formally define using set builder notation R,S, and T?</p>
Berci
41,488
<p>There is an unnamed set $X$ here, and $R,S,T$ are all binary relations on this set, i.e. $R,S,T\subseteq X\times X$.</p> <p>The notation $\ x\,R\,y\ $ for a relation $R$ with this notation means nothing else but $(x,y)\in R$.</p>
2,306,570
<p>I'm new to this topic and trying to solve system of equations over the field $Z_{3}$: $$\begin{array}{rcr} x+2z &amp; = &amp; 1 \\ y+2z &amp; = &amp; 2 \\ 2x+z &amp; = &amp; 1 \end{array}$$</p> <p>I solved the system but I have roots: $$x=1/3, y=4/3, z=1/3$$ and it's probably not right. Can you help with this one?</p>
Ofek Gillon
230,501
<p>$$\begin{pmatrix} -4&amp; 3\\ 7&amp; -5 \end{pmatrix} X = \begin{pmatrix} 2&amp;2\\-2&amp;2 \end{pmatrix} $$ The inverse matrix of $\begin{pmatrix} a&amp;b\\c&amp;d \end{pmatrix}$ is $\frac{1}{ad-bc} \begin{pmatrix} d&amp; -b\\ -c&amp; a \end{pmatrix}$ meaning the inverse of $\begin{pmatrix} -4&amp; 3\\ 7&amp; -5 \end{pmatrix}$ is $$\begin{pmatrix} -4&amp; 3\\ 7&amp; -5 \end{pmatrix} ^{-1} = -\begin{pmatrix} -5&amp; -3\\ -7&amp; -4 \end{pmatrix} = \begin{pmatrix} 5&amp; 3\\ 7&amp; 4 \end{pmatrix} $$ Let's multiply both sides of the equation with the inverse from the left: $$\begin{pmatrix} 5&amp; 3\\ 7&amp; 4 \end{pmatrix} \begin{pmatrix} -4&amp; 3\\ 7&amp; -5 \end{pmatrix} X = \begin{pmatrix} 5&amp; 3\\ 7&amp; 4 \end{pmatrix} \begin{pmatrix} 2&amp;2\\-2&amp;2 \end{pmatrix} $$ $$ X = \begin{pmatrix} 4&amp; 16\\ 6&amp; 22 \end{pmatrix} $$</p>
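A quick check of the matrix arithmetic with numpy (not part of the original answer):

```python
import numpy as np

A = np.array([[-4.0, 3.0], [7.0, -5.0]])
B = np.array([[2.0, 2.0], [-2.0, 2.0]])

A_inv = np.linalg.inv(A)            # should equal [[5, 3], [7, 4]]
X = A_inv @ B                       # should equal [[4, 16], [6, 22]]
residual = np.abs(A @ X - B).max()  # X indeed solves A X = B
```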
440,844
<p>Suppose we have a linear map $A \colon V \to V$ on a finite-dimensional vector space, and $W \leq V$ is an invariant subspace of $A$. Then obviously $\operatorname{Ker} A + W \subseteq A^{-1}(W)$.</p> <p>Is it then necessarily true that $\operatorname{Ker} A + W = A^{-1}(W)$?</p> <p>I can prove it in the case where $A$ is a projector. How can one prove it in general? Or is there a counterexample?</p>
Brian Rushton
51,970
<p>What would an inverse look like? For every element $y$ of $Y$, it would send $y$ to an element of $X$ in the preimage of $Y$. You can just make up such a map by choosing $g(y)$ to be any point in $f^{-1}(y).$ </p>
53,698
<p>A point moves from point A to point B. Both points are known, and so is the distance between them.</p> <p>It starts with a known speed of V<sub>A</sub>, accelerates (with known constant acceleration a) until reaching V<sub>X</sub> (unknown), then decelerates (with known constant acceleration -a) until it reaches the final point B with a known speed of V<sub>B</sub>.</p> <p>So we know: V<sub>A</sub>, V<sub>B</sub>, a, and the distance between A and B.</p> <p>How do we find V<sub>X</sub>?</p> <p><img src="https://i.stack.imgur.com/VhPtk.jpg" alt="enter image description here"></p>
Henry
6,460
<p><strong>Hint:</strong> You can calculate the time to reach a speed of $V_x$ and a further time to reach $V_B$, so you can find the total distance traveled as a function of $V_x$. </p> <p>Set this distance equal to $|B-A|$ and solve for $V_x$.</p>
53,698
<p>A point moves from point A to point B. Both points are known, and so is the distance between them.</p> <p>It starts with a known speed of V<sub>A</sub>, accelerates (with known constant acceleration a) until reaching V<sub>X</sub> (unknown), then decelerates (with known constant acceleration -a) until it reaches the final point B with a known speed of V<sub>B</sub>.</p> <p>So we know: V<sub>A</sub>, V<sub>B</sub>, a, and the distance between A and B.</p> <p>How do we find V<sub>X</sub>?</p> <p><img src="https://i.stack.imgur.com/VhPtk.jpg" alt="enter image description here"></p>
André Nicolas
6,312
<p>We write down some equations, in a semi-mechanical way, and then solve them.<br> Let $D$ be the (known) total distance travelled. </p> <p>It is natural to introduce some additional variables. The intuition probably works best if we use time. So let $s$ be the length of time that we accelerated, and $t$ the length of time that we decelerated.</p> <p>The (average) acceleration is the change in velocity, divided by elapsed time. Thus by looking at the acceleration and deceleration phases separately, we obtain the equations</p> <p>$$a=\frac{V_X-V_A}{s} \qquad \text{and}\qquad a=\frac{V_X-V_B}{t}\qquad\qquad \text{(Equations 1)}$$</p> <p>The net displacement under constant acceleration is the average velocity times elapsed time. So while accelerating we covered a distance $(1/2)(V_X+V_A)s$. In the same way, we can see that the total distance covered while decelerating is $(1/2)(V_X+V_B)t$. But the total distance covered was $D$. If we multiply by $2$ to clear fractions, we obtain $$(V_X+V_A)s +(V_X+V_B)t=2D \qquad\qquad \text{(Equation 2)}$$</p> <p>Note that from Equations 1 we have $$s=\frac{V_X-V_A}{a} \qquad \text{and}\qquad t=\frac{V_X-V_B}{a}\qquad\qquad \text{(Equations 3)}$$ Substitute for $s$ and $t$ in Equation 2, and multiply through by $a$ to clear denominators. We obtain $$(V_X+V_A)(V_X-V_A) +(V_X+V_B)(V_X-V_B)=2aD.$$ The left-hand side simplifies to $2V_X^2 -V_A^2-V_B^2$. We conclude that $$2V_X^2=2aD+V_A^2+V_B^2$$ and therefore $$V_X=\sqrt{aD+(V_A^2+V_B^2)/2}.$$</p> <p><strong>Comment</strong>: The algebra turned out to be pretty simple. With other choices of variable, it might have seemed more complicated. An important simplifying device was to choose notation that treats the acceleration phase and the deceleration phase <strong>symmetrically.</strong> That saved half the work. Moreover, it was the preserved symmetry that made the equations "clean" and easy to work with. Symmetry is your friend. </p>
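The formula can be sanity-checked numerically (the sample values below are my own, not from the original answer): plug the computed <span class="math-container">$V_X$</span> back into the distance expression from Equation 2 and confirm it reproduces <span class="math-container">$D$</span>:

```python
import math

def peak_speed(V_A, V_B, a, D):
    # V_X = sqrt(a*D + (V_A^2 + V_B^2) / 2), as derived above
    return math.sqrt(a * D + (V_A**2 + V_B**2) / 2)

def total_distance(V_A, V_B, a, V_X):
    s = (V_X - V_A) / a            # time spent accelerating
    t = (V_X - V_B) / a            # time spent decelerating
    # average velocity times elapsed time, for each phase
    return 0.5 * (V_X + V_A) * s + 0.5 * (V_X + V_B) * t

V_A, V_B, a, D = 2.0, 3.0, 1.5, 40.0
V_X = peak_speed(V_A, V_B, a, D)
```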
3,911,221
<p>I am working on a probability exercise and I am trying to calculate E(Y), which comes down to this expression:</p> <p><span class="math-container">$$ E(Y) = \int_{-∞}^{+∞} y\frac{e^{-y}}{(1+e^{-y})^{2}} \, \mathrm{d}y $$</span></p> <p>I tried integration by parts, but it diverges, and I can't find a good change of variables.</p> <p><strong>Any ideas?</strong></p>
PierreCarre
639,238
<p>This density is an even function, and the first-moment integrand is an odd function. The expected value is then zero.</p> <p>If we denote <span class="math-container">$f(y)=\dfrac{e^{-y}}{(1+e^{-y})^2}$</span>, you can see that <span class="math-container">$$ f(-y)=\frac{e^y}{(1+e^y)^2} = \frac{e^{2y} \cdot e^{-y}}{e^{2y}(e^{-y}+1)^2} = f(y) $$</span></p> <p>and if you interpret the improper integral as <span class="math-container">$$ \int_{\mathbb{R}} y f(y) dy = \lim_{a \to +\infty} \int_{-a}^a y f(y) dy $$</span></p> <p>you see that its value must be zero.</p>
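A stdlib-only midpoint-rule check of both claims (the cutoff $a=20$ and the grid size are arbitrary choices; the tails decay like $ye^{-|y|}$ and are negligible):

```python
import math

def f(y):
    # the logistic density from the question
    return math.exp(-y) / (1 + math.exp(-y))**2

a, n = 20.0, 20000
h = 2 * a / n
# midpoint rule over the symmetric interval [-a, a]
integral = sum((-a + (k + 0.5) * h) * f(-a + (k + 0.5) * h) for k in range(n)) * h

assert abs(integral) < 1e-6            # odd integrand: contributions cancel
assert abs(f(3.7) - f(-3.7)) < 1e-15   # f is even
```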
1,782,558
<p><strong>Problem</strong></p> <p>I have two differential equations</p> <p>$ \frac{dx}{dt} + \frac{dy}{dt} + x + y = 0$</p> <p>$ 2 \frac{dx}{dt} + \frac{dy}{dt} + x = 0 $</p> <p>initial conditions: $y(0) = 1$ and $x(0) = 0$</p> <p><strong>Attempt</strong></p> <p>I've solved the system via the matrix method of setting the determinant to $0$, and I got $w = -1$, a repeated root.</p> <p>I'm not sure why I have to use the extra $t$ factor</p> <p>$x = (At + B) e^{-t}$ </p> <p>$y = (Ct + D) e^{-t} $ </p> <p>to solve these equations. </p> <p>In addition, when I tried using the extra factor with the initial conditions, I just got $0 = A\cdot 0+ B$ and $1= C\cdot 0 + D$, which I can't use to find $C$ and $A$. I found $B$ and $D$ to be $0$ and $1$ respectively.</p>
jdods
212,426
<p>As noted in the other solution, the matrix method is overkill, but since you're interested, here it is.</p> <p>Your matrix system is $\mathbf{x}'=A\mathbf{x}$ where $$A=\left(\begin{matrix} 0 &amp; 1 \\ -1 &amp; -2 \end{matrix}\right)$$</p> <p>The eigenvalue is $-1$ with a multiplicity of $2$. The first eigenvector is $\xi=(-1,1)^T$ but we have to find a <strong>generalized eigenvector</strong> $\eta$ such that $(A-\lambda I)\eta=\xi$. We find that $\eta=(0,-1)^T$ works. </p> <p>The solutions are $\mathbf{x}_1(t)=\mathbf{\xi}e^{-t}$ and $\mathbf{x}_2(t)=\mathbf{\xi}te^{-t}+\mathbf{\eta}e^{-t}$. This is a standard technique.</p> <p>The solution with repeated roots is:</p> <p>$$ \mathbf{x}(t)= c_1\left(\begin{matrix} -1 \\ 1 \end{matrix}\right)e^{-t}+ c_2\left(\begin{matrix} -1 \\ 1 \end{matrix}\right)te^{-t}+ c_2\left(\begin{matrix} 0 \\ -1 \end{matrix}\right)e^{-t} $$</p> <p>Using your initial conditions gives $c_2=-1$ and $c_1=0$.</p> <p><strong>Regarding the extra $t$:</strong> First, the extra $t$ is required because of the repeated root. In order to capture the entire set of possible solutions (to the matrix equation), we need two linearly independent solutions: $x=x_0e^{-t}$ and $y=y_0e^{-t}$ won't work since they are essentially the same (save for different initial values).</p> <p>Here is how I prefer to understand it. Two homogeneous first order equations can always be turned into a single homogeneous second order equation:</p> <p>$$\mathbf{x}'=A\mathbf{x}$$ with $\mathbf{x}=(x,y)^T$ becomes $$x''-\text{tr}(A)x'+\text{det}(A)x=0$$</p> <p>where tr$(A)$ is the trace of matrix $A$ and det$(A)$ is the determinant.</p> <p>The second order equation gives a characteristic equation: $r^2-\text{tr}(A)r+\text{det}(A)=0$ which gives roots $r_1$ and $r_2$. These give two solutions $x_i=e^{r_it}$ for $i=1,2$. Since the equation is second order, we <em>need</em> two linearly independent solutions (there is theory to back this up). 
We construct the general solution: $x=c_1x_1+c_2x_2$ then solve for the $c_i$ coefficients using initial conditions.</p> <p>However, when $r_1=r_2=r$, it just so happens that $x_1=e^{rt}$ and $x_2=te^{rt}$ work. In higher order equations, you simply multiply by powers of $t$ enough times to make sure you get the required number of solutions corresponding to the multiplicity of the root (if $r=1$ has multiplicity $3$, then $x_k=t^{k-1}e^t$ are solutions for $k=1,2,3$).</p>
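Putting $c_1=0$, $c_2=-1$ into the general solution gives $x(t)=te^{-t}$ and $y(t)=(1-t)e^{-t}$. A finite-difference check that these satisfy the original system (step size and test points are arbitrary):

```python
import math

def x(t): return t * math.exp(-t)           # from c1 = 0, c2 = -1
def y(t): return (1 - t) * math.exp(-t)

def deriv(g, t, h=1e-6):                    # central difference
    return (g(t + h) - g(t - h)) / (2 * h)

assert x(0) == 0 and y(0) == 1              # initial conditions
for t in (0.3, 1.0, 2.5):
    # x' + y' + x + y = 0
    assert abs(deriv(x, t) + deriv(y, t) + x(t) + y(t)) < 1e-8
    # 2x' + y' + x = 0
    assert abs(2 * deriv(x, t) + deriv(y, t) + x(t)) < 1e-8
```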
1,801,946
<p>I need to find the equation of the tangent line passing through $(2,3)$ and perpendicular to $3x+4y=8$. I need help with this; please also show me how you got the answer. I will be very thankful.</p>
peter.petrov
116,591
<p>The equation is: $-4x + 3y = a$ </p> <p>By taking $(-4,3)$ (in front of $x$ and $y$) you make the line<br> perpendicular to the given one, which has $(3,4)$: their dot product is $3\cdot(-4)+4\cdot 3=0$. </p> <p>You determine $a$ by putting $(2,3)$ in there.<br> You get: $-4x + 3y = 1$ </p>
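Both facts in the answer can be confirmed with exact arithmetic: the two normal vectors are orthogonal, and the given point satisfies the resulting equation.

```python
n1, n2 = (3, 4), (-4, 3)
assert n1[0] * n2[0] + n1[1] * n2[1] == 0   # dot product vanishes
assert -4 * 2 + 3 * 3 == 1                  # (2, 3) satisfies -4x + 3y = 1
```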
2,300,049
<p>These are my thoughts:</p> <p>$$z^2 = 1 + 2i \Longrightarrow (x+yi)(x+yi) = 1 + 2i$$</p> <p>so: $x^2-y^2 = 1$ and $2xy = 2$.</p> <p>From this I got $x = 1/y$, but I can't continue to find the real and imaginary parts of $z$. Any help is appreciated.</p>
Rodrigo Dias
375,952
<p>In general, we have this</p> <p><strong>Lemma:</strong> If $z=a+ib \in \mathbb{C}$, with $a,b\in\mathbb{R}$, then $$ w = \sqrt{\frac{|z|+a}{2}} + i\epsilon\sqrt{\frac{|z|-a}{2}},$$ where $\epsilon =\pm 1$ according to $b=\epsilon|b|$, satisfies $w^2 = z$.</p> <p><strong>Proof:</strong> Let $w=x+iy$ satisfying $w^2=z$. Then $x^2 - y^2+2xyi =a+bi$. This equation is equivalent to the system $$ x^2 - y^2 = a \text{ ; } 2xy = b.$$ Since $w^2 = z$, we have $x^2+y^2 = |z|$ too, and we can conclude that $$x^2 = \frac{|z|+a}{2} \text{ ; } y^2 = \frac{|z|-a}{2}.$$ Choosing the positive square roots, we can write $$x = \sqrt{\frac{|z|+a}{2}} \text{ ; } y = \epsilon\sqrt{\frac{|z|-a}{2}}$$ satisfying $2xy = b$, i.e., $\epsilon = 1$ if $b &gt; 0$ and $\epsilon = -1$ if $b &lt; 0$. </p>
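The lemma translates directly into code. The helper name `sqrt_by_lemma` is hypothetical; the result is checked against `cmath.sqrt` (the library's principal branch, which this sign convention matches).

```python
import cmath
import math

def sqrt_by_lemma(z):
    """Square root of z via the formula in the lemma above."""
    a, b = z.real, z.imag
    eps = 1 if b >= 0 else -1
    r = abs(z)
    return math.sqrt((r + a) / 2) + 1j * eps * math.sqrt((r - a) / 2)

w = sqrt_by_lemma(1 + 2j)
assert abs(w * w - (1 + 2j)) < 1e-12        # w^2 = z, as in the question
assert abs(w - cmath.sqrt(1 + 2j)) < 1e-12  # agrees with the library branch
```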
4,385,676
<blockquote> <p>Let <span class="math-container">$Y_n$</span> be a sequence of non-negative i.i.d random variables with <span class="math-container">$EY_n = 1$</span> and <span class="math-container">$P(Y_n = 1) &lt; 1$</span>. Consider the martingale process formed by <span class="math-container">$X_n = \prod_{k=1}^n Y_k$</span>. Use the martingale convergence theorem to show that <span class="math-container">$X_n \to 0$</span> almost surely.</p> </blockquote> <p>I see that the Martingale convergence theorem says that <span class="math-container">$X_n \to X$</span> almost surely with <span class="math-container">$E \lvert X \rvert &lt; \infty$</span>.</p> <p>I don't see how to reach the conclusion that <span class="math-container">$X = 0$</span> or <span class="math-container">$X_n \to 0$</span>.</p> <p>I see we can prove that <span class="math-container">$E \lvert X_n \rvert &lt; \infty$</span> and that <span class="math-container">$X_n$</span> is uniformly integrable and <span class="math-container">$X_n \to X$</span> in <span class="math-container">$L^1$</span>. And that <span class="math-container">$X_n = E(X \mid \mathcal{F}_n)$</span>.</p>
Snoop
915,356
<p>Since <span class="math-container">$X_n$</span> is a positive martingale, it is also a supermartingale bounded below by <span class="math-container">$0$</span>, therefore <span class="math-container">$X_n\to X_\infty$</span> a.s. by supermartingale convergence. Now consider that <span class="math-container">$P(|Y_n-1|&gt;\varepsilon \textrm{ i.o.})=1$</span> by Borel-Cantelli II. This implies that <span class="math-container">$X_\infty=0$</span> is the only admissible limit rv so that <span class="math-container">$X_n \to 0$</span> a.s.</p>
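An illustrative simulation: the two-point law for $Y_n$ below is a made-up example satisfying $EY_n=1$ and $P(Y_n=1)<1$. Since $E[\log Y]=\tfrac12\log\tfrac34<0$, the log of the product drifts to $-\infty$, i.e. $X_n\to 0$.

```python
import math
import random

random.seed(0)
# Y takes values 1/2 and 3/2 with equal probability: E[Y] = 1, P(Y = 1) = 0.
n_steps, n_paths = 2000, 200
finals = []
for _ in range(n_paths):
    log_x = 0.0                      # log X_0
    for _ in range(n_steps):
        log_x += math.log(random.choice((0.5, 1.5)))
    finals.append(log_x)

mean_log = sum(finals) / n_paths
# per-step drift is 0.5*log(3/4) ~ -0.144, so after 2000 steps ~ -288
assert mean_log < -100               # X_n = e^{log X_n} is astronomically small
```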
1,809,017
<p>Let $U$ be an open set containing $0$ and $f:U \rightarrow C$ a holomorphic function such that $f(0)=0$ and $f^{'}(0)=2$.Prove that there exists an open neighbourhood $0 \in V \subset U $ and a holomorphic injective function $h:V \rightarrow V$ such that $h(f(z))=2h(z)$. Since I don't have any idea where to start, I'd appreciate a small hint rather then a full solution. Thank you for all your answers.</p>
Doug M
317,162
<p>Should the minus sign be there? Probably. If it is:</p> <p>$x^2 + 4y^2 - (2-z)^2 \le 0$ is a double cone, with an elliptical cross section.</p> <p>$z^2\ge 0$ is a trivial statement.</p> <p>$z^2\le 2$ is the space between 2 planes: $-\sqrt{2} \le z \le \sqrt{2}$, and $-\sqrt{2}, \sqrt{2}$ are both below the vertex of the cone.</p> <p>We have a frustum.</p> <p>lower base:</p> <p>$x^2 + 4y^2 = (2+\sqrt 2)^2\\ A_l = \frac 12 \pi (2+\sqrt 2)^2$</p> <p>upper base:</p> <p>$x^2 + 4y^2 = (2-\sqrt 2)^2\\ A_u = \frac 12 \pi (2-\sqrt 2)^2$</p> <p>$V = \frac 13 (A_l (2+\sqrt 2) - A_u(2-\sqrt 2)) = \frac 16 \pi [(2+\sqrt2)^3 - (2-\sqrt 2)^3]$</p> <p>which is greater than $\frac {4 \pi}{3}$ (it is closer to $\frac {20 \pi}{3}$)</p> <p>If you want to use calculus:</p> <p>$x = r \cos t\\ y = \frac 12 r \sin t\\ z = z\\ dz\,dy\,dx = \frac 12 r \,dz\,dr\,dt$</p> <p>$x^2 + 4y^2 - (2-z)^2 \le 0$ becomes</p> <p>$r^2 \le (2-z)^2\\ r \le 2-z$</p> <p>$\int_0^{2\pi}\int_{-\sqrt{2}}^{\sqrt{2}}\int_0^{2-z} \frac 12 r \,dr\,dz\,dt\\ \int_0^{2\pi}\int_{-\sqrt{2}}^{\sqrt{2}} \frac 14 (2-z)^2 \,dz\,dt\\ \int_0^{2\pi} \left[-\frac 1{12} (2-z)^3\right]_{-\sqrt{2}}^{\sqrt{2}}\,dt\\ \frac 1{12} \int_0^{2\pi} (2+\sqrt{2})^3-(2-\sqrt{2})^3\,dt\\ \frac {\pi}{6} [(2+\sqrt{2})^3-(2-\sqrt{2})^3]$</p> <p>$= \frac{14}{3} \pi\sqrt{2}$</p>
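Both computations can be cross-checked numerically. The closed form equals $\frac{14\sqrt2}{3}\pi$, and a midpoint rule over the elliptical cross-section areas $A(z)=\pi(2-z)^2/2$ reproduces it.

```python
import math

s2 = math.sqrt(2)
closed_form = math.pi / 6 * ((2 + s2)**3 - (2 - s2)**3)
assert abs(closed_form - 14 * s2 * math.pi / 3) < 1e-12

# Cross-sections at height z are ellipses x^2 + 4y^2 <= (2 - z)^2 with
# semi-axes (2 - z) and (2 - z)/2, so A(z) = pi (2 - z)^2 / 2.  Integrate:
n = 20000
h = 2 * s2 / n
numeric = sum(math.pi * (2 - (-s2 + (k + 0.5) * h))**2 / 2
              for k in range(n)) * h
assert abs(numeric - closed_form) < 1e-6
```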
1,345,364
<p>I am struggling with this question: </p> <blockquote> <p>Let $\{a_n\}$ be defined recursively by $a_1=\sqrt2$, $a_{n+1}=\sqrt{2+a_n}$. Find $\lim\limits_{n\to\infty}a_n$. HINT: Let $L=\lim\limits_{n\to\infty}a_n$. Note that $\lim\limits_{n\to\infty}a_{n+1}=\lim\limits_{n\to\infty}a_n$, so $\lim\limits_{n\to\infty}\sqrt{2+a_n}=L$. Using the properties of limits, solve for $L$.</p> </blockquote> <p>I just don't know how I am supposed to find the limit of that or what my first step is. Any help?</p>
Chiranjeev_Kumar
171,345
<p>$$L=\sqrt{2+L}$$ Squaring both sides, we have</p> <p>$$L^2-L-2=0$$</p> <p>$$(L-2)(L+1)=0$$</p> <p>which gives $L=2,-1$.</p> <p>Since $a_n\gt 0$ for all $n\in \Bbb N$, we must have $L=2$.</p>
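Iterating the recursion numerically shows the behavior behind the hint: the sequence increases, stays below $2$, and converges to $2$ (the iteration count is an arbitrary choice).

```python
import math

a, seen = math.sqrt(2), []
for _ in range(20):
    seen.append(a)
    a = math.sqrt(2 + a)

assert all(u < v for u, v in zip(seen, seen[1:]))  # strictly increasing
assert all(u < 2 for u in seen)                    # bounded above by 2
assert abs(a - 2) < 1e-11                          # the limit is L = 2
```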
24,704
<p>It seems that often in using counting arguments to show that a group of a given order cannot be simple, it is shown that the group must have at least <span class="math-container">$n_p(p^n-1)$</span> elements, where <span class="math-container">$n_p$</span> is the number of Sylow <span class="math-container">$p$</span>-subgroups.</p> <blockquote> <p>It is explained that the reason this is the case is because distinct Sylow <span class="math-container">$p$</span>-subgroups intersect only at the identity, which somehow follows from Lagrange's Theorem.</p> </blockquote> <p>I cannot see why this is true. </p> <p>Can anyone quicker than I tell me why? I know it's probably very obvious. </p> <p>Note: This isn't a homework question, so if the answer is obvious I'd really just appreciate knowing why. </p> <p>Thanks!</p>
Plop
2,660
<p>That's because it is not true in general. Look at the Sylow $2$-subgroups of $S_5$: distinct ones can have nontrivial intersection.</p>
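This can be checked by brute force. The sketch below enumerates the $15$ order-$8$ subgroups of $S_5$ by closing pairs of $2$-power-order elements under multiplication (a naive closure helper, not an optimized algorithm), then finds a pair meeting in more than the identity. The counting reason it must happen: $S_5$ has only $55$ non-identity elements of $2$-power order, but trivial intersections would require $15\cdot 7=105$.

```python
from itertools import permutations

e = tuple(range(5))

def compose(p, q):                 # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def order(p):
    q, n = p, 1
    while q != e:
        q, n = compose(q, p), n + 1
    return n

G = list(permutations(range(5)))
cands = [g for g in G if order(g) in (1, 2, 4)]  # 2-power orders in S_5
assert len(cands) == 56            # identity + 55 non-identity elements

def closure(a, b, cap=8):
    # multiplicative closure; in a finite group this is the subgroup <a, b>
    S = {e, a, b}
    while True:
        new = {compose(x, y) for x in S for y in S}
        if len(new) > cap:
            return None            # <a, b> has order > 8: not a 2-Sylow
        if new == S:
            return frozenset(S)
        S = new

sylows = set()
for a in cands:
    for b in cands:
        H = closure(a, b)
        if H is not None and len(H) == 8:
            sylows.add(H)

assert len(sylows) == 15           # the Sylow 2-subgroups of S_5
assert any(len(P & Q) > 1 for P in sylows for Q in sylows if P != Q)
```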
24,704
<p>It seems that often in using counting arguments to show that a group of a given order cannot be simple, it is shown that the group must have at least <span class="math-container">$n_p(p^n-1)$</span> elements, where <span class="math-container">$n_p$</span> is the number of Sylow <span class="math-container">$p$</span>-subgroups.</p> <blockquote> <p>It is explained that the reason this is the case is because distinct Sylow <span class="math-container">$p$</span>-subgroups intersect only at the identity, which somehow follows from Lagrange's Theorem.</p> </blockquote> <p>I cannot see why this is true. </p> <p>Can anyone quicker than I tell me why? I know it's probably very obvious. </p> <p>Note: This isn't a homework question, so if the answer is obvious I'd really just appreciate knowing why. </p> <p>Thanks!</p>
Derek Holt
2,820
<p>In some situations, to prove that groups of order $n$ cannot be simple, you can use the counting argument if all Sylow subgroups have trivial intersection, and a different argument otherwise.</p> <p>For example let $G$ be a simple group of order $n=144 = 16 \times 9$. The number $n_3$ of Sylow 3-subgroups is 1, 4 or 16. If $n_3 = 1$ then there is a normal Sylow subgroup and if $n_3= 4$ then $G$ maps nontrivially to $S_4$, so we must have $n_3 = 16$.</p> <p>If all pairs of Sylow 3-subgroups have trivial intersection, then they contain in total $16 \times 8$ non-identity elements, so the remaining 16 elements must form a unique and hence normal Sylow 2-subgroup of $G$.</p> <p>Otherwise two Sylow 3-subgroups intersect in a subgroup $T$ of order 3. Then the normalizer $N_G(T)$ of $T$ in $G$ contains both of these Sylow 3-subgroups, so by Sylow's theorem it has at least 4 Sylow 3-subgroups, and hence has order at least 36, so $|G:N_G(T)| \le 4$ and $G$ cannot be simple.</p>
4,112,771
<p>This is <strong>Exercise 2.3.4</strong> of Robinson's <em>&quot;A Course in the Theory of Groups (Second Edition)&quot;</em>. My universal algebra is a little rusty, so <a href="https://math.stackexchange.com/q/3590724/104041">this question</a> is not what I'm looking for; besides, I ought to be able to use tools given in Robinson's book.</p> <h2>The Details:</h2> <p>On page 56, <em>ibid.</em>,</p> <blockquote> <p>Let <span class="math-container">$F$</span> be a free group on a countably infinite set <span class="math-container">$\{x_1,x_2,\dots\}$</span> and let <span class="math-container">$W$</span> be a nonempty subset of <span class="math-container">$F$</span>. If <span class="math-container">$w=x_{i_1}^{l_1}\dots x_{i_r}^{l_r}\in W$</span> and <span class="math-container">$g_1,\dots, g_r$</span> are elements of a group <span class="math-container">$G$</span>, we define the <em>value</em> of the word <span class="math-container">$w$</span> at <span class="math-container">$(g_1,\dots,g_r)$</span> to be <span class="math-container">$w(g_1,\dots,g_r)=g_1^{l_1}\dots g_{r}^{l_r}$</span>. 
The subgroup of <span class="math-container">$G$</span> generated by all values in <span class="math-container">$G$</span> of words in <span class="math-container">$W$</span> is called the <em>verbal subgroup</em> of <span class="math-container">$G$</span> determined by <span class="math-container">$W$</span>,</p> <p><span class="math-container">$$W(G)=\langle w(g_1,g_2,\dots) \mid g_i\in G, w\in W\rangle.$$</span></p> </blockquote> <p>On page 57, <em>ibid.</em>,</p> <blockquote> <p>If <span class="math-container">$W$</span> is a set of words in <span class="math-container">$x_1, x_2, \dots$</span> and <span class="math-container">$G$</span> is any group, a normal subgroup <span class="math-container">$N$</span> is said to be <em><span class="math-container">$W$</span>-marginal</em> in <span class="math-container">$G$</span> if</p> <p><span class="math-container">$$w(g_1,\dots, g_{i-1}, g_ia, g_{i+1},\dots, g_r)=w(g_1,\dots, g_{i-1}, g_i, g_{i+1},\dots, g_r)$$</span></p> <p>for all <span class="math-container">$g_i\in G, a\in N$</span> and all <span class="math-container">$w(x_1,x_2,\dots,x_r)$</span> in <span class="math-container">$W$</span>. This is equivalent to the requirement: <span class="math-container">$g_i\equiv h_i \mod N, (1\le i\le r)$</span>, always implies that <span class="math-container">$w(g_1,\dots, g_r)=w(h_1,\dots, h_r)$</span>.</p> <p>[The] <span class="math-container">$W$</span>-marginal subgroups of <span class="math-container">$G$</span> generate a normal subgroup which is also <span class="math-container">$W$</span>-marginal. 
This is called <em>the <span class="math-container">$W$</span>-marginal of <span class="math-container">$G$</span></em> and is written <span class="math-container">$$W^*(G).$$</span></p> </blockquote> <p>On page 58, <em>ibid.</em>,</p> <blockquote> <p>If <span class="math-container">$W$</span> is a set of words in <span class="math-container">$x_1, x_2, \dots $</span>, the class of all groups <span class="math-container">$G$</span> such that <span class="math-container">$W(G)=1$</span>, or equivalently <span class="math-container">$W^*(G)=G$</span>, is called the <em>variety</em> <span class="math-container">$\mathfrak{B}(W)$</span> determined by <span class="math-container">$W$</span>.</p> </blockquote> <h2>The Question:</h2> <blockquote> <p>Prove that every variety is closed with respect to forming subgroups, images, and subcartesian products.</p> </blockquote> <h2>Thoughts:</h2> <p>This must have made sense to me at least two times (although it took some searching to confirm it):</p> <ul> <li><p>When I read Chapter 11 of <em>&quot;A Course in Universal Algebra&quot;</em> by Burris and Sankappanavar several years ago; this is tackled, as I said, <a href="https://math.stackexchange.com/q/3590724/104041">here</a>.</p> </li> <li><p>When I read Chapter 12 of Roman's <em>&quot;Fundamentals of Group Theory: An Advanced Approach&quot;</em> a few months ago. This is how I found out it's part of Birkhoff's Theorem. 
In the proof there, however, is the ever-infuriating &quot;It is clear&quot;; also: images are not covered.<span class="math-container">${}^\dagger$</span></p> </li> </ul> <p>I don't know why I've been stuck on it the last few days, but I have, so here I am.</p> <p>My rough understanding is that the processes of taking of subgroups, images, and subcartesian products mean they each inherit the words in <span class="math-container">$W$</span>, but this &quot;understanding&quot; is nothing more, I fear, than a restatement of the question.</p> <p>The chapter in Robinson's book is on free groups and presentations, so I've looked in many of my combinatorial group theory books, like Magnus <em>et al</em>., but the theorem is nowhere obvious.</p> <p>(I hope this is enough context.)</p> <p>I'm looking for something more than &quot;it is clear&quot; and less than a deep dive into <a href="/questions/tagged/universal-algebra" class="post-tag" title="show questions tagged &#39;universal-algebra&#39;" rel="tag">universal-algebra</a>.</p> <p>Please help :)</p> <hr /> <p><span class="math-container">$\dagger$</span> But quotients are, which, I suppose, is equivalent . . . I don't know.</p>
Sean Clark
1,306
<p>I can't tell if I am misunderstanding the question, so my apologies if the following answer is totally missing the point. It seems like the question is essentially: given a set of words <span class="math-container">$W$</span>, show that for <span class="math-container">$G,H\in \mathfrak B(W)$</span>, any subgroup of <span class="math-container">$G$</span>, homomorphic image of <span class="math-container">$G$</span>, and subcartesian product of members of <span class="math-container">$\mathfrak B(W)$</span> is also in <span class="math-container">$\mathfrak B(W)$</span>. If that's the right understanding, then I believe the first two follow easily from writing things down.</p> <p>Suppose <span class="math-container">$G\in \mathfrak{B}(W)$</span>. If <span class="math-container">$G_1\leq G_2$</span>, it is pretty easy to see that <span class="math-container">$W(G_1)\leq W(G_2)$</span> (as the generators of <span class="math-container">$W(G_1)$</span> are a subset of generators of <span class="math-container">$W(G_2)$</span> by definition), so in particular if <span class="math-container">$H\subset G$</span>, <span class="math-container">$W(H)\leq W(G)=1$</span> thus <span class="math-container">$H\in \mathfrak{B}(W)$</span>.</p> <p>Similarly, if we have <span class="math-container">$\phi:G\rightarrow H$</span>, <span class="math-container">$W(\phi(G))=\phi(W(G))=1$</span> by the definition of a group homomorphism, so <span class="math-container">$\phi(G)\in \mathfrak{B}(W)$</span>.</p> <p>For the subcartesian product, I guess this is easiest to see from the marginal perspective somehow, but it is eluding me at the moment.</p>
3,293,383
<p><span class="math-container">$$ \frac{\ln x}{x^3-1} &lt; \frac{x}{x^3}, \qquad \forall x \in[2,\infty) $$</span></p> <p>This is specifically for an improper integral question, where the integral of the left-hand term needs to be shown convergent or divergent over the interval <span class="math-container">$$ [2,\infty) $$</span></p>
Z Ahmed
671,540
<p>Let <span class="math-container">$$f(x)=\ln x-x+\frac{1}{x^2} \Rightarrow f'(x)=\frac{1}{x}-1-\frac{2}{x^3}&lt;0, ~\mbox{for}~x\ge 1. $$</span> So <span class="math-container">$f(x)$</span> is a decreasing function on <span class="math-container">$[1,\infty)$</span>. This means that <span class="math-container">$f(x) \le f(1)=0$</span>, so <span class="math-container">$f(x) \le 0$</span>. Rearranging this (<span class="math-container">$\ln x \le x - \frac{1}{x^2} = \frac{x^3-1}{x^2}$</span>, then dividing by <span class="math-container">$x^3-1 &gt; 0$</span>) we get <span class="math-container">$$\frac{\ln x}{x^3-1} \le \frac{x}{x^3},~~ x &gt; 1.$$</span></p>
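A quick numerical spot check of both inequalities on a grid (the grid itself is an arbitrary choice):

```python
import math

for k in range(1, 2001):
    x = 1 + k * 0.01                 # grid over (1, 21]
    # f(x) = ln x - x + 1/x^2 <= 0 for x > 1
    assert math.log(x) - x + 1 / x**2 <= 0
    if x >= 2:
        # the target comparison on [2, oo), with a tiny float tolerance
        assert math.log(x) / (x**3 - 1) <= x / x**3 + 1e-15
```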
2,811,155
<p>I learned recently that there are mathematical objects that can be proven to exist, but also that can be proven to be impossible to "construct". For example see this answer on MSE:<br> <a href="https://math.stackexchange.com/questions/2808804/does-the-existence-of-a-mathematical-object-imply-that-it-is-possible-to-constru/2808837#2808837">Does the existence of a mathematical object imply that it is possible to construct the object?</a></p> <p>Now my question is: what does the existence of an object really mean, if it is impossible to "find" it? What does it mean when we say that a mathematical object exists?</p> <hr> <p>Because of the abstract nature of this question, and to make sure I understand myself what I'm asking, here is a more specific example.</p> <p>Say that I have shown that there exists a real number $x$ that satisfies some property. I always assumed that this means that it is possible to find this number $x$ in the set of real numbers. It may be hard to describe this number, but I would assume it is at least possible to construct a number $x$ that is provably a real number satisfying the property.</p> <p>But if that does not have to be the case, if it is impossible to find the number $x$ that satisfies this property, what does this then mean? I just can't get my mind around this. Does that mean that there is some real number out there, but uncatchable somehow by the nature of its existence?</p>
Noah Schweber
28,111
<p>Arguably, your question cannot be answered in a satisfying way (unless you're a formalist).</p> <p>Ultimately most mathematicians don't spend too much time thinking about ontology - a sort of "naive Platonism" may be adopted, although when pressed I think we generally retreat from that stance - but I think the "standard" meaning is simply, "The existence of such an object is provable from the axioms of mathematics," and "the axioms of mathematics" is generally understood as referring to ZFC. So, e.g., when we say "We can prove that an object with property $P$ exists," what we mean is "ZFC proves '$\exists xP(x)$.'" This is a completely formalist approach; in particular, it renders a question like</p> <blockquote> <p>does that mean that there is some real number out there, but uncatchable somehow by the nature of its existence?</p> </blockquote> <p>irrelevant, since there is no "out there" being referred to. It is also completely unambiguous (up to a choice of how we express the relevant mathematical statement in the language of set theory). Of course, if you give me a precise notion of "construct" (or "catch") then the question "Why does (or does?) ZFC prove the existence of a non-constructable <em>(deliberately misspelled for clarity)</em> object?" <em>is</em> something we can address, but now it's not really about the nature of <em>mathematical existence</em> but rather the nature of ZFC as a theory.</p> <p>This response does ultimately just push the question back to why we privilege ZFC (and classical logic), and dodging this question (and any other formalist answer) grates against "realist" sensibilities. At the end of the day, <strong>the nature of existence is a philosophical, rather than mathematical, question</strong>; to my mind one of the main values of formalism is that it provides us with a language for doing mathematics which bridges philosophical differences. E.g. 
a large-cardinal-Platonist, an intuitionist, and an ultrafinitist will all agree with the statement "ZFC proves that there is an undetermined infinite game on $\omega$," regardless of their opinions on the statement "<em>There exists</em> an undetermined infinite game on $\omega$."</p>
458,922
<p>Recently I've stumbled across this claim:</p> <blockquote> <p>Peano axioms can be deduced in ZFC</p> </blockquote> <p>I found a lot of info regarding this claim (e.g. what would (one version of) the natural numbers look like within the universe of sets: $0 = \emptyset$, $n + 1 = n \cup \{n\}$), but not what the deductions (ZFC $\vdash$ PA) actually look like. What do they look like? </p>
hmakholm left over Monica
14,366
<p>Do you mean the original Peano Axioms (with an unrestricted second-order induction axiom), or Peano Arithmetic (known as PA, with a first-order axiom <em>scheme</em>)? Though it's not much different for the purpose of this question.</p> <p>Before you can start deriving the PA axioms, of course, you run into the problem that they are phrased in the language of integer arithmetic, whereas ZFC only proves things in the language of set theory. So you need to decide on some way to translate logical formulas in the language of arithmetic to set-theoretical formulas.</p> <p>It's conventional enough to represent $0$ as $\varnothing$ and $S(t)$ as $t\cup\{t\}$, and also to make sure every quantifier and free variable in the arithmetical formula are restricted to range over $\omega$. Representing addition and multiplication leaves some room for choice, though.</p> <p>If you're working in a not too barebones development of ZFC, it is natural to represent PA addition and multiplication by <em>ordinal addition</em> and <em>ordinal multiplication</em>, which you'll already have investigated as purely ZFC phenomena. Deriving each of the PA axioms <em>except for</em> instances of the induction scheme is then a straightforward boring matter of definition chasing.</p> <p>Each instance of the induction axiom scheme now arises as a special case of a general induction theorem in ZFC:</p> <p>$$ \forall A\subseteq \omega: (\varnothing \in A \land \forall x\in A: x\cup\{x\}\in A)\to A = \omega $$</p> <p>In order to derive an instance of PA induction from this, simply let $A$ be $\{x\in\omega \mid \phi(x) \}$, where $\phi$ is the set-theoretic representation of the arithmetic formula you're inducting over.</p> <p>The induction theorem itself follows from the Axiom of Infinity. 
The details are a bit sensitive to the exact phrasing of the Axiom of Infinity you're working with, but typically the Axiom of Infinity will state that there exists at least one set $X$ such that $$\varnothing\in X ~\land~ \forall y\in X:y\cup\{y\}\in X$$ and $\omega$ is then defined as the intersection of all $X$ satisfying this condition. Notice that the condition is the same as the hypothesis about $A$ in the induction theorem. So if we have an $A\subseteq \omega$ that satisfies the hypothesis, then <em>by definition</em> $\omega$ is the intersection of a class of sets that includes $A$. In particular we must then have $\omega\subseteq A$, and since $A\subseteq \omega$ was assumed, we conclude $A=\omega$ as desired.</p> <p>Exactly how the above English proof becomes a fully formal proof in ZFC depends a lot on the precise formalizations of ZFC and first-order logic you're working with, but in ways that are not really specific to the application to PA.</p>
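The encoding $0=\varnothing$, $S(t)=t\cup\{t\}$ can be played with directly. A purely illustrative sketch using Python frozensets as hereditarily finite sets:

```python
def succ(n):                        # S(t) = t ∪ {t}
    return n | frozenset([n])

zero = frozenset()                  # 0 = ∅
one = succ(zero)
two = succ(one)
three = succ(two)

assert len(three) == 3              # the numeral n has exactly n elements
assert zero in three and one in three and two in three
assert one in two and two in three  # membership acts as "less than"
assert two < three                  # and coincides with proper inclusion
```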
3,808,077
<p>I'm trying to show that <span class="math-container">$P(A\triangle B)=P(A)+P(B)–2P(A\cap B)$</span>, knowing that <span class="math-container">$A\triangle B=(A\cap B^{c})\cup(A^{c} \cap B)$</span>.</p> <p>So, what I did was this:</p> <p><span class="math-container">\begin{equation*} \begin{aligned} P(A\triangle B)&amp;=P(A)+P(B)–2P(A\cap B)\\ P((A\cap B^{c})\cup(A^{c} \cap B))&amp;= P(A)+P(B)–2P(A\cap B)\\ P(A\cap B^{c})+P(A^{c}\cap B)-P((A\cap B^{c})\cap(A^{c}\cap B))&amp;=P(A)+P(B)–2P(A\cap B)\\ \end{aligned} \end{equation*}</span></p> <p>And the truth is, I got stuck there. I thought I'd solve it assuming that they were independent events, but I don't know if I'm doing it right. I would appreciate your help; thank you in advance.</p>
CSch of x
566,601
<p>Using the fact that: <span class="math-container">$$P(K \cup R) = P(K) + P(R) - P( K \cap R)$$</span></p> <p>and substituting : <span class="math-container">$K = A \cap B^c$</span> and <span class="math-container">$R = A^c \cap B$</span> we get:</p> <p><span class="math-container">$$P ( A \triangle B) = P(A \cap B^c) + P(A^c \cap B) - P((A \cap B^c) \cap (A^c \cap B)) $$</span></p> <p><span class="math-container">$$= P (A \setminus B) + P(B \setminus A) - P( \emptyset) = P(A) - P(A \cap B) + P(B) - P(A \cap B) = \\ P(A) + P(B) - 2P(A \cap B)$$</span></p> <p>Recall that <span class="math-container">$P(A \setminus B) = P(A) - P(A \cap B)$</span> ( I will leave the proof to you... for more information see: <a href="https://math.stackexchange.com/questions/2063036/proof-of-probability-of-set-difference">Proof of Probability of Set Difference</a>)</p>
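The identity can also be checked exactly on a small finite probability space. The space and events below are arbitrary choices (a uniform space of 12 points, with $A$, $B$ the multiples of 2 and 3):

```python
from fractions import Fraction

omega = range(12)                      # uniform probability space
A = {w for w in omega if w % 2 == 0}   # multiples of 2
B = {w for w in omega if w % 3 == 0}   # multiples of 3

def P(E):
    return Fraction(len(E), len(omega))

lhs = P(A ^ B)                         # ^ is set symmetric difference
rhs = P(A) + P(B) - 2 * P(A & B)
assert lhs == rhs == Fraction(1, 2)    # 6 of the 12 points lie in A Δ B
```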
289,708
<p>The <a href="https://en.wikipedia.org/wiki/Catalan_number" rel="noreferrer">Catalan numbers</a> <span class="math-container">$C_n$</span> count both </p> <ol> <li>the Dyck paths of length <span class="math-container">$2n$</span>, and </li> <li>the ways to associate <span class="math-container">$n$</span> repeated applications of a binary operation. </li> </ol> <p>We call the latter <em>magma expressions</em>; we will explain below.</p> <p><strong>Dyck paths, and their lattice structure</strong></p> <p>A <em>Dyck path of length <span class="math-container">$2n$</span></em> is a sequence of <span class="math-container">$n$</span> up-and-right strokes and <span class="math-container">$n$</span> down-and-right strokes, all having equal length, such that the sequence begins and ends on the same horizontal line and never passes below it. A picture of the five length-6 Dyck paths is shown here:</p> <pre><code>A: B: C: D: E: /\ / \ /\/\ /\ /\ / \ / \ / \/\ /\/ \ /\/\/\ </code></pre> <p>There is an order relation on the set of length-<span class="math-container">$2n$</span> Dyck paths: <span class="math-container">$P\leq Q$</span> if <span class="math-container">$P$</span> fits completely under <span class="math-container">$Q$</span>; I'll call it the <em>height order</em>, though in the title of the post, I called it "Dyck order". I've been told it should be called the Stanley lattice order. 
For <span class="math-container">$n=3$</span> it gives the following lattice:</p> <pre><code> A | B / \ C D \ / E </code></pre> <p>For any <span class="math-container">$n$</span>, one obtains a poset structure on the set of length-<span class="math-container">$2n$</span> Dyck paths using height order, and in fact this poset is always a Heyting algebra (it represents the subobject classifier for the topos of presheaves on the twisted arrow category of <span class="math-container">$\mathbb{N}$</span>, the free monoid on one generator; see <a href="https://mathoverflow.net/questions/272394/reference-request-heyting-algebra-structure-on-catalan-numbers">this mathoverflow question</a>).</p> <p><strong>Magma expressions and the "exponential evaluation order"</strong></p> <p>A set with a binary operation, say •, is called a <a href="https://ncatlab.org/nlab/show/magma" rel="noreferrer">magma</a>. By a <em>magma expression of length <span class="math-container">$n$</span></em>, we mean a way to associate <span class="math-container">$n$</span> repeated applications of the operation. Here are the five magma expressions of length 3:</p> <pre><code>A: B: C: D: E: a•(b•(c•d)) a•((b•c)•d) (a•b)•(c•d) (a•(b•c))•d ((a•b)•c)•d </code></pre> <p>It is well-known that the set of length-<span class="math-container">$n$</span> magma expressions has the same cardinality as the set of length-<span class="math-container">$2n$</span> Dyck paths: they are representations of the <span class="math-container">$n$</span>th Catalan number.</p> <p>An <a href="http://www.labri.fr/perso/courcell/Textes/BC-Raoult%281980%29.pdf" rel="noreferrer">ordered magma</a> is a magma whose underlying set is equipped with a partial order, and whose operation preserves the order in both variables. 
Given an ordered magma <span class="math-container">$(A,$</span>•<span class="math-container">$,\leq)$</span>, and magma expressions <span class="math-container">$E(a_1,\ldots,a_n)$</span> and <span class="math-container">$F(a_1,\ldots,a_n)$</span>, write <span class="math-container">$E\leq F$</span> if the inequality holds for every choice of <span class="math-container">$a_1,\ldots,a_n\in A$</span>. Call this the <em>evaluation order</em>.</p> <p>Let <span class="math-container">$P=\mathbb{N}_{\geq 2}$</span> be the set of natural numbers with cardinality at least 2, the <em>logarithmically positive</em> natural numbers. Equipped with the operation given by exponentiation, <span class="math-container">$c$</span>•<span class="math-container">$d\:=c^d$</span>, we obtain an ordered magma, using the usual <span class="math-container">$\leq$</span>-order. Indeed, if <span class="math-container">$2\leq a\leq b$</span> and <span class="math-container">$2\leq c\leq d$</span> then <span class="math-container">$a^c\leq b^d$</span>.</p> <p><strong>Question:</strong> Is the exponential evaluation order on length-<span class="math-container">$n$</span> expressions in the ordered magma <span class="math-container">$(P,$</span>^<span class="math-container">$,\leq)$</span> isomorphic to the height order on length-<span class="math-container">$2n$</span> Dyck paths?</p> <p>I know of no <em>a priori</em> reason to think the answer to the above question should be affirmative. A categorical approach might be to think of the elements of <span class="math-container">$P$</span> as sets with two special elements, and use them to define injective functions between Hom-sets, e.g. a map <span class="math-container">$$\mathsf{Hom}(c,\mathsf{Hom}(b,a))\to\mathsf{Hom}(\mathsf{Hom}(c,b),a).$$</span> However, while I can define the above map, I'm not sure how to generalize it. 
And the converse, that being comparable in the exponential evaluation order means that one can define a single injective map between hom-sets, is not obvious to me at all.</p> <p>However, despite the fact that I don't know where to look for a proof, I do have evidence to present in favor of an affirmative answer to the above question.</p> <p><strong>Evidence that the orders agree</strong></p> <p>It is easy to check that for <span class="math-container">$n=3$</span>, these two orders do agree:</p> <pre><code> a^(b^(c^d)) A := A(a,b,c,d) | | a^((b^c)^d) B / \ / \ (a^b)^(c^d) (a^(b^c))^d C D \ / \ / ((a^b)^c)^d E </code></pre> <p>This can be seen by taking logs of each expression. (To see that C and D are incomparable: use a=b=c=2 and d=large to obtain C>D; and use a=b=d=2 and c=large to obtain D>C.) Thus the evaluation order on length-3 expressions in <span class="math-container">$(P,$</span>^<span class="math-container">$,\leq)$</span> agrees with the height order on length <span class="math-container">$6$</span> Dyck paths.</p> <p>(Note that the answer to the question would be negative if we were to use <span class="math-container">$\mathbb{N}$</span> or <span class="math-container">$\mathbb{N}_{\geq 1}$</span> rather than <span class="math-container">$P=\mathbb{N}_{\geq2}$</span> as in the stated question. 
Indeed, with <span class="math-container">$a=c=d=2$</span> and <span class="math-container">$b=1$</span>, we would have <span class="math-container">$A(a,b,c,d)=2\leq 16=E(a,b,c,d)$</span>.)</p> <p>It is even easier to see that the orders agree in the case of <span class="math-container">$n=0,1$</span>, each of which has only one element, and the case of <span class="math-container">$n=2$</span>, where the order <span class="math-container">$(a^b)^c\leq a^{(b^c)}$</span> not-too-surprisingly matches that of length-4 Dyck paths:</p> <pre><code> /\ /\/\ ≤ / \ </code></pre> <p>Indeed, the order-isomorphism for <span class="math-container">$n=2$</span> is not too surprising because there are only two possible partial orders on a set with two elements. However, according to <a href="https://oeis.org/A000112" rel="noreferrer">the OEIS</a>, there are 1338193159771 different partial orders on a set with <span class="math-container">$C_4=14$</span> elements. So it would certainly be surprising if the evaluation order for length-4 expressions in <span class="math-container">$(P,$</span>^<span class="math-container">$,\leq)$</span> were to match the height order for length-8 Dyck paths. But after some tedious calculations, I have convinced myself that these two orders in fact <em>do agree</em> for <span class="math-container">$n=4$</span>! Of course, this could just be a coincidence, but it is certainly a striking one.</p> <p><strong>Thoughts?</strong></p>
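<p>The $n=3$ comparisons above can be automated. Since all five expressions share the base $a$, taking $\log_a$ reduces each comparison to an exact integer comparison of exponents. The following Python sketch (a sanity check on a small grid, not a proof; the helper <code>exps</code> and the grid bounds are ad hoc choices) verifies the claimed lattice:</p>

```python
from itertools import product

# log_a of each expression: A = a^(b^(c^d)), B = a^(b^(cd)), C = a^(b*c^d),
# D = a^(b^c * d), E = a^(bcd).  Comparing expressions with a common base
# a >= 2 is the same as comparing these integer exponents exactly.
def exps(b, c, d):
    return {"A": b**(c**d), "B": b**(c*d), "C": b * c**d,
            "D": b**c * d, "E": b * c * d}

for b, c, d in product(range(2, 6), repeat=3):
    e = exps(b, c, d)
    assert e["A"] >= e["B"] >= e["C"] >= e["E"]   # the chain A >= B >= C >= E
    assert e["B"] >= e["D"] >= e["E"]             # and A >= B >= D >= E

# C and D are incomparable: each beats the other for suitable values.
assert exps(2, 2, 5)["C"] > exps(2, 2, 5)["D"]    # b=c=2, d large: C > D
assert exps(2, 5, 2)["D"] > exps(2, 5, 2)["C"]    # b=d=2, c large: D > C
```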
Timothy Chow
3,106
<p><b>EDIT:</b> I can complete half of the proof, showing that the magma order refines the Dyck order.</p> <p><hr> Following Martin Rubey's comment, there is a standard bijection between association orders and Dyck paths that uses <a href="https://en.wikipedia.org/wiki/Reverse_Polish_notation" rel="noreferrer">reverse Polish notation</a> (RPN). For $n=3$, the five association orders, when written in RPN, are</p> <pre> a b c d ^ ^ ^ a b c ^ d ^ ^ a b ^ c d ^ ^ a b c ^ ^ d ^ a b ^ c ^ d ^ </pre> <p>If we ignore the initial <code>a</code> and interpret letters as up strokes and carets as down strokes then we get Dyck paths. The Dyck order is generated by the operation "replace <code>x ^</code> with <code>^ x</code>" (where <code>x</code> is any letter). So proving your claim reduces to showing that</p> <ol> <li><p>if you replace <code>x ^</code> with <code>^ x</code> then the value of the entire expression decreases, for all choices of values (from $\mathbb{N}_{\ge2}$) of the variables; and</p></li> <li><p>if you have a pair of RPN expressions such that you cannot get from one to the other by a sequence of such replacements, then you can get either expression to be larger than the other by suitably choosing values (from $\mathbb{N}_{\ge2}$) for the variables.</p></li> </ol> <p>To prove part 1, note first that in a fixed RPN expression, weakly increasing the value of any variable causes the overall value to weakly increase, by the ordered magma property.</p> <p>Now consider two valid RPN expressions $\alpha$ and $\beta$ that differ only in that at one point, $\alpha$ has <code>x ^</code> while $\beta$ has <code>^ x</code>. Just after completing this part of the calculation, stack $\alpha$ will have $A,B^C$ on top while stack $\beta$ will have $A^B,C$ on top, for some $A$, $B$, and $C$ in $\mathbb{N}_{\ge2}$. 
If we continue the calculation until just before the first caret that affects $A$ in stack $\alpha$ (equivalently, until the first caret that affects $A^B$ in stack $\beta$), then the top of stack $\alpha$ will look like $A, B^{CD}$ (followed by a caret) while the top of stack $\beta$ will look like $A^B, C^D$ (followed by a caret) for some $D$ (possibly equal to 1, in the case where said caret shows up immediately). Applying the caret then yields $A^{B^{CD}}$ on stack $\alpha$ and $A^{BC^D}$ on stack $\beta$. But $B^{CD} = (B^C)^D \ge (BC)^D \ge (B^{1/D}C)^D = BC^D$ for all $B, C \in\mathbb{N}_{\ge2}$ and $D\ge1$. So the value on stack $\alpha$ at this stage is $\ge$ the value on stack $\beta$. Since the remainder of the computation is the same for both stacks, the eventual value of $\alpha$ will be $\ge$ the eventual value of $\beta$.</p> <p>It seems very likely to me that we can prove part 2 by finding a place $P$ where Dyck path 1 exceeds Dyck path 2 and another place $Q$ where Dyck path 2 exceeds Dyck path 1, and inserting an extremely large number at one of these points to force whichever expression we want to be larger. But I haven't quite figured out how to say this rigorously.</p>
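<p>The inequality chain $B^{CD}=(B^C)^D\ge(BC)^D\ge BC^D$ used in part 1 can be spot-checked on an integer grid (a quick sanity check, not a substitute for the argument):</p>

```python
from itertools import product

# B^(CD) = (B^C)^D >= (BC)^D >= B*C^D for B, C >= 2 and D >= 1.
for B, C, D in product(range(2, 9), range(2, 9), range(1, 6)):
    assert (B**C)**D >= (B*C)**D >= B * C**D
    assert B**(C*D) == (B**C)**D              # the exponent identity used
```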
1,120,816
<p>$$ f(x) = \begin{cases} x^{-1} &amp; \text{for $x&lt;-1$} \\ ax+b &amp; \text{for $-1\le x\le \frac 12$} \\ x^{-1} &amp; \text{for $x&gt;\frac 12$} \\ \end{cases}$$</p> <p>I don't understand how I am supposed to find the value of the constants. It seems as if there is not enough information to determine that. I did a problem in which it had only one constant, $c$ and I was easily able to determine the value of it by setting both pieces of the function equal to each other and evaluating them at the $x$ values. How would I go about doing this here?</p>
Tim Raczkowski
192,581
<p>$\lim_{x\to-1^-}f(x)=-1$, so we need $ax+b=-1$ at $x=-1$; hence $b-a=-1$. On the other hand, $\lim_{x\to1/2^+}f(x)=2$, so $\frac12a+b=2$. Solving these two linear equations gives $a=2$ and $b=1$.</p>
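<p>An exact check of the two continuity conditions with rational arithmetic (a small sketch):</p>

```python
from fractions import Fraction as F

# b - a = -1 and a/2 + b = 2.  Subtracting the first from the second:
# a/2 + a = 3, so a = 3/(3/2) = 2 and then b = a - 1 = 1.
a = F(3) / F(3, 2)
b = a - 1
assert (a, b) == (2, 1)

# Continuity at both break points: the middle piece matches 1/x there.
assert a * F(-1) + b == F(1) / F(-1)      # value -1 at x = -1
assert a * F(1, 2) + b == F(1) / F(1, 2)  # value  2 at x = 1/2
```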
2,533,834
<p>For a complex number $z$, I came across a statement that $\ln(e^{z})$ is not always equal to $z$. Why is this true?</p> <p>Thanks for the help.</p>
blat
382,972
<p>Let $C \in \Bbb R$. Choose $n \ge e^C $. Then we have (note that $\ln$ is increasing)</p> <p>$$\ln(n) \ge \ln(e^C) = C.$$</p> <p>That means $\ln$ is not bounded above, and hence it diverges to $+\infty$.</p>
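<p>A quick numerical illustration of the same argument (for any bound $C$, the witness $n=\lceil e^C\rceil$ already has $\ln(n)\ge C$):</p>

```python
import math

# ln is increasing, so n >= e^C forces ln(n) >= ln(e^C) = C.
for C in [1.0, 5.0, 10.0]:
    n = math.ceil(math.e ** C)
    assert math.log(n) >= C
```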
195,790
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/19796/name-of-this-identity-int-e-alpha-x-cos-beta-x-space-dx-frace-al">Name of this identity? $\int e^{\alpha x}\cos(\beta x) \space dx = \frac{e^{\alpha x} (\alpha \cos(\beta x)+\beta \sin(\beta x))}{\alpha^2+\beta^2}$</a> </p> </blockquote> <p>I might have missed a technique from Calc 2, but this integral is holding me up. When I checked with WolframAlpha, it used a formula I didn't recognise.</p> <blockquote> <p>How do I solve $\int e^{-t/2}\sin(3t) dt$?</p> </blockquote> <p>The formula WolframAlpha uses is this:</p> <p>$$\int e^{\alpha t}\sin(\beta t)dt=\frac{e^{\alpha t}(-\beta \cos(\beta t)+\alpha \sin(\beta t))}{\alpha ^2+\beta ^2}$$</p> <p>I don't know where this formula comes from.</p>
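<p>One can at least gain confidence in the formula by differentiating its right-hand side numerically and comparing with the integrand; a quick Python check with this problem's $\alpha=-1/2$, $\beta=3$ (the step size and tolerance are ad hoc):</p>

```python
import math

alpha, beta = -0.5, 3.0   # the values in this problem

def F(t):  # the claimed antiderivative (without the arbitrary constant)
    return math.exp(alpha * t) * (-beta * math.cos(beta * t)
                                  + alpha * math.sin(beta * t)) / (alpha**2 + beta**2)

def f(t):  # the integrand e^(-t/2) sin(3t)
    return math.exp(alpha * t) * math.sin(beta * t)

h = 1e-6
for t in [-1.0, 0.0, 0.7, 2.0]:
    deriv = (F(t + h) - F(t - h)) / (2 * h)   # central-difference F'(t)
    assert abs(deriv - f(t)) < 1e-6           # F' agrees with f
```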
Cameron Buie
28,900
<p>The answers above are good. So far, in each of them, and in your question itself, the difference quotient is taken to be $$\frac{f(x+h)-f(x)}h,\quad h\neq 0.\tag{1}$$ (The $h\neq 0$ part is important, since the expression is meaningless if $h=0$.) Now, there's no real problem with using this--and in fact, it is equivalent to the alternative method I'll use here. (I'll explain the equivalence after I show you my alternative approach, though it'll hopefully be clear how they're connected after some brief inspection.) In this instance (and indeed, I'd say generally, when dealing with polynomials and "simple" rational functions), the method I'll demonstrate has the advantage of not having to go through the tedious process of computing binomial expansions, so long as you bear one particular trick in mind.</p> <hr> <p>Your original question is about how to find the limit of the difference quotient as $x\to x_0$, and I suspect that some of your confusion stems from the fact that the $h$ term doesn't go away when you take $x\to x_0$. You <em>could</em> replace $x$ by $x_0$ in $(1)$ to get $$\frac{f(x_0+h)-f(x_0)}h,\quad h\neq 0,\tag{$1'$}$$ then proceed as described in the other answers to take care of the $h$ factor in the denominator. I will use instead the difference quotient $$\frac{f(x)-f(x_0)}{x-x_0},\quad x\neq x_0.\tag{2}$$ (Again, it's important that $x\neq x_0$.)</p> <p>Well, the only real trick we have to remember here is that for any $a,b$, we have $a^2-b^2=(a+b)(a-b)$--the factoring formula for a difference of squares. 
Now, the difference quotient $(2)$ becomes $$\frac1{x-x_0}\cdot\left(\frac2{x^2}-\frac2{x_0^2}\right)=\frac1{x-x_0}\cdot\left(\frac{2x_0^2}{x_0^2x^2}-\frac{2x^2}{x_0^2x^2}\right)=\frac{2(x_0^2-x^2)}{x_0^2x^2(x-x_0)}=\frac{-2(x^2-x_0^2)}{x_0^2x^2(x-x_0)},$$ so by the difference of squares formula, $$\frac{f(x)-f(x_0)}{x-x_0}=\frac{-2(x+x_0)(x-x_0)}{x_0^2x^2(x-x_0)}=\frac{-2(x+x_0)}{x_0^2x^2}.\tag{3}$$ Now, $(3)$ holds true whenever $x_0,x,x-x_0\neq 0$--that is, whenever $x,x_0$ are distinct non-$0$ numbers. This is quite natural, since $0$ isn't in the domain of $f$--that is, $f(0)$ is not defined, so we can't let $x=0$ or $x_0=0$ in $(2)$ without it losing its meaning--and we've already restricted $x$ to values distinct from $x_0$ in the very definition of the difference quotient $(2)$. At this point, we can easily take the limit as $x\to x_0$, since we no longer have to worry about a zero denominator. (Hover over the white space below if you'd like to check your work against mine.)</p> <blockquote class="spoiler"> <p> For any (fixed) non-$0$ $x_0$, we have by $(3)$ that $$\begin{equation*}\lim\limits_{x\to x_0}\cfrac{f(x)-f(x_0)}{x-x_0}=\lim\limits_{x\to x_0}\cfrac{-2(x+x_0)}{x_0^2x^2}=\cfrac{-2(x_0+x_0)}{x_0^2x_0^2}=\cfrac{-4x_0}{x_0^4}=\cfrac{-4}{x_0^3}.\end{equation*}$$</p> </blockquote> <p>The answer achieved here is the same as by the other method. In case you've not already observed it, the trick for getting from $(1')$ to $(2)$ is to define $x:=x_0+h$, so $x\to x_0$ precisely as $h\to 0$, and $h\neq 0$ precisely when $x\neq x_0$. A similar trick will get us from $(2)$ to $(1')$. Thus, the methods are equivalent.</p> <hr> <p><strong>The Trick</strong>: Simply knowing the difference of squares formula won't be enough, usually, but the idea will be the same. First, let me introduce some abbreviating notation that you may or may not be familiar with. 
Given any two integers $k,n$ with $k\leq n$, and any function $g(x)$ defined on the integers from $k$ to $n$ (inclusive), we define $$\sum_{j=k}^ng(j)$$ to be the sum of $g(k), g(k+1),...,g(n)$--that is, the sum of all $g(j)$ with $j$ an integer ranging from $k$ to $n$, inclusive.</p> <p>I claim that for any $a,b$ and any nonnegative integer $n$, we have $$a^{n+1}-b^{n+1}=(a-b)\sum_{j=0}^na^{n-j}b^j.\tag{#}$$ Let's look at some examples to see how that $\Sigma$ expression expands out: </p> <p>$$\underline{n=0}:\quad \sum_{j=0}^0a^{0-j}b^j=1\;\;(=a^{0-0}b^0)$$</p> <p>$$\underline{n=1}:\quad \sum_{j=0}^1a^{1-j}b^j=a+b\;\;(=a^{1-0}b^0+a^{1-1}b^1)$$</p> <p>$$\underline{n=2}:\quad \sum_{j=0}^2a^{2-j}b^j=a^2+ab+b^2\;\;(=a^{2-0}b^0+a^{2-1}b^1+a^{2-2}b^2)$$</p> <p>$$\underline{n=3}:\quad \sum_{j=0}^3a^{3-j}b^j=a^3+a^2b+ab^2+b^3\;\;(=a^{3-0}b^0+a^{3-1}b^1+a^{3-2}b^2+a^{3-3}b^3)$$</p> <p>(Seeing the pattern, here? Start with $a^n$, then keep ticking the $a$'s exponent down by $1$ and the $b$'s exponent up by $1$ until you get to $b^n$.) It's easy to see that the $n=1$ case, in particular, simply gives us the difference of squares formula--which we already know works, and shows that we'll really be doing the same sort of thing even when we're dealing with integer powers greater than $2$. Now, let's see what happens in the general case when we expand the right-hand side of $(\#)$--I promise this is the only polynomial expansion we'll do: $$(a-b)\sum_{j=0}^na^{n-j}b^j=\sum_{j=0}^n(a-b)a^{n-j}b^j=\sum_{j=0}^n\left(a^{n+1-j}b^j-a^{n-j}b^{j+1}\right)$$ Now, if we rewrite the expression at the far right in the longer form, we get $$\left(a^{n+1}-a^nb\right)+\left(a^nb-a^{n-1}b^2\right)+\cdots+\left(a^2b^{n-1}-ab^n\right)+\left(ab^n-b^{n+1}\right).$$ In all (except the final) parenthesized pair, the second entry cancels with the first entry of the next pair. 
This gets rid of any pairs that aren't on the far left or far right, gets rid of the second entry on the far left pair, and gets rid of the first entry on the far right pair, leaving us with only $$a^{n+1}-b^{n+1},$$ as desired.</p> <hr> <p><strong>The General Case</strong>: We can easily use the trick to simplify the difference quotient of any polynomial or "simple" rational functions. By "simple" rational function, I mean a function having form $$\frac{\alpha}{(x-\beta)^m},\quad x\neq \beta$$ for some integer $m\geq 1$ and some constants $\alpha,\beta$--in your example, we had $\alpha=2$, $\beta=0$, $m=2$. If the denominator is less nice, or if the numerator is a non-constant polynomial, we may end up having to expand some polynomial multiplication and end up doing at least as much work as if we'd just used the other method (with the $h$) in the first place.</p> <p>I'll go ahead and apply it to an arbitrary simple rational function $f(x)=\frac{\alpha}{(x-\beta)^m},\: x\neq \beta$. The $m=1$ case is easy, so let's assume $m=n+1$ for some integer $n\geq 1$. 
Then the difference quotient is simplified using the trick, in a similar fashion to the work we did above, noting that $(x-\beta)-(x_0-\beta)=x-x_0$: $$\frac{f(x)-f(x_0)}{x-x_0}=\frac\alpha{(x-\beta)^{n+1}(x_0-\beta)^{n+1}(x-x_0)}\left((x_0-\beta)^{n+1}-(x-\beta)^{n+1}\right)=\frac{-\alpha}{(x-\beta)^{n+1}(x_0-\beta)^{n+1}}\sum_{j=0}^n(x_0-\beta)^{n-j}(x-\beta)^j,$$ and so $$\lim_{x\to x_0}\frac{f(x)-f(x_0)}{x-x_0}=\frac{-\alpha}{(x_0-\beta)^{2n+2}}\sum_{j=0}^n(x_0-\beta)^n.$$ (Hover over the white space below for the punchline.)</p> <blockquote class="spoiler"> <p> Now, we're adding up $n+1$ "$(x_0-\beta)^n$" terms in that $\Sigma$ expression (don't forget the $0$th term!), so (remembering that $m=n+1$), we get $$\begin{equation*}\lim\limits_{x\to x_0}\cfrac{f(x)-f(x_0)}{x-x_0}=\cfrac{-(n+1)\alpha (x_0-\beta)^n}{(x_0-\beta)^{2n+2}}=\cfrac{-(n+1)\alpha}{(x_0-\beta)^{n+2}}=\cfrac{-m\alpha}{(x_0-\beta)^{m+1}}.\end{equation*}$$</p> </blockquote> <p>If you're dealing with a polynomial, instead, there are only a few differences in the approach and application. For one, you won't need to combine the expressions over a common denominator, and so that "$-$" doesn't show up. For another, it's probably easier to deal with each monomial <em>individually</em>, and then put it all together--that is, if (for example) $f(x)=\pi x^9-\sqrt{21}x^7+2x^2$, then we find (using the trick as above) that the respective limits (as $x\to x_0$) of difference quotients of $\pi x^9$, $-\sqrt{21}x^7$, $2x^2$ are $9\pi x_0^8$, $-7\sqrt{21}x_0^6$, $4x_0$. 
Then we'll just add them up to get $$\lim_{x\to x_0}\frac{f(x)-f(x_0)}{x-x_0}=9\pi x_0^8-7\sqrt{21}x_0^6+4x_0.$$</p> <p>We <strong>can</strong> still apply this trick when we're dealing with other rational functions--for example, partial fraction expansions put even the messiest rational function into the form of a sum of only polynomials and simple rational functions, which we'd then deal with individually and add up (in practice, partial fraction decompositions can be obnoxious to find)--but those are a bear to deal with, and I doubt you'll be asked to do difference quotient limit evaluations for those.</p>
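<p>As a closing sanity check, the simplification $(3)$ and the resulting derivative $-4/x_0^3$ can be verified with exact rational arithmetic (a sketch; the sample points are arbitrary nonzero rationals):</p>

```python
from fractions import Fraction as Fr

def f(x):
    return 2 / x**2

# The raw difference quotient equals the simplified form (3) exactly.
for x0, x in [(Fr(1), Fr(3)), (Fr(-2), Fr(5, 2)), (Fr(7, 3), Fr(1, 2))]:
    lhs = (f(x) - f(x0)) / (x - x0)
    rhs = -2 * (x + x0) / (x0**2 * x**2)
    assert lhs == rhs

# Letting x -> x0 in (3) gives the derivative -4/x0^3.
x0 = Fr(3)
assert -2 * (x0 + x0) / (x0**2 * x0**2) == Fr(-4) / x0**3
```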
4,099,804
<p>I need to characterize every finitely generated abelian group G that has the following property: <span class="math-container">$$\frac{G}{S} \text{ is cyclic for every } \lbrace0\rbrace \lneq S\leq G$$</span> Given the problems before this one, I believe I am supposed to use the structure theorem figure out the underlying structure of the decomposition (for example that the decomposition has only one or two primes and such). I know the question in <a href="https://math.stackexchange.com/questions/4097319/f-g-abelian-group-so-that-every-quotient-is-cyclic">this post</a> is very similar, but it is not precisely the same and the slight difference in the conditions on the subgroups makes the argument non-applicable, unfortunately.</p>
Community
-1
<p>Let <span class="math-container">$G=\langle x_1,\dots, x_n\rangle $</span> with <span class="math-container">$x_1\neq 0$</span>. By hypothesis <span class="math-container">$G/\langle x_1\rangle $</span> is cyclic, say generated by the coset of some <span class="math-container">$y\in G$</span>. It follows that <span class="math-container">$G=\langle x_1,y\rangle $</span> is generated by two elements.</p> <p>More can be said: it's easy to see that, in case <span class="math-container">$G $</span> is not cyclic, we have <span class="math-container">$G\cong\mathbb Z_p×\mathbb Z_p $</span>, for <span class="math-container">$p $</span> prime.</p>
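<p>The second claim can be brute-forced for small cases: the sketch below models <span class="math-container">$\mathbb Z_m\times\mathbb Z_n$</span>, enumerates its subgroups (each is generated by at most two elements), and tests whether every quotient by a nonzero subgroup is cyclic. The helper names are illustrative:</p>

```python
from itertools import product

def quotients_all_cyclic(m, n):
    """Brute force: is (Z_m x Z_n)/S cyclic for every nonzero subgroup S?"""
    G = list(product(range(m), range(n)))
    add = lambda u, v: ((u[0] + v[0]) % m, (u[1] + v[1]) % n)

    def subgroup(gens):                     # closure of gens under addition
        S = {(0, 0)}
        changed = True
        while changed:
            changed = False
            for g in gens:
                for s in list(S):
                    t = add(g, s)
                    if t not in S:
                        S.add(t)
                        changed = True
        return frozenset(S)

    # every subgroup of Z_m x Z_n is generated by at most two elements
    subgroups = {subgroup([g, h]) for g in G for h in G}

    for S in subgroups:
        if len(S) == 1:
            continue                        # skip the zero subgroup
        k = len(G) // len(S)                # order of the quotient G/S
        def order(g):                       # order of the coset g + S in G/S
            x, r = g, 1
            while x not in S:
                x, r = add(x, g), r + 1
            return r
        if not any(order(g) == k for g in G):
            return False
    return True

assert quotients_all_cyclic(3, 3)           # Z_3 x Z_3 has the property
assert quotients_all_cyclic(1, 7)           # cyclic groups trivially do
assert not quotients_all_cyclic(2, 4)       # Z_2 x Z_4 fails
```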
608,909
<p>Is it possible to solve analytically the following equation? $$\left(x+\frac{1}{x}\right)^{\frac{1}{x}}=A$$ with $A\gt 1$? I tried to transform it in the following: $\frac{1}{x}\ln\left(x+\frac{1}{x}\right)=B$ with $B=\ln(A)$, but it seems to be still unsolvable. Is there some trick to solve it? Thanks.</p>
Stefan Hamcke
41,672
<p><span class="math-container">$\require{AMScd}$</span> There is a retraction of <span class="math-container">$D^n\times I\twoheadrightarrow D^n×\{0\}\cup S^{n-1}×I$</span> defined via <span class="math-container">$$r(x,t)=\begin{cases} \left(\frac{2x}{2-t},\ 0\right) &amp;\text{, if }t\le2(1-||x||) \\ \left(\frac x{||x||},2-\frac{2-t}{||x||}\right)&amp;\text{, if }t\ge2(1-||x||) \end{cases}$$</span> It is easy to prove that this map is well-defined and continuous and a retraction. Then <span class="math-container">$$d:D^n×I×I\to D^n×I\\ d(x,t,s)=sr(x,t)+(1-s)(x,t)$$</span> is a homotopy between the identity and <span class="math-container">$r$</span>, so <span class="math-container">$r$</span> is a deformation retraction. But then <span class="math-container">$(D^n×I)\cup_H X$</span> deformation retracts onto <span class="math-container">$(D^n×\{0\}\cup S^{n-1}×I)\cup_H X=(D^n×\{0\})\cup_f X$</span></p> <p>Note that a pushout square (<span class="math-container">$A,X$</span>, and <span class="math-container">$B$</span> are arbitrary spaces)</p> <p><span class="math-container">\begin{CD} A @&gt;f&gt;&gt; B\\@ViVV @VV\tilde iV\\ X @&gt;&gt;\tilde f&gt; X\cup_f B \end{CD}</span> gives rise to a pushout square <span class="math-container">\begin{CD} A\times I @&gt;f&gt;&gt; B \times I\\@ViVV @VV\tilde iV\\ X\times I @&gt;&gt;\tilde f&gt; (X\cup_f B)\times I \end{CD}</span></p> <p>because the quotient map <span class="math-container">$q:X\sqcup B\to X\cup_f B$</span> induces a quotient map <span class="math-container">$q\times 1:X\times I\sqcup B\times I\to(X\cup_f B)\times I$</span>.<br /> This means that a pair of homotopies <span class="math-container">$F_t:X→Y$</span>, <span class="math-container">$G_t:B→Y$</span>, such that <span class="math-container">$F_ti=G_t f$</span> for all <span class="math-container">$t\in I$</span>, induces a homotopy <span class="math-container">$H_t:X∪_f B→Y$</span><br /> That's the reason why a deformation retraction on <span 
class="math-container">$D^n×I$</span> induces a deformation retraction on the pushout <span class="math-container">$(D^n×I)\cup_F X$</span></p> <p>There is more general result: If <span class="math-container">$(X,A)$</span> is cofibered, then <span class="math-container">$X×I$</span> deformation retracts to <span class="math-container">$X×\{0\}\cup A×I$</span>, so if <span class="math-container">$X$</span> is glued via two homotopic maps <span class="math-container">$f$</span> and <span class="math-container">$g$</span> to a space <span class="math-container">$B$</span>, then <span class="math-container">$X\cup_f B$</span> and <span class="math-container">$X\cup_g B$</span> are homotopy equivalent.</p>
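<p>For <span class="math-container">$n=1$</span> one can spot-check numerically that the two branches of <span class="math-container">$r$</span> agree on the seam <span class="math-container">$t=2(1-\lVert x\rVert)$</span> (so <span class="math-container">$r$</span> is well defined) and that <span class="math-container">$r$</span> restricts to the identity on <span class="math-container">$D^1\times\{0\}\cup S^0\times I$</span> (so it is a retraction). A small sketch with ad hoc sample points:</p>

```python
def r(x, t):
    """The retraction for n = 1: x in [-1, 1], t in [0, 1]."""
    ax = abs(x)
    if t <= 2 * (1 - ax):
        return (2 * x / (2 - t), 0.0)
    return (x / ax, 2.0 - (2.0 - t) / ax)

def close(p, q, eps=1e-12):
    return abs(p[0] - q[0]) < eps and abs(p[1] - q[1]) < eps

# The two branch formulas agree on the seam t = 2(1 - |x|).
for x in [0.5, 0.75, -0.8, 0.9]:
    t = 2 * (1 - abs(x))
    branch1 = (2 * x / (2 - t), 0.0)
    branch2 = (x / abs(x), 2.0 - (2.0 - t) / abs(x))
    assert close(branch1, branch2)

# r is the identity on D^1 x {0} and on S^0 x I.
for x in [-1.0, -0.3, 0.0, 0.6, 1.0]:
    assert close(r(x, 0.0), (x, 0.0))
for x in (-1.0, 1.0):
    for t in [0.0, 0.5, 1.0]:
        assert close(r(x, t), (x, t))
```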
4,187,498
<p>I am studying the proof of the Prime Number Theorem and I want to show that the function <span class="math-container">$\frac{\zeta'(s)}{\zeta(s)}$</span> has a simple pole at <span class="math-container">$s=1$</span>.</p> <p>I think that if I can find the Laurent series expansion of <span class="math-container">$\zeta(s)$</span>, I could then find the same for <span class="math-container">$\frac{\zeta'(s)}{\zeta(s)}$</span> and then conclude that it has a simple pole at <span class="math-container">$s=1$</span>.(Correct me if I am wrong.)</p> <p>But, how do I find the Laurent expansion ? I know that <span class="math-container">$\zeta(s)$</span> has a simple pole at <span class="math-container">$s=1$</span> but how can I use this to find the complete expansion ? Also, do I even need to find the complete expansion to show that <span class="math-container">$\frac{\zeta'(s)}{\zeta(s)}$</span> has a simple pole at <span class="math-container">$s=1$</span> ? Is there any other way ?</p> <p>Please help. Any help/hint shall be highly appreciated.</p>
Joshua Stucky
749,752
<p>If all one cares about is knowing that there is a simple pole at <span class="math-container">$s=1$</span> (and perhaps what its residue is), this can actually be done quite quickly using some standard complex-analytic results. For a reference, see <a href="https://terrytao.wordpress.com/2014/12/05/245a-supplement-2-a-little-bit-of-complex-and-fourier-analysis/" rel="nofollow noreferrer">these notes by Terry Tao</a>, and in particular Exercise 14. By partial summation (or Euler-Maclaurin; they're essentially the same thing), we have</p> <p><span class="math-container">$$ \zeta(s) = \frac{1}{s-1} + s\int_{1}^{\infty} \frac{\left\{ x\right\}}{x^{s+1}} dx, $$</span> where <span class="math-container">$\left\{ x\right\}$</span> denotes the fractional part of <span class="math-container">$x$</span>. The integral converges absolutely for <span class="math-container">$\Re s &gt; 0$</span>, and so has no poles in this region. Thus <span class="math-container">$\zeta(s)$</span> has a simple pole at <span class="math-container">$s=1$</span> with residue <span class="math-container">$1$</span>.</p> <p>For a meromorphic function <span class="math-container">$f$</span>, the only poles of <span class="math-container">$f'/f$</span> are simple poles occurring at the poles and zeros of <span class="math-container">$f$</span>. Thus <span class="math-container">$\zeta'(s)/\zeta(s)$</span> has a simple pole at <span class="math-container">$s=1$</span>. In fact, at a pole of <span class="math-container">$f$</span> the residue of <span class="math-container">$f'/f$</span> is the negative of the order of that pole, so the residue at <span class="math-container">$s=1$</span> of <span class="math-container">$\zeta'(s)/\zeta(s)$</span> is <span class="math-container">$-1$</span>.</p>
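<p>Both residues can be sanity-checked numerically, approximating <span class="math-container">$\zeta$</span> by a truncated Dirichlet series plus the leading tail term <span class="math-container">$N^{1-s}/(s-1)$</span> (the truncation level <code>N</code> and the tolerances below are ad hoc choices):</p>

```python
import math

def zeta(s, N=20000):
    """Truncated Dirichlet series plus the leading tail term N^(1-s)/(s-1)."""
    return sum(n ** -s for n in range(1, N + 1)) + N ** (1 - s) / (s - 1)

# (s-1) * zeta(s) -> 1 as s -> 1+ : simple pole with residue 1.
# (The deviation is about gamma*(s-1), hence the (s-1) tolerance.)
for s in [1.1, 1.01, 1.001]:
    assert abs((s - 1) * zeta(s) - 1) < s - 1

# (s-1) * zeta'(s)/zeta(s) -> -1 : the logarithmic derivative has residue -1.
s, h = 1.001, 1e-4
log_deriv = (math.log(zeta(s + h)) - math.log(zeta(s - h))) / (2 * h)
assert abs((s - 1) * log_deriv + 1) < 0.05
```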
523,932
<p>I've got a system of equations:<br></p> <p>$\begin{cases} x=2y+1\\xy=10\end{cases}$</p> <p>I have gotten as far as $x=\dfrac {10}y$. <br> How can I find $x$ and $y$?</p>
Shravan40
40,230
<p><strong>Hint:</strong> </p> <p>This kind of system can be solved by substituting the value of $ x $ or $ y $ into the other equation. The resulting equation is quadratic; solve it.</p> <p>$ x = 2y +1 \dots (1)$</p> <p>$xy = 10 $ $ \implies x = \frac{10}{y}$</p> <p>Put this value of $x$ into equation (1):</p> <p>$ \frac{10}{y} = 2y+1 $</p> <p>$ 10 = 2y^2 + y $</p> <p>$ 2y^2 + y -10 = 0 \dots(2)$</p> <p>Solve this quadratic equation. For each value of $y$ you will get an $x$.</p> <p>You can do the same by substituting $ y = \frac{10}{x}$.</p> <p>Hope you can proceed from here. </p>
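<p>Completing the hint: equation (2) has discriminant $1+80=81=9^2$, giving $y=2$ or $y=-\frac52$, hence $(x,y)=(5,2)$ or $(-4,-\frac52)$. An exact check:</p>

```python
from fractions import Fraction as F
import math

# 2y^2 + y - 10 = 0 has discriminant 1 - 4*2*(-10) = 81 = 9^2.
disc = 1 - 4 * 2 * (-10)
assert disc == 81 and math.isqrt(disc) ** 2 == disc

roots = [F(-1 + 9, 2 * 2), F(-1 - 9, 2 * 2)]   # y = 2 and y = -5/2
for y in roots:
    x = 2 * y + 1                              # back into x = 2y + 1
    assert x * y == 10                         # both pairs satisfy xy = 10

assert [(2 * y + 1, y) for y in roots] == [(5, 2), (-4, F(-5, 2))]
```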
1,242,317
<p>If I have a unitary square matrix $U$ ie. $U^{\dagger}U=I$ ( $^\dagger$ stands for complex conjugate and transpose ), then for what cases is $U^{T}$ also unitary. One simple case I can think of is $U=U^{T}$ ( all entries of $U$ are real, where $^T$ stands for transpose ). Are there any other cases ?</p>
Ben Grossmann
81,360
<p>It's going to be true in <em>all</em> cases.</p> <p>In particular, if $U$ is unitary, then $$ (U^T)^\dagger U^T = [UU^\dagger]^T = I^T = I $$</p>
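<p>A concrete check with a unitary matrix that is neither real nor symmetric (plain-Python $2\times2$ complex arithmetic; the helpers are ad hoc):</p>

```python
def mul(A, B):          # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):          # conjugate transpose
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def is_identity(A, eps=1e-12):
    return all(abs(A[i][j] - (1 if i == j else 0)) < eps
               for i in range(2) for j in range(2))

s = 2 ** -0.5
U = [[s, s * 1j], [s, -s * 1j]]   # unitary, but neither real nor symmetric

assert is_identity(mul(dagger(U), U))    # U^dagger U = I
Ut = transpose(U)
assert is_identity(mul(dagger(Ut), Ut))  # (U^T)^dagger U^T = I as well
```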
1,705,453
<p>I have a list of prime numbers which can be expressed in the form $3x+1$. Each such prime of the form $3x+1$ can be written as $a^2+b^2-ab$.</p> <p>Now I have a list of primes of the form $3x+1$ (i.e., $7,19,\ldots$), but I am unable to find the $a$ and $b$ which satisfy the above expression.</p> <p>Thanks for your help in advance.</p>
Lutz Lehmann
115,115
<p>For $f(x)=\sin(x)$ you get the nice formula $$ f^{(k)}(x)=\sin\Bigl(x+k\frac\pi2\Bigr) $$ Thus $f^{(4)}(x)=\sin(x)$.</p> <p>In the error formula, the argument of the 4th derivative is some not known point inside the interval $(x-h,x+h)$. As a first estimate one can take the value at $x$ if $h$ is small.</p> <p>Apart from the fact that it has no place in the formula, <code>(x-2/5)</code> for $x=0.4$ evaluates to zero. The correct error estimate is, following the formula</p> <pre><code> e = sin(x)*h^2/12 </code></pre> <p>(Aside, if you do not use vectorized operations, you do not need the point modificator.)</p> <hr> <p>The evaluation of $f(x\pm h)$ deviates from its exact value by the accumulated errors of the floating point operations. First there is an error of magnitude $|x|\mu$ in the addition $x\pm h$ for $|h|\ll|x|$ which translates into an error contribution of $|f'(x)|·|x|\mu$ in the value. Then the evaluation of $f$ itself will contribute $|f(x)|·m_f\mu$ for some $m_f$ indicating the evaluation complexity of $f$ and assuming that there are no great cancellations in this evaluation algorithm of $f$ ($m_{\sin}=1$ as it is one FPU operation). The complete error bound estimate combines the floating point errors and the discretization errors to something like $$ \frac{2(m_f|f(x)|+|f'(x)x|)\mu}{h^2}+\frac{|f^{(4)}(x)|}{12}h^2 $$ The minimum of that bound is reached when both terms are equal which happens at about $h\sim \sqrt[4]\mu\simeq 10^{-4}$. Thus the best derivative value is the 4th with about 7-9 correct digits, which visual inspection confirms.</p> <p>On the other hand, as example, the 8th value has an error of $\sim 10^{-16}·(10^8)^2=1$ from the first term, the floating point errors, thus the error is larger than the expected value, giving a totally useless result.</p> <hr> <p>The exact values obey $f(x+h)=f(x)+f'(x)h+0.5f''(x)h^2+…$. 
For $h^2&lt;\mu$, that is, for $|h|&lt;10^{-8}$ for <code>double</code>, the second degree term falls below the threshold of rounding errors and thus plays no role in the difference formula. And since the second difference of a linear function is zero, this explains the zero results in the list of numerical results.</p>
3,403,272
<p>I'm currently taking abstract algebra and I'm very lost.</p> <blockquote> <p>Let <span class="math-container">$G = (\Bbb Z/18\Bbb Z, +)$</span> be a cyclic group of order <span class="math-container">$18$</span>.</p> <p>(1) Find a subgroup <span class="math-container">$H$</span> of <span class="math-container">$G$</span> with <span class="math-container">$|H|= 3.$</span></p> <p>(2) What are the elements of <span class="math-container">$G/H$</span>?</p> <p>(3) Find a familiar group that is isomorphic to <span class="math-container">$G/H$</span>.</p> </blockquote> <p>For (1), I think I understand that since it is a cyclic group we need a generator, so I choose <span class="math-container">$\langle [6]\rangle$</span>. <span class="math-container">$[6]+[6]=[12]$</span> and <span class="math-container">$[6]+[6]+[6]=[18]=[0]$</span>, so <span class="math-container">$H=\langle [6]\rangle=\{[0],[6],[12]\}$</span>. Here we see <span class="math-container">$18$</span> divided by <span class="math-container">$6$</span> is <span class="math-container">$3$</span>, so <span class="math-container">$|H| = 3.$</span></p> <p>For the next part: are the elements of <span class="math-container">$G/H$</span> just the subgroup I wrote down before?</p> <p>The last question is confusing me the most. In order to be isomorphic to one another, the group that I select must have three elements as well, correct? The problem is there is no other subgroup of <span class="math-container">$G$</span> that has order <span class="math-container">$3$</span>.</p>
Shaun
104,041
<p>Here <span class="math-container">$\lvert G/H\rvert=\lvert G\rvert/\lvert H\rvert=18/3=6$</span>.</p> <p>Also, <span class="math-container">$G\cong\langle a\mid a^{18}\rangle,$</span> so <span class="math-container">$$\begin{align} G/H&amp;\cong\langle b\mid b^{18/3}\rangle \\ &amp;\cong \langle b\mid b^6\rangle \\ &amp;\cong \Bbb Z_6. \end{align}$$</span></p>
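<p>A direct check of this by listing cosets (a small sketch):</p>

```python
H = frozenset({0, 6, 12})                      # the subgroup generated by [6]

def coset(g):
    return frozenset((g + h) % 18 for h in H)

cosets = {coset(g) for g in range(18)}
assert len(cosets) == 6                        # |G/H| = 18/3 = 6

# The coset of [1] has order 6 in G/H, so G/H is cyclic of order 6, i.e. Z_6.
assert min(r for r in range(1, 19) if r % 18 in H) == 6
assert {coset(k) for k in range(6)} == cosets  # its multiples hit every coset
```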
2,722,609
<p>In a past thread it was mentioned that $x \in A$ is a predicate. I know $\exists x$ and $\forall x$ are quantifiers but are they also predicates themselves? What about when combined with "in" itself (or whatever this operator is called)? e.g. $\exists x \in A$ or $\forall x \in A$</p>
goblin GONE
42,339
<p>Yes and no.</p> <p>Let $X$ denote a set. A predicate on $X$ is basically a subset of $X$. A quantifier on $X$ is basically a collection of subsets of $X$. In particular, we can think of the existential quantifier as the collection of all non-empty subsets of $X$, and we can think of the universal quantifier as the collection $\{X\}$ with exactly one element, namely the predicate that is true everywhere. Under this interpretation, the meaning of $\forall x \in X: P(x)$ is roughly $P \in \forall_X$, which is equivalent to $P \in \{X\}$, which is equivalent to $P = X$, which is just saying that $P$ is true everywhere. Similarly, the meaning of $\exists x \in X:P(x)$ is roughly $P \in \exists_X$, where $\exists_X$ is the set of all predicates on $X$ that return True for at least one input. Of course, there's some extra subtleties when extra variables are around. For example, $\forall x \in X: P(x,y)$ means approximately $(x \mapsto P(x,y)) \in \forall_X$.</p> <p>In light of all this, I tend to think that predicates and quantifiers are just different, and that in particular, neither generalizes the other. On the other hand, note that a quantifier on a set $X$ is just a predicate on the powerset of $X$. In some sense, every quantifier is a predicate, but just not on the same set.</p>
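<p>This picture is easy to make concrete on a finite set: identify a predicate with its truth set and a quantifier with a collection of subsets. A small sketch (the names are illustrative):</p>

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2, 3})

def subsets(S):
    s = list(S)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

forall_X = {X}                                  # only the everywhere-true predicate
exists_X = {P for P in subsets(X) if P}         # every nonempty predicate

def truth_set(pred):                            # a predicate "is" its truth set
    return frozenset(x for x in X if pred(x))

P = lambda x: x % 2 == 0                        # true on {0, 2}
Q = lambda x: x >= 0                            # true on all of X

assert (truth_set(P) in exists_X) == any(P(x) for x in X)
assert (truth_set(P) in forall_X) == all(P(x) for x in X)
assert (truth_set(Q) in forall_X) == all(Q(x) for x in X)
```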
1,618,699
<p>Let $A$ be a non-empty subset of $\mathbb{R}$ that is bounded above and put $s=\sup A$.<br> Show that if $s\notin A$ then the set $A\cap (s-ε,s)$ is infinite for any $ε&gt;0$. </p> <p>This has to be solved by contradiction, by supposing $A\cap (s-ε,s)$ is a finite set. But I am not sure how to proceed after this.</p>
Christian Blatter
1,303
<p>Formulas like $$1+2+3+4+\ldots=-{1\over12}\tag{1}$$ are peddled even in the $21^{\rm st}$ century to impress simple minded people, and have per se no mathematical content whatsoever. It is true that one could replace the sum $\sum_{k=1}^\infty k$ on the left hand side of $(1)$ by a sum of the form $$\sum_{k=1}^\infty k\&gt;\phi_k(z)\tag{2}$$ with certain suitably chosen functions $\phi_k(z)$ such that $(2)$ converges for some $z$, and then one could perform some limiting process $z\to\zeta$ whereby $\phi_k(z)\to 1$ for all $k\geq1$, and the sum in question converges to $-{1\over12}$. But none of this is exhibited in the crank formula $(1)$, and trumpeting about "analytical continuation" makes it no better. If all of this would not have the smell of RH nobody would seriously consider it.</p>
4,146,629
<p>I'm reading D. E. Knuth's book &quot;Surreal Numbers&quot;. And I'm completely stuck in chap. 6 (The Third Day) because there is a proof I don't understand. Alice says</p> <blockquote> <p>Suppose at the end of <span class="math-container">$n$</span> days, the numbers are <span class="math-container">$$x_1&lt;x_2&lt;\dots&lt;x_m$$</span></p> </blockquote> <p>She demonstrates that <span class="math-container">$x_i \equiv (\{x_{i-1}\},\{x_{i+1}\})$</span> and she begins the proof by saying</p> <blockquote> <p>Look, each element of <span class="math-container">$X_{iL}$</span> is <span class="math-container">$\le x_{i-1}$</span>, and each element of <span class="math-container">$X_{iR}$</span> is <span class="math-container">$\ge x_{i+1}$</span>.</p> </blockquote> <p>That first step of the proof is the one I don't understand. Can someone show me how to demonstrate that statement?</p>
mjqxxxx
5,546
<p>Conway's second rule says that</p> <blockquote> <p>One number is less than or equal to another number if and only if no member of the first number's left set is greater than or equal to the second number, and [other stuff].</p> </blockquote> <p>So <span class="math-container">$x_i=x_i$</span> implies that no member of <span class="math-container">$X_{iL}$</span> is <span class="math-container">$\ge x_i$</span>; hence every member of <span class="math-container">$X_{iL}$</span> is strictly less than <span class="math-container">$x_i$</span>. With the additional assumption that the only numbers created so far are <span class="math-container">$x_1 &lt; x_2 &lt; \ldots &lt;x_m$</span>, this means that every member of <span class="math-container">$X_{iL}$</span> is <span class="math-container">$\le x_{i-1}$</span>. The proof that every member of <span class="math-container">$X_{iR}$</span> is <span class="math-container">$\ge x_{i+1}$</span> is exactly analogous.</p>
50,113
<p>What are some good books on field and Galois theory?</p>
Chris Godsil
1,266
<p>David Cox "Galois Theory" Wiley 2004 is my current favorite. Lots of interesting material and very nicely written.</p>
50,113
<p>What are some good books on field and Galois theory?</p>
Ian Agol
1,345
<p>Chapter 1.5 The Absolute Galois Group of a Finite Field Might be useful from <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=2445111" rel="nofollow noreferrer">Field Arithmetic</a> by Fried and Jarden.</p>
2,946,379
<p>The question posed is the following: Let <span class="math-container">$X$</span> be a Banach Space and let <span class="math-container">$T:X\to X$</span> be a Lipschitz-Continuous map. Show that, for <span class="math-container">$\mu$</span> sufficiently large, the equation <span class="math-container">\begin{equation} Tx+\mu x=y \end{equation}</span> has, for any <span class="math-container">$y\in X$</span>, a unique solution.</p> <p>Note that <span class="math-container">$x,y$</span> are vectors, since our book (<em>Mathematical Analysis</em> by Mariano Giaquinta and Giuseppe Modica) generally ignores vector indicators, since it's all multivariable.</p> <p>My proof is based on the Banach Fixed Point Theorem: Since <span class="math-container">$T$</span> is Lipschitz-continuous, we have <span class="math-container">$\|Tx\|\leq k\|x\|$</span> for <span class="math-container">$0&lt;k\leq1$</span>. So <span class="math-container">$\|Tx-\mu x\|\leq k\|x\| - \mu \|x\|$</span>.</p> <p>Then we can say</p> <p><span class="math-container">\begin{equation} \|Tx-\mu x\|\leq (k-\mu)\|x\| \end{equation}</span></p> <p>So, if <span class="math-container">$\mu$</span> is large enough that <span class="math-container">$|k-\mu|&lt;1$</span>, we have a contractive map, and by the Banach Fixed Point theorem, there exists a unique fixed point <span class="math-container">$x_0$</span> for <span class="math-container">$(T-\mu)x$</span>. Then, <span class="math-container">$Tx-\mu x=y$</span> has a unique solution, namely, <span class="math-container">$x_0$</span>.</p> <p>My question is whether this is a valid proof. I'm mostly foggy on if I applied the theorem correctly, and if I am allowed to say <span class="math-container">$Tx-\mu x=(T-\mu)x$</span>, since <span class="math-container">$T$</span> is a map and <span class="math-container">$\mu$</span> is a constant (I think).</p>
Bernard
202,857
<p>If you make the substitution <span class="math-container">$\;t=\mathrm e^x\iff x=\ln t$</span>, so that <span class="math-container">$\;\mathrm dx=\dfrac{\mathrm d t}t$</span>, we obtain <span class="math-container">$$\int_{1}^{\infty}\frac{\mathrm e^{x}+\mathrm e^{3x}}{\mathrm e^{x}-\mathrm e^{5x}}\,\mathrm dx=\int_{\mathrm e}^{\infty}\frac{t+t^3}{t-t^5}\dfrac{\mathrm d t}t = \int_{\mathrm e}^{\infty}\frac{1+t^2}{t-t^5}\,\mathrm d t.$$</span> Now you can prove the convergence using the comparison test: near <span class="math-container">$+\infty$</span> the integrand has a simple equivalent: <span class="math-container">$$\frac{1+t^2}{t-t^5}\sim_{+\infty}\frac{t^2}{-t^5}=-\frac1{t^3},$$</span> which has a convergent integral on the same interval.</p>
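Both steps can be corroborated numerically (my own sketch, not part of the original answer; `simpson` is a plain composite Simpson rule and the grid sizes are arbitrary choices):

```python
import math

def simpson(f, a, b, n=100_000):          # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*k*h) for k in range(1, n // 2))
    return s * h / 3

fx = lambda x: (math.exp(x) + math.exp(3*x)) / (math.exp(x) - math.exp(5*x))
ft = lambda t: (1 + t*t) / (t - t**5)

# The substitution t = e^x maps [1, 5] onto [e, e^5] and preserves the value:
I_x = simpson(fx, 1.0, 5.0)
I_t = simpson(ft, math.e, math.exp(5.0))
print(abs(I_x - I_t) < 1e-8)              # True

# Convergence: the tail beyond x = 5 is already of order e^{-10}
tail = simpson(fx, 5.0, 15.0)
print(abs(tail) < 1e-4)                   # True
```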
4,188,020
<p>I know that there exists a connection on a principal bundle and via parallel transport it is possible to define a covariant derivative on the associated bundle.</p> <p>However, can we also define a covariant derivative on the principal bundle. I.e. something that can differentiate a section along a vector field? Or do we need a linear structure like the one in a vector bundle to 'take derivatives'?</p>
Mozibur Ullah
26,254
<blockquote> <p>I know that there exists a connection on a principal bundle and via parallel transport it is possible to define a covariant derivative on the associated bundle.</p> </blockquote> <p>Ehresmann connections are the geometric version of connections. They are generally available on all fibre bundles and not just principal bundles. Let <span class="math-container">$p:E \rightarrow M$</span> be a fibre bundle over the base <span class="math-container">$M$</span>. Then an Ehresmann connection is a splitting of <span class="math-container">$TE$</span> into 'horizontal' and 'vertical' bundles. The vertical bundle is given as <span class="math-container">$VE:=\ker(Tp)$</span>. Thus the splitting is reduced to a choice of horizontal bundle <span class="math-container">$HE$</span> which is complementary to the vertical bundle, ie <span class="math-container">$TE=HE\oplus VE$</span>. This can be encoded into a projection <span class="math-container">$C:TE \rightarrow TE$</span> whose image is exactly <span class="math-container">$VE$</span> and whose kernel is <span class="math-container">$HE$</span>. It also means that <span class="math-container">$C \in \Omega^1(E,TE)$</span>. This is what is called a tangent bundle valued 1-form.</p> <p>Now it is often stated that a connection is not covariant, for example see the many general relativity texts. And this, whilst true, is not the whole picture. The connection is not covariant when it is considered to live on the base manifold. It is, however, covariant, as the construction above manifestly shows, when it is where it should be: on the total space. When we specialise the above construction to the tangent bundle this means a connection lives on the second tangent bundle.</p> <p>Secondly, parallel transport is more or less the same as an Ehresmann connection. 
We don't use parallel transport to 'transport' our connection from the principal bundle to an associated bundle, but through <em>inducing</em> it on the associated bundle. This relies on the action of the gauge structure group on a standard fibre; in this case, this will be a vector space and so a representation of the gauge structure group.</p> <p>I'm using the term gauge structure group for what is called the structure group in the mathematical literature and what is known as the gauge group in the physics literature. It seems useful to have such a term to show the parallels between the two in two different languages but also to disambiguate since the gauge group is also, confusingly, used for a different but related and much larger group.</p> <blockquote> <p>However, can we also define a covariant derivative on the principal bundle. I.e. something that can differentiate a section along a vector field? Or do we need a linear structure like the one in a vector bundle to 'take derivatives'?</p> </blockquote> <p>Yes, you can. It is called the covariant exterior derivative. There is one on the principal bundle and also on any induced associated bundle and they are connected through a canonical isomorphism. Fix a connection <span class="math-container">$C$</span> as above on a principal bundle <span class="math-container">$P$</span>, write <span class="math-container">$\chi := \operatorname{id} - C$</span> for the corresponding horizontal projection, and define <span class="math-container">$C^*: \Omega(P,V) \rightarrow \Omega(P,V)$</span> by <span class="math-container">$(C^*\alpha)_p(v_1,\ldots,v_k) = \alpha_p(\chi v_1,\ldots,\chi v_k)$</span>. Here <span class="math-container">$C^*$</span> is a projection onto the space of horizontal forms, a space which is actually independent of the connection (it is the counterpart of the vertical bundle defined above, but because it is in the cotangent bundle, it is now horizontal). 
We now define the covariant exterior derivative:</p> <blockquote> <p><span class="math-container">$\nabla^C:\Omega^k(P,V) \rightarrow \Omega^{k+1}(P,V)$</span></p> </blockquote> <blockquote> <p>by <span class="math-container">$\nabla^C:=C^* \circ d$</span>.</p> </blockquote> <p>Now fix a representation of <span class="math-container">$G$</span> on a vector space <span class="math-container">$V$</span>. Then there is a canonical isomorphism:</p> <blockquote> <p><span class="math-container">$q^{\sharp}:\Omega(M,P[V]) \rightarrow \Omega_h(P,V)^G$</span></p> </blockquote> <p>We also have a covariant exterior derivative <span class="math-container">$\nabla^V_C$</span> induced on the associated bundle <span class="math-container">$P[V]$</span> and they are connected as:</p> <blockquote> <p><span class="math-container">$q^{\sharp} \circ \nabla^V_C = \nabla^C \circ q^{\sharp}$</span></p> </blockquote> <p>The details are in section 11 of Kolar, Michor &amp; Slovak's <em>Natural Operations in Differential Geometry</em>.</p>
359,742
<p>I have a mathematical problem that leads me to a particular necessity. I need to compute the convolution of a function with itself a certain number of times. </p> <p>So consider a generic function $f : \mathbb{R} \to \mathbb{R}$ and consider these hypotheses:</p> <ul> <li>$f$ is continuous on $\mathbb{R}$.</li> <li>$f$ is bounded, so: $\exists A \in \mathbb{R} : |f(x)| \leq A, \forall x \in \mathbb{R}$.</li> <li>$f$ is integrable, so its area is a real number: $\exists \int_a^bf(x)\mathrm{d}x &lt; \infty, \forall a,b \in \mathbb{R}$. Which implies that such a function tends to zero at infinity.</li> </ul> <p><strong>Probability density functions:</strong> Such functions fit the constraints given before. So it might be easier for you to think of $f$ as the pdf of some continuous r.v.</p> <p>Consider the convolution operation: $a(x) \ast b(x) = c(x)$. I always name the variable $x$.</p> <p>Consider now the following function:</p> <p>$$ F^{(n)}(x) = f(x) \ast f(x) \ast \dots \ast f(x), \text{for n times} $$</p> <p>I want to evaluate $F^{(\infty)}(x)$. And I would like to know whether there is a generic final result given a function like $f$.</p> <h3>My trials</h3> <p>I tried a little in Mathematica using the Gaussian distribution. What happens is that, as $n$ increases, the bell stretches and its peak gets lower and lower until the function almost lies all over the $x$ axis. It seems like $F^{(\infty)}(x)$ tends to the function $y=0$...</p> <p><img src="https://i.stack.imgur.com/8FDFH.png" alt="Trials in Mathematica"></p> <p>As $n$ increases, the curves get lower and lower. </p>
SSF
131,617
<p>Well, for your trials with Gaussians you're only rescaling them. My guess would be that it will tend to an infinitely stretched Gaussian regardless of the initial function. At least it is equivalent to having a sum of $n \to \infty$ random variables with the same probability density function $f$, which by the central limit theorem should tend to a normal distribution stretched by a factor of $\sqrt{n}$. </p>
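The flattening is easy to see with a crude discrete convolution (my own sketch, not from the answer; the grid step and the uniform starting density are arbitrary choices):

```python
h = 0.05
f = [1.0] * int(1 / h)                 # uniform density on [0, 1), sampled on a grid

def conv(p, q):
    """Discrete approximation of (p * q)(x) = integral of p(t) q(x - t) dt."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b * h
    return out

g, peaks = f[:], []
for _ in range(6):
    peaks.append(max(g))               # sup-norm of the n-fold convolution
    g = conv(g, f)

print([round(p, 3) for p in peaks])    # non-increasing, heading towards 0
print(abs(sum(g) * h - 1.0) < 1e-6)    # total mass stays 1 throughout
```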
37,013
<p><strong>Question:</strong> What are some interesting or useful applications of the Hahn-Banach theorem(s)?</p> <p><strong>Motivation:</strong> Most of the time, I dislike most of Analysis. During a final examination, a question sparked my interest in the Hahn-Banach theorem(s). One of my favorite things to do is to write a math blog (mlog?) post about various topics so that I can better understand them, but I know very little about Hahn-Banach and a quick google search didn't seem to point to anything neat. I was interested in seeing what you all liked (if anything!) about the Hahn-Banach Theorems. </p> <p>Also, I can't seem to make this a community wiki, but I think it ought to be one. If someone could either fix this, I would appreciate it! (If not, please delete this!)</p>
ncmathsadist
4,154
<p>One I know of is the hyperplane separation theorem for convex sets. Another is the existence of Banach generalized limits.</p>
37,013
<p><strong>Question:</strong> What are some interesting or useful applications of the Hahn-Banach theorem(s)?</p> <p><strong>Motivation:</strong> Most of the time, I dislike most of Analysis. During a final examination, a question sparked my interest in the Hahn-Banach theorem(s). One of my favorite things to do is to write a math blog (mlog?) post about various topics so that I can better understand them, but I know very little about Hahn-Banach and a quick google search didn't seem to point to anything neat. I was interested in seeing what you all liked (if anything!) about the Hahn-Banach Theorems. </p> <p>Also, I can't seem to make this a community wiki, but I think it ought to be one. If someone could either fix this, I would appreciate it! (If not, please delete this!)</p>
user248793
248,793
<p>What I know about the Hahn-Banach theorem is that it guarantees the existence of enough functionals in the dual space of a given space, and that these functionals separate the points of the space. This sufficiency of functionals guarantees enough maps in the dual space to work with. </p>
2,080,716
<p>I have the quadratic form $$Q(x)=x_1^2+2x_1x_4+x_2^2 +2x_2x_3+2x_3^2+2x_3x_4+2x_4^2$$</p> <p>I want to diagonalize the matrix of Q. I know I need to find the matrix of the associated bilinear form but I am unsure on how to do this.</p>
deshu
403,660
<p>If you can use programmatic approach to answer your questions, which I personally find way easier than hand-written-math, you can check out that simple <strong><a href="https://jsfiddle.net/deshu/kzx4rxa3/6/" rel="nofollow noreferrer">JavaScript fiddle</a></strong> I wrote for you.</p> <p><strong>In the right bottom corner you can find answers for you questions.</strong></p> <p>You can easily change code to, for example, find all numbers in range 100-1000 if you need to.<br> In that case, you would have to change line:</p> <pre><code>for (var i = 1; i &lt;= 100; i++) { </code></pre> <p>to:</p> <pre><code>for (var i = 100; i &lt;= 1000; i++) { </code></pre> <p><br> Every time you want your output to update, you have to click <strong>Run</strong> button at the top.</p> <p>I'd gladly answer to any questions about code, if you have any.</p> <p><em>Remember that the bottom right window won't wrap text, so the horizontal scrolling is possible.</em></p>
4,359,372
<p>My question is: Does there exist <span class="math-container">$x_n$</span> (<span class="math-container">$n\geq 0$</span>) such that <span class="math-container">$x_n$</span> is a bounded and divergent sequence with <span class="math-container">$$x_{n+m}\leq (x_n+x_m)/2$$</span> for all <span class="math-container">$n,m\geq 0$</span>?</p> <p>I'm guessing that such an example does exist but can't seem to find an example.</p> <p>Since we require <span class="math-container">$x_n$</span> to be bounded, we can't have something with <span class="math-container">$\lim_{n\to\infty} x_n\to\pm \infty$</span>, so it has to look something like <span class="math-container">$x_n=(-1)^n$</span>, although this doesn't work since <span class="math-container">$$(-1)^{1+1}=1\not\leq ((-1)^1+(-1)^1)/2=-1$$</span></p> <p>Assuming that such a sequence can't be found, I'm not sure how I'd prove it either.</p>
bjcolby15
122,251
<p>Using base two (binary) as the weight, i.e. <span class="math-container">$1 = 2^0, 2 = 2^1, 4 = 2^2, 8 = 2^3, 16 = 2^4, 32 = 2^5$</span>, we can measure any weights we want. With <span class="math-container">$5$</span> weights, you can only weigh up to <span class="math-container">$31$</span>g, but with <span class="math-container">$6$</span> weights, you can measure up to <span class="math-container">$63$</span>g. Hence the minimum number of weights you need to measure up to <span class="math-container">$35$</span>g is <span class="math-container">$6.$</span></p>
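A brute-force confirmation (my own script, not part of the original answer; this assumes all weights go on one pan):

```python
from itertools import combinations

def reachable(weights):
    """All masses that some non-empty subset of the weights can balance."""
    return {sum(c) for r in range(1, len(weights) + 1)
                   for c in combinations(weights, r)}

six = [1, 2, 4, 8, 16, 32]
print(reachable(six) == set(range(1, 64)))   # True: every mass 1..63 g
print(max(reachable(six[:5])))               # 31: five weights cannot reach 35 g
```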
1,545,929
<p>Assume $0&lt;\alpha\leq 1$ and $x&gt;0$. Does the following inequality hold? $$(1-e^{-x})^{\alpha}\leq (1-\alpha e^{-x})$$ I know that the reverse inequality holds if $\alpha\ge 1$.</p>
Justpassingby
293,332
<p>The concept that you are thinking of is <em>sequence continuity</em> (which is equivalent to continuity in metric spaces such as $\mathbb{R}^n$, but that requires proof).</p> <p>If you check the condition "for every given open set containing the image there exists an open set (containing the original point) that is mapped into the given open set" then you have continuity by definition. An open set includes its points from all directions at once.</p> <p>The typical procedure (in metric spaces) is to use open balls centered at the given original point or its image: for every positive $\varepsilon$ there exists a positive $\delta$ such that the open ball $B(p,\delta)$ is mapped inside $B(f(p),\varepsilon)$.</p>
308,520
<p>The DE is $y' = -y + ty^{\frac{1}{2}}$. </p> <p>$2 \le t \le 3$</p> <p>$y(2) = 2$</p> <p>I tried to see if it was in the <a href="http://www.sosmath.com/diffeq/first/lineareq/lineareq.html" rel="nofollow">linear form</a>. I got:</p> <p>$$\frac{dy}{dt} + y = ty^{\frac{1}{2}}$$</p> <p>The RHS was not a function of <code>t</code>. I also tried separation of variables, but I couldn't isolate the <code>y</code> from the term $ty^{\frac{1}{2}}$. Any hints?</p>
Mike
17,976
<p>If that $y^{\frac12}$ weren't there, you might solve the problem by multiplying by an integrating factor of $e^t$ to yield $(e^ty)'$ on the left side. Try making the substitution</p> <p>$$z=e^ty$$ $$y^{\frac12}=z^{\frac12}e^{-\frac t2}$$ $$e^ty'+e^ty=(e^ty)'=te^ty^{\frac12}$$ $$z'=te^t(z^{\frac12}e^{-\frac t2})=te^{\frac t2}z^{\frac12}$$</p> <p>This equation is now separable.</p>
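Carrying the substitution through (my own continuation of the hint, under the assumption $y>0$): separating $z' = te^{t/2}z^{\frac12}$ gives $z^{\frac12}=(t-2)e^{t/2}+C$, i.e. $y=\big((t-2)+\sqrt{2}\,e^{(2-t)/2}\big)^2$ once $y(2)=2$ is imposed. A quick RK4 run on the original ODE agrees:

```python
import math

def rhs(t, y):                       # the original ODE y' = -y + t*sqrt(y)
    return -y + t * math.sqrt(y)

def rk4(t, y, t_end, n=1000):        # classic fourth-order Runge-Kutta
    h = (t_end - t) / n
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h/2, y + h*k1/2)
        k3 = rhs(t + h/2, y + h*k2/2)
        k4 = rhs(t + h, y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return y

exact = lambda t: ((t - 2) + math.sqrt(2) * math.exp((2 - t) / 2)) ** 2

print(abs(exact(2.0) - 2.0) < 1e-12)                  # initial condition holds
print(abs(rk4(2.0, 2.0, 3.0) - exact(3.0)) < 1e-8)    # True on [2, 3]
```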
490,064
<p>Solve the Cauchy problem, $\forall t \in \mathbb{R}$, $$ \begin{cases} u''(t) + u(t) = |t|\\ u(0)=1, \quad u'(0) = -1 \end{cases} $$</p> <p>The solution to the homogeneous equation is $A\cos(t) + B \sin(t)$. Empirically, $|t|$ is "more or less" a particular solution, however it is not differentiable in $0$... What is the fastest way to find a particular solution two times differentiable?</p>
Felix Marin
85,343
<p>$\displaystyle{\xi = {\rm u}' + {\rm iu} \Longrightarrow \xi' = {\rm u}'' + {\rm iu}' \Longrightarrow \xi' - {\rm i}\xi = \left\vert t\right\vert\,; \qquad {\rm u} = \Im\xi}$</p> <p>$$ {{\rm d}\left({\rm e}^{-{\rm i}t}\xi\right) \over {\rm d}t} = {\rm e}^{-{\rm i}t}\,\left\vert t\right\vert \Longrightarrow {\rm e}^{-{\rm i}t}\xi - \left(-1 + {\rm i}\right) = \int_{0}^{t}{\rm e}^{-{\rm i}t'}\,\left\vert t'\right\vert\,{\rm d}t' $$</p> <p>$$ \xi = -{\rm e}^{{\rm i}t} + {\rm i}{\rm e}^{{\rm i}t} + \int_{0}^{t} {\rm e}^{{\rm i}\left(t - t'\right)}\,\left\vert t'\right\vert\,{\rm d}t' $$</p> <p>$$ \begin{array}{|c|}\hline\\ \\ \color{#ff0000}{\large\quad{\rm u}\left(t\right) = -\sin\left(t\right) + \cos\left(t\right) + \int_{0}^{t}\sin\left(t - t'\right)\left\vert t'\right\vert\,{\rm d}t'\quad} \\ \\ \hline \end{array} $$</p> <p>${\bf ADDENDUM:}$ \begin{align} \int_{0}^{t}\sin\left(t - t'\right)\left\vert t'\right\vert\,{\rm d}t' &amp;= \left.\vphantom{\LARGE A} \cos\left(t - t'\right)\left\vert t'\right\vert\, \right\vert_{t'\ =\ 0}^{t'\ = t} - \int_{0}^{t}\cos\left(t - t'\right){\rm sgn}\left(t'\right)\,{\rm d}t' \\[3mm]&amp;= \left\vert t\right\vert - {\rm sgn}\left(t\right)\int_{0}^{t}\cos\left(t - t'\right)\,{\rm d}t' \qquad\left({\rm sgn}\left(t'\right) \equiv {\rm sgn}\left(t\right)\ \mbox{on}\ \left(0,t\right)\right) \\[3mm]&amp;= \left\vert t\right\vert - {\rm sgn}\left(t\right)\sin\left(t\right) \end{align} $$ \begin{array}{|rcl|}\hline\\ \\ \color{#ff0000}{\large\quad{\rm u}\left(t\right)} &amp; = &amp; \color{#ff0000}{\large-\left\lbrack 1 + {\rm sgn}\left(t\right)\right\rbrack\sin\left(t\right) + \cos\left(t\right) + \left\vert t\right\vert} \\[3mm] \color{#ff0000}{\large\quad{\rm u}'\left(t\right)} &amp; = &amp; \color{#ff0000}{\large-\left\lbrack 1 + {\rm sgn}\left(t\right)\right\rbrack\cos\left(t\right) - \sin\left(t\right) + {\rm sgn}\left(t\right)\quad} \\[3mm] \color{#ff0000}{\large\quad{\rm u}''\left(t\right)} &amp; = &amp; \color{#ff0000}{\large\phantom{-}\left\lbrack 1 + {\rm sgn}\left(t\right)\right\rbrack\sin\left(t\right) - \cos\left(t\right)} \\ \\ \hline \end{array} $$</p> <p>The $\delta$-terms produced by ${\rm sgn}'\left(t\right) = 2\delta\left(t\right)$ drop out, since $\delta\left(t\right)\sin\left(t\right) = 0$ and $\delta\left(t\right)\left\lbrack 1 - \cos\left(t\right)\right\rbrack = 0$.</p>
490,064
<p>Solve the Cauchy problem, $\forall t \in \mathbb{R}$, $$ \begin{cases} u''(t) + u(t) = |t|\\ u(0)=1, \quad u'(0) = -1 \end{cases} $$</p> <p>The solution to the homogeneous equation is $A\cos(t) + B \sin(t)$. Empirically, $|t|$ is "more or less" a particular solution, however it is not differentiable in $0$... What is the fastest way to find a particular solution two times differentiable?</p>
Mikasa
8,581
<p>If $t \geq 0$ then we have $u''+u=t$. Here, you can use any appropriate method to get the general solution of this ODE. For example, by using undetermined coefficients you get: $$u(t)=C_1\sin t+C_2\cos t+t$$ Remember the part $C_1\sin t+C_2\cos t$ is really the solution of the associated homogeneous ODE, $u''+u=0$. Similarly, if $t&lt;0$, then we have $u''+u=-t$ and so we get $u(t)=C_3\sin t+C_4\cos t-t$. Therefore: $$u(t)=\left\{ \begin{array}{ll} C_1\sin t+C_2\cos t+t &amp; \quad t \geq 0 \\ C_3\sin t+C_4\cos t-t &amp; \quad t &lt; 0 \end{array} \right.$$ Now use the given initial conditions to find the unknown coefficients $C_i$. Note that the solution should be continuous at the origin, i.e. $\lim_{t\to 0}u(t)=u(0)$, and $u'$ should be continuous there too, since $u''$ must exist at the origin.</p>
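Finishing the computation numerically (my own check, not part of the answer): the conditions $u(0)=1$, $u'(0)=-1$ give $C_1=-2$, $C_2=1$, and matching $u$ and $u'$ at the origin gives $C_3=0$, $C_4=1$. A centred difference confirms $u''+u=|t|$:

```python
import math

def u(t):
    if t >= 0:
        return -2 * math.sin(t) + math.cos(t) + t    # C1 = -2, C2 = 1
    return math.cos(t) - t                           # C3 = 0,  C4 = 1

h = 1e-4
for t in (-2.0, -0.5, 0.7, 2.5):
    u2 = (u(t + h) - 2 * u(t) + u(t - h)) / h**2     # second difference ~ u''(t)
    assert abs(u2 + u(t) - abs(t)) < 1e-5            # u'' + u = |t|

print(u(0.0) == 1.0)                                  # u(0) = 1
print(abs((u(h) - u(-h)) / (2 * h) + 1) < 1e-6)       # u'(0) = -1
```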
3,536,671
<p>I have the following mathematical operations to use: Add, Divide, Minimum, Minus, Modulo, Multiply and Round.</p> <p>With these I need to get a number, run it through a combination of these and return 0 if the number is negative or equal to 0 and the number itself if the number is greater than 0.</p> <p>Is that possible?</p> <p>EDIT: Minus is Subtract</p>
MPW
113,214
<p>You really want <span class="math-container">$\max\{x,0\}$</span>, which can be realized as <span class="math-container">$\boxed{-\min\{-x,0\}}$</span>.</p> <p>In your precise language, <span class="math-container">$\operatorname{Minus}(\operatorname{Min}(\operatorname{Minus}(x),0))$</span>.</p> <p>(I assume that your "Minus" is unary minus, not "Subtract". If it really is "Subtract", you can produce unary minus by Subtract(<span class="math-container">$0,x$</span>).)</p>
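In code the recipe is a one-liner (my own transcription; only Minus and Minimum are used):

```python
def clamp(x):
    """Return x if x > 0, else 0 -- i.e. max(x, 0) realised as -min(-x, 0)."""
    return -min(-x, 0)

print(clamp(7))     # 7
print(clamp(-3))    # 0
print(clamp(0))     # 0
```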
4,280,426
<blockquote> <p>We have a bag with <span class="math-container">$3$</span> black balls and <span class="math-container">$5$</span> white balls. What is the probability of picking out two white balls if at least one of them is white?</p> </blockquote> <p>If <span class="math-container">$A$</span> is the event of first ball being white and <span class="math-container">$B$</span> the second ball being white, could it be <span class="math-container">$p\bigl((A|B)\cup(B|A)\bigr)$</span>? Although <span class="math-container">$B$</span> depends on <span class="math-container">$A$</span>, I don't understand why <span class="math-container">$A$</span> depends on <span class="math-container">$B$</span>, as <span class="math-container">$B$</span> occurs after <span class="math-container">$A$</span> has occurred.</p> <p>Thank you very much for your help.</p> <p>Edit: and the probability of obtaining two white balls if I have only one white (regardless if it’s the first or the second one)? Thank you very much for your help!</p>
Mark
470,733
<p>The closure has to contain <span class="math-container">$A$</span>, so there are only two options here: <span class="math-container">$\mathbb{R}$</span> or <span class="math-container">$\mathbb{R}\setminus\{p\}$</span>. Since <span class="math-container">$\mathbb{R}\setminus\{p\}$</span> is not a closed set (its complement is clearly not open in this topology), it can't be the closure. So it has to be <span class="math-container">$\mathbb{R}$</span>.</p>
2,511,095
<p>Let $p$ be an odd prime. We know that the polynomial $x^{p-1}-1$ splits into linear factors modulo $p$. If $p$ is of the form $4k+1$ then we can write $$x^{p-1}-1=x^{4k}-1=(x^{2k}+1)(x^{2k}-1).$$ The theorem of Lagrange tells us that any polynomial congruence of degree $n$ mod $p$ has at most $n$ solutions. Hence we can deduce from this factorization that $-1$ is a quadratic residue modulo $p$. Similarly if $p$ is of the form $3k+1$ we can write $4(x^{p-1}-1)=4(x^{3k}-1)=(x^k-1)((2x^{k}+1)^2+3)$ and deduce that $-3$ is a quadratic residue mod $p$. </p> <p><strong>Can we prove in this fashion that $-2$ is a quadratic residue mod $p$ if $p$ is of the form $8k+1$ or $8k+3$?</strong></p> <p>Note that I am interested only in this specific method. I know how to prove this using different means.</p>
gt6989b
16,192
<p>Here is one approach. Note from the second equation you have $$ 3a^2 = 5/b - 13b^2 $$ which we can substitute into the first to get $$ 18 = a\left(a^2+39b^2\right) = a\left(\frac{5}{3b} - \frac{13b^2}{3}+39b^2\right) $$ so $$ 3a^2 = \left(\frac{18\cdot 3}{\frac{5}{3b} - \frac{13b^2}{3}+39b^2} \right)^2 $$ which implies an equation in $b$, $$ \frac{5}{b} - 13b^2 = \left(\frac{18\cdot 3}{\frac{5}{3b} - \frac{13b^2}{3}+39b^2} \right)^2, $$ since both sides are equal to $3a^2$.</p>
3,008,162
<p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be well-ordered sets, and suppose <span class="math-container">$f:A\to B$</span> is an order-reversing function. Prove that the image of <span class="math-container">$f$</span> is finite.</p> <p>I started by supposing not. Then we must have that the image of <span class="math-container">$f$</span>, or the set <span class="math-container">$\{f(x)\in B:x\in A\}$</span>, has infinite cardinality. If this is the case the we must have that <span class="math-container">$\vert{\{f(x)\in B:x\in A\}}\vert\geq \aleph_0$</span> which also means there exists a strictly order-preserving function <span class="math-container">$g:\mathbb{N}\to \{f(x)\in B:x\in A\}$</span>. </p> <p>The contradiction I am trying to reach is that this would imply that there exists an order-reversing function from <span class="math-container">$\mathbb{N}$</span> to an infinite image which is a subset of a well-ordered set which can't happen but I don't know how to close the gap in the argument. </p>
Arthur
15,500
<p>Hint: The image of <span class="math-container">$g$</span> is a non-empty subset of a well-ordered set. Therefore it has a minimal element.</p>
300,163
<p>I need to integrate the $z/\bar z$ (where $\bar z$ is the conjugate of $z$) counterclockwise in the upper half ($y&gt;0$) of a donut-shaped ring. The outer circle is $|z|=4$ and the inner circle is $|z|=2$. </p> <p><strong>My method:</strong></p> <p>$z/\bar z = e^{2i\theta}$ - which is entire over the complex plane. So with respect to $d\theta$, we get the integral $re^{i3\theta} d\theta$ which, we can then evaluate at r=4 (from pi to 0) and r=2 from (0 to pi)</p> <p><strong>Two questions:</strong></p> <p>1) As integrating in the counterclockwise direction, surely I shouldn't be getting a negative number?</p> <p>2) Via the deformation theorem, as the function is holomorphic on both circles and the region between them, should I not be getting 0? </p>
Sapph
61,818
<p>@rlgordonma Thank you for your help! Just a quick question: I do the same integration as you, but I seem to end up integrating $rie^{3i\theta}$ rather than just $re^{3i\theta}$ (as $dz=d(re^{i3\theta})=rie^{i3\theta}\,d\theta$).</p> <p>Which is right?</p>
688,430
<blockquote> <p>How can I show the following $$a^n|b^n \Rightarrow a|b$$ with $a,b$ integers.</p> </blockquote> <p>$$a^n|b^n \Rightarrow b^n=m \cdot a^n \Rightarrow b^n=(m\cdot a^{n-1}) \cdot a\qquad(1)$$ How can I continue? Do I maybe have to suppose the opposite and arrive at contradiction? $$\text{So } a \nmid b \Rightarrow b=q\cdot a+r$$ Replacing this at the relation $(1)$ could I conlude to something to get a contradiction? Or is there an other way to prove this??</p>
Arthur
15,500
<p>Assuming, for contradiction, that $a \nmid b$, there must be <em>some</em> prime $p$ to some power $m$ such that $p^m \mid a$, but $p^m \nmid b$. Then $p^{nm} \mid a^n$, but $p^{nm}\nmid b^n$ by simple counting of prime factors.</p>
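Not a proof, but a reassuring brute-force scan (my own script) over a small range:

```python
# Whenever a^n divides b^n, a divides b -- checked exhaustively for small values.
ok = all(
    b % a == 0
    for a in range(1, 40)
    for b in range(1, 40)
    for n in range(1, 5)
    if (b ** n) % (a ** n) == 0
)
print(ok)   # True
```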
1,325,432
<p>$f(x) = x$ , $f(x+2\pi) = f(x) $ on $ [-\pi , \pi] $ </p> <p>How do I know that this function is even or odd? My book says odd, but I don't understand how to work this out? </p> <p>also why does $a_0 = 0$ and $a_n = 0$? </p> <p>since its an odd function I thought we use the even extension? </p> <p>i.e $$ a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)dx $$</p> <p>but the answer is </p> <p>$$ b_{n} = \frac{-2}{n}\cos(n\pi) = \frac{2(-1)^{n+1}}{n} $$</p>
Mark Viola
218,419
<p>The integral of an odd function over symmetric limits is zero. To see this, we observe</p> <p>$$\begin{align} \int_{-a}^af(x)dx&amp;=\int_{-a}^0f(x)dx+\int_{0}^af(x)dx\tag1\\\\ &amp;=-\int_{0}^af(x)dx+\int_{0}^af(x)dx\tag2\\\\ &amp;=0 \end{align}$$</p> <p>where we used the substitution $x \to -x$ in going from $(1)$ to $(2)$.</p> <p>We also have that the product of an odd function and an even function is an odd function. We can show this as follows. Let $f$ be even and let $g$ be odd and let $h=fg$. Then </p> <p>$$h(-x)=f(-x)g(-x)=f(x)(-g(x))=-h(x)$$</p> <p>as was to be shown.</p> <p>For the problem at hand, note that $\cos nx$ is even about $x=0$ and $\sin nx$ is odd about $x=0$. We also note that $x$ is odd about $x=0$. Thus, we have </p> <p>$$a_n=\frac{1}{\pi}\int_{-\pi}^{\pi}x \cos (nx)\,dx=0$$</p>
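A numerical spot-check (my own sketch, plain midpoint rule) that $a_n=0$ and $b_n = \frac{2(-1)^{n+1}}{n}$ for $f(x)=x$:

```python
import math

def integrate(g, a, b, m=100_000):          # composite midpoint rule
    h = (b - a) / m
    return h * sum(g(a + (k + 0.5) * h) for k in range(m))

for n in (1, 2, 3):
    a_n = integrate(lambda x: x * math.cos(n * x), -math.pi, math.pi) / math.pi
    b_n = integrate(lambda x: x * math.sin(n * x), -math.pi, math.pi) / math.pi
    assert abs(a_n) < 1e-6                  # odd integrand, symmetric limits
    assert abs(b_n - 2 * (-1) ** (n + 1) / n) < 1e-6

print("a_n = 0 and b_n = 2(-1)^{n+1}/n verified for n = 1, 2, 3")
```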
2,106,003
<p>I was just reading about the <a href="https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox">Banach–Tarski paradox</a>, and after trying to wrap my head around it for a while, it occurred to me that it is basically saying that for any set A of infinite size, it is possible to divide it into two sets B and C such that there exists some mapping of B onto A and C onto A.</p> <p>This seems to be such a blatantly obvious, intuitively self-evident fact, that I am sure I must be missing something. It wouldn't be such a big deal if it was really that simple, which means that I don't actually understand it.</p> <p>Where have I gone wrong? Is this not a correct interpretation of the paradox? Or is there something else I have missed, some assumption I made that I shouldn't have?</p>
Mathematician 42
155,917
<p>That's not what the paradox says. It says that you can take the unit ball in $\mathbb{R}^3$ and divide it into certain disjoint subsets; then you can rotate and translate these subsets to obtain two unit balls. You need at least $5$ weird subsets if you want to do this 'explicitly'. The weird thing about this construction is that it seems that you somehow doubled the volume of the ball simply by cutting it into several parts.</p> <p>The simple explanation is that there was absolutely no reason to expect that the volume should be preserved under the construction, as some of the disjoint subsets are not measurable, i.e. have no volume. </p> <p>A first step to understanding the paradox is showing that it is impossible to define a meaningful measure on all subsets of $\mathbb{R}$ that is translation-invariant and such that the measure of an interval $[a,b]$ is $b-a$ (and a bunch of other desired properties). You can look up Vitali sets as an easy example of non-measurable sets. The subsets used in the paradox are also going to be very wild, much like the Vitali sets.</p> <p>Edit: To avoid any confusion, I just want to remark that the Banach-Tarski paradox is in fact <strong>not</strong> a paradox. Mathematically speaking, this construction of "the doubling of the ball" is possible.</p>
2,106,003
<p>I was just reading about the <a href="https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox">Banach–Tarski paradox</a>, and after trying to wrap my head around it for a while, it occurred to me that it is basically saying that for any set A of infinite size, it is possible to divide it into two sets B and C such that there exists some mapping of B onto A and C onto A.</p> <p>This seems to be such a blatantly obvious, intuitively self-evident fact, that I am sure I must be missing something. It wouldn't be such a big deal if it was really that simple, which means that I don't actually understand it.</p> <p>Where have I gone wrong? Is this not a correct interpretation of the paradox? Or is there something else I have missed, some assumption I made that I shouldn't have?</p>
Timothy
137,739
<p>The Banach-Tarski paradox is the theorem in ZFC that a sphere can be partitioned into finitely many subsets, and those subsets can then be rearranged into 2 copies of the original sphere using only translation and rotation. Actually, according to ZFC, it can be partitioned into only 5 parts.</p> <p>Some people don't think you can just take the axiom of choice on blind faith. Indeed, some mathematicians work in the formal system of ZF, and the axiom of choice is not a theorem of ZF. ZFC is ZF with the assumption of the axiom of choice. It turns out that the Banach-Tarski paradox cannot be derived in ZF. According to <a href="https://www.emis.de/journals/RSMT/61-4/393.pdf" rel="nofollow noreferrer">https://www.emis.de/journals/RSMT/61-4/393.pdf</a>, the axiom of determinacy implies Lebesgue measurability, and according to <a href="https://brilliant.org/wiki/axiom-of-determinacy/" rel="nofollow noreferrer">https://brilliant.org/wiki/axiom-of-determinacy/</a>, the axiom of determinacy is consistent with ZF.</p>
413,165
<p>I am a graduate student and I've been thinking about this fun but frustrating problem for some time. Let <span class="math-container">$d = \frac{d}{dx}$</span>, and let <span class="math-container">$f \in C^{\infty}(\mathbb{R})$</span> be such that for every real <span class="math-container">$x$</span>, <span class="math-container">$$g(x) := \lim_{n \to \infty} d^n f(x)$$</span> converges. A simple example for such an <span class="math-container">$f$</span> would be <span class="math-container">$ce^x + h(x)$</span> for any constant <span class="math-container">$c$</span> where <span class="math-container">$h(x)$</span> converges to <span class="math-container">$0$</span> everywhere under this iteration (in fact my hunch is that every such <span class="math-container">$f$</span> is of this form), eg. <span class="math-container">$h(x) = e^{x/2}$</span> or simply a polynomial, of course.</p> <p>I've been trying to show that <span class="math-container">$g$</span> is, in fact, differentiable, and thus is a fixed point of <span class="math-container">$d$</span>. Whether this is true would provide many interesting properties from a dynamical systems point of view if one can generalize to arbitrary smooth linear differential operators, although they might be too good to be true.</p> <p>Perhaps this is a known result? If so I would greatly appreciate a reference. If not, and this has a trivial counterexample I've missed, please let me know. 
Otherwise, I've been dealing with some tricky double limit using tricks such as in <a href="https://math.stackexchange.com/a/15257/354855">this MSE answer</a>, to no avail.</p> <p>Any help is kindly appreciated.</p> <p><span class="math-container">$\textbf{EDIT}$</span>: Here is a discussion of some nice consequences now that we know the answer is positive, which I hope can be generalized.</p> <p>Let <span class="math-container">$A$</span> be the set of fixed points of <span class="math-container">$d$</span> (in this case, just multiples of <span class="math-container">$e^x$</span> as we know), let <span class="math-container">$B$</span> be the set of functions that converge everywhere to zero under the above iteration. Let <span class="math-container">$C$</span> be the set of functions that converge to a smooth function under the above iteration. Then we have the following:</p> <p><span class="math-container">$C$</span> = <span class="math-container">$A + B = \{ g + h : g\in A, h \in B \}$</span>.</p> <p>Proof: Let <span class="math-container">$f \in C$</span>. Let <span class="math-container">$g$</span> be what <span class="math-container">$d^n f$</span> converges to. Let <span class="math-container">$h = f-g$</span>. Clearly <span class="math-container">$d^n h$</span> converges to <span class="math-container">$0$</span> since <span class="math-container">$g$</span> is fixed. Then we get <span class="math-container">$f = g+h$</span>.</p> <p>Now take any <span class="math-container">$g\in A$</span> and <span class="math-container">$h \in B$</span>, and set <span class="math-container">$f = g+h$</span>. 
Since <span class="math-container">$d^n h$</span> converges to <span class="math-container">$0$</span> and <span class="math-container">$g$</span> is fixed, <span class="math-container">$d^n f$</span> converges to <span class="math-container">$g$</span>, and we are done.</p> <p>Next, here I'm assuming the result of this thread holds for a general (possibly elliptic) smooth linear differential operator <span class="math-container">$d : C^\infty (\mathbb{R}) \to C^\infty (\mathbb{R}) $</span>. A first note is that fixed points of one differential operator correspond to solutions of another, i.e. of a homogeneous PDE. Explicitly, if <span class="math-container">$d_1 g = g$</span>, then setting <span class="math-container">$d_2 = d_1 - Id$</span>, we get <span class="math-container">$d_2 g = 0$</span>. This much is simple.</p> <p>So given <span class="math-container">$d$</span>, finding <span class="math-container">$A$</span> from above amounts to finding the space of solutions of a PDE. I'm hoping that one can use techniques from dynamical systems to find the set <span class="math-container">$C$</span> and thus get <span class="math-container">$A$</span> after the iterations. But I'm approaching this naively and I do not know the difficulty or complexity of such an affair.</p> <p>One thing to note is that once we find some <span class="math-container">$g \in A$</span>, we can set <span class="math-container">$h(x) = g(\varepsilon x)$</span> for small <span class="math-container">$\varepsilon$</span> and <span class="math-container">$h \in B$</span>. Conversely, given <span class="math-container">$h \in B$</span>, I'm wondering what happens when we set <span class="math-container">$f(x) = h(x/\varepsilon)$</span>, and vary <span class="math-container">$\varepsilon$</span>. 
It might not coincide with a fixed point of <span class="math-container">$d$</span>, but could very well coincide with a fixed point of the new operator <span class="math-container">$d^k$</span> for some <span class="math-container">$k$</span>. For example, take <span class="math-container">$h(x) = \cos(x/2)$</span>. The iteration converges to 0 everywhere, and multiplying the interior variable by <span class="math-container">$2$</span> we do NOT get a fixed point of <span class="math-container">$d = \frac{d}{dx}$</span> but we do for <span class="math-container">$d^4$</span>.</p> <p>I'll leave it at this, let me know again if there is anything glaringly wrong I missed.</p>
username
40,120
<p>(Edit) A simplified shorter version of this answer is in Fedor Petrov's <a href="https://mathoverflow.net/questions/413165/does-iterating-the-derivative-infinitely-many-times-give-a-smooth-function-whene#comment1059300_413165">comments</a> (as Iosif Pinellis <a href="https://mathoverflow.net/questions/413165/does-iterating-the-derivative-infinitely-many-times-give-a-smooth-function-whene#comment1059283_413230">pointed out</a>) : Assume the sequence <span class="math-container">$f^{(n)}$</span> is dominated, that is, there exists <span class="math-container">$h$</span> locally integrable such that <span class="math-container">$\left| f^{(n)}\right|\leq h$</span> for all <span class="math-container">$n$</span>. From the FTC,</p> <p><span class="math-container">$$ f^{(n-1)}\left(x\right)-f^{(n-1)}\left(0\right)=\int_{0}^{x}f^{(n)}\left(t\right)\mathrm{d}t. $$</span> Applying the Dominated Convergence Theorem, we find <span class="math-container">$$ g(x)-g(0)=\int_0^x g(t) \mathrm{d}t $$</span> which implies that <span class="math-container">$g$</span> is smooth and of the form <span class="math-container">$g(x)=A\exp(x)$</span>, for some <span class="math-container">$A$</span>.</p> <hr /> <p>It looks like Fedor Petrov has a much nicer argument, but here is a calculus-level one. Suppose that the sequence <span class="math-container">$f^{\left(n\right)}$</span> is uniformly bounded (by 10, say).</p> <p>Perform an integration by parts <span class="math-container">$$ f^{(n-1)}\left(x\right)-f^{(n-1)}\left(0\right)=\int_{0}^{x}f^{(n)}\left(t\right)\mathrm{d}t=xf^{(n)}\left(x\right)-\int_{0}^{x}tf^{\left(n\right)}\left(t\right)\mathrm{d}t. $$</span> Then apply the Dominated Convergence Theorem : <span class="math-container">$tf^{\left(n\right)}\left(t\right)\to tg(t)$</span> pointwise, and it is dominated by <span class="math-container">$10\left|x\right|$</span>, thus the integral converges to the integral of the limit and you obtain <span class="math-container">$$ g(x)=g(0)+xg(x)-\int_{0}^{x}tg(t)\,\mathrm{d}t. $$</span></p> <p>This means that outside of <span class="math-container">$x=1$</span>, if <span class="math-container">$g$</span> is integrable, it is smooth (by bootstrapping). Taking a derivative, you find this implies <span class="math-container">$$ g^{\prime}=xg^{\prime}+g-xg, $$</span> in other words, <span class="math-container">$$ g=g^{\prime} $$</span> for <span class="math-container">$x\neq1$</span>. But <span class="math-container">$x=1$</span> is a fluke, just integrate by parts twice to get another point. So if the sequence is bounded, the only possible <span class="math-container">$g$</span> is a multiple of the exponential (which is what Fedor Petrov said without the extra assumption).</p>
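As a concrete numerical illustration of this conclusion (my own sketch, not from the answer): for $f(x)=e^x+e^{x/2}$, the $n$-th derivative is $e^x+2^{-n}e^{x/2}$ in closed form, and the iterates converge pointwise to the predicted multiple of the exponential, $g(x)=e^x$.

```python
import math

def nth_derivative(n, x):
    """Closed form of the n-th derivative of f(x) = e^x + e^(x/2)."""
    return math.exp(x) + 0.5 ** n * math.exp(x / 2)

for x in (-1.0, 0.0, 2.0):
    g = math.exp(x)                       # the limit A*exp(x) with A = 1
    assert abs(nth_derivative(50, x) - g) < 1e-12
    # the error shrinks geometrically in n, like 2^{-n} e^{x/2}
    assert abs(nth_derivative(20, x) - g) < abs(nth_derivative(10, x) - g)
```

Of course this only illustrates one example from the question; the content of the answer is that the boundedness/domination hypothesis forces this behavior in general.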
413,165
<p>I am a graduate student and I've been thinking about this fun but frustrating problem for some time. Let <span class="math-container">$d = \frac{d}{dx}$</span>, and let <span class="math-container">$f \in C^{\infty}(\mathbb{R})$</span> be such that for every real <span class="math-container">$x$</span>, <span class="math-container">$$g(x) := \lim_{n \to \infty} d^n f(x)$$</span> converges. A simple example for such an <span class="math-container">$f$</span> would be <span class="math-container">$ce^x + h(x)$</span> for any constant <span class="math-container">$c$</span> where <span class="math-container">$h(x)$</span> converges to <span class="math-container">$0$</span> everywhere under this iteration (in fact my hunch is that every such <span class="math-container">$f$</span> is of this form), eg. <span class="math-container">$h(x) = e^{x/2}$</span> or simply a polynomial, of course.</p> <p>I've been trying to show that <span class="math-container">$g$</span> is, in fact, differentiable, and thus is a fixed point of <span class="math-container">$d$</span>. Whether this is true would provide many interesting properties from a dynamical systems point of view if one can generalize to arbitrary smooth linear differential operators, although they might be too good to be true.</p> <p>Perhaps this is a known result? If so I would greatly appreciate a reference. If not, and this has a trivial counterexample I've missed, please let me know. 
Otherwise, I've been dealing with some tricky double limit using tricks such as in <a href="https://math.stackexchange.com/a/15257/354855">this MSE answer</a>, to no avail.</p> <p>Any help is kindly appreciated.</p> <p><span class="math-container">$\textbf{EDIT}$</span>: Here is a discussion of some nice consequences now that we know the answer is positive, which I hope can be generalized.</p> <p>Let <span class="math-container">$A$</span> be the set of fixed points of <span class="math-container">$d$</span> (in this case, just multiples of <span class="math-container">$e^x$</span> as we know), let <span class="math-container">$B$</span> be the set of functions that converge everywhere to zero under the above iteration. Let <span class="math-container">$C$</span> be the set of functions that converge to a smooth function under the above iteration. Then we have the following:</p> <p><span class="math-container">$C$</span> = <span class="math-container">$A + B = \{ g + h : g\in A, h \in B \}$</span>.</p> <p>Proof: Let <span class="math-container">$f \in C$</span>. Let <span class="math-container">$g$</span> be what <span class="math-container">$d^n f$</span> converges to. Let <span class="math-container">$h = f-g$</span>. Clearly <span class="math-container">$d^n h$</span> converges to <span class="math-container">$0$</span> since <span class="math-container">$g$</span> is fixed. Then we get <span class="math-container">$f = g+h$</span>.</p> <p>Now take any <span class="math-container">$g\in A$</span> and <span class="math-container">$h \in B$</span>, and set <span class="math-container">$f = g+h$</span>. 
Since <span class="math-container">$d^n h$</span> converges to <span class="math-container">$0$</span> and <span class="math-container">$g$</span> is fixed, <span class="math-container">$d^n f$</span> converges to <span class="math-container">$g$</span>, and we are done.</p> <p>Next, here I'm assuming the result of this thread holds for a general (possibly elliptic) smooth linear differential operator <span class="math-container">$d : C^\infty (\mathbb{R}) \to C^\infty (\mathbb{R}) $</span>. A first note is that fixed points of one differential operator correspond to solutions of another, i.e. of a homogeneous PDE. Explicitly, if <span class="math-container">$d_1 g = g$</span>, then setting <span class="math-container">$d_2 = d_1 - Id$</span>, we get <span class="math-container">$d_2 g = 0$</span>. This much is simple.</p> <p>So given <span class="math-container">$d$</span>, finding <span class="math-container">$A$</span> from above amounts to finding the space of solutions of a PDE. I'm hoping that one can use techniques from dynamical systems to find the set <span class="math-container">$C$</span> and thus get <span class="math-container">$A$</span> after the iterations. But I'm approaching this naively and I do not know the difficulty or complexity of such an affair.</p> <p>One thing to note is that once we find some <span class="math-container">$g \in A$</span>, we can set <span class="math-container">$h(x) = g(\varepsilon x)$</span> for small <span class="math-container">$\varepsilon$</span> and <span class="math-container">$h \in B$</span>. Conversely, given <span class="math-container">$h \in B$</span>, I'm wondering what happens when we set <span class="math-container">$f(x) = h(x/\varepsilon)$</span>, and vary <span class="math-container">$\varepsilon$</span>. 
It might not coincide with a fixed point of <span class="math-container">$d$</span>, but could very well coincide with a fixed point of the new operator <span class="math-container">$d^k$</span> for some <span class="math-container">$k$</span>. For example, take <span class="math-container">$h(x) = \cos(x/2)$</span>. The iteration converges to 0 everywhere, and multiplying the interior variable by <span class="math-container">$2$</span> we do NOT get a fixed point of <span class="math-container">$d = \frac{d}{dx}$</span> but we do for <span class="math-container">$d^4$</span>.</p> <p>I'll leave it at this, let me know again if there is anything glaringly wrong I missed.</p>
Scot Adams
103,568
<p>We consider the simplest case of an elliptic operator, namely:</p> <p>The second derivative, acting on real-valued functions defined on an interval.</p> <p>The last theorem, at the very end of</p> <p><a href="https://www-users.cse.umn.edu/%7Eadams005/PaulCussonQ/ptbddevenderivs.pdf" rel="nofollow noreferrer">https://www-users.cse.umn.edu/~adams005/PaulCussonQ/ptbddevenderivs.pdf</a></p> <p>reads:</p> <hr /> <p>THEOREM. Let <span class="math-container">$a,b\in{\mathbb R}$</span>. Assume <span class="math-container">$a&lt;b$</span>. Let <span class="math-container">$I:=(a;b)$</span>.</p> <p>Let <span class="math-container">$S:=C^\infty(I,{\mathbb R})$</span>. Define <span class="math-container">$L:S\to S$</span> by: <span class="math-container">$\forall h\in S$</span>, <span class="math-container">$Lh=h''$</span>.</p> <p>Let <span class="math-container">$f\in S$</span>. Let <span class="math-container">$g:I\to{\mathbb R}$</span>. Assume <span class="math-container">$f,Lf,L^2f,\ldots\to g$</span> pointwise on <span class="math-container">$I$</span>.</p> <p>Then: <span class="math-container">$g\in S$</span> and <span class="math-container">$Lg=g$</span>.</p> <hr /> <p>I am hopeful that the arguments can be generalized to higher dimensions, using the Maximum Principle and techniques of elliptic regularity.</p>
830,977
<p>I'm having some real trouble with Lebesgue integration this evening and help is very much appreciated.</p> <p>I'm trying to show that $f(x) = \dfrac{e^x + e^{-x}}{e^{2x} + e^{-2x}}$ is integrable over $(0,\infty)$.</p> <p>My first thought was to write the integrand as $f(x) = \frac{\cosh(x)}{\cosh(2x)}$ and then note $f(x) = \frac{\cosh(x)}{\sinh(x)^2 + \cosh(x)^2}$ so that $|f(x)| \le \frac{\cosh(x)}{\cosh(x)^2}$. These all seemed like sensible steps to me at this point, and I know the integral on the right-hand side exists (Wolfram Alpha), but I'm having trouble showing it and am wondering if I have made more of a mess by introducing hyperbolic functions.</p> <p>Thanks</p>
Community
-1
<p>The given function $f$ is continuous on $(0,\infty)$ and has a finite limit at $x=0$ and $$f(x)\sim_\infty e^{-x}\in L^1(0,\infty)$$ so $f$ is integrable on $(0,\infty)$.</p>
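To make the comparison concrete, here is a small numerical sketch of my own: for $x\ge 0$ we have $e^{2x}+e^{-2x}\ge e^{2x}$ and $e^x+e^{-x}\le 2e^x$, so $f(x)\le 2e^{-x}$, and $\int_0^\infty 2e^{-x}\,dx=2$ bounds the integral from above.

```python
import math

def f(x):
    return (math.exp(x) + math.exp(-x)) / (math.exp(2 * x) + math.exp(-2 * x))

# pointwise domination by the integrable function 2*exp(-x) on (0, infinity)
for k in range(1, 500):
    x = k / 10.0
    assert f(x) <= 2.0 * math.exp(-x) + 1e-15

def quad(g, a, b, steps=100_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

total = quad(f, 0.0, 40.0)    # the truncated tail beyond 40 is below 2*exp(-40)
assert 0.4 < total < 2.0      # finite, consistent with integrability
```

This mirrors the $f(x)\sim_\infty e^{-x}$ comparison in the answer; the numerics only illustrate it, the asymptotic argument is the proof.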
2,553,284
<p>I know that $$\ln e^2=2$$ But what about this? $$(\ln e)^2$$ A calculator gave 1. I'm really confused.</p>
Community
-1
<p>Consider the equality (assuming the operations are actually <em>defined</em> for <em><code>m</code></em> and <em><code>n</code></em>):</p> <p>$$ x =\log _nm$$</p> <p>What this means is that <em><code>x</code></em> is the number to you need to raise <em><code>n</code></em> to the power of, to get <em><code>m</code></em>. In other words:</p> <p>$$ n^x = m $$</p> <p>You probably already <em>know</em> this since your question stated:</p> <p>$$\ln e^2=2$$ and the power you need to raise <em><code>e</code></em> to, to get <em><code>e<sup>2</sup></code></em>, is two.</p> <hr> <p>In the case where <em><code>n</code></em> and <em><code>m</code></em> are the <em>same</em> number, the logarithm will always be one:</p> <p>$$ x^1 = x, \space \log_xx = 1$$ $$ e^1 = e, \space \log_ee = \ln e = 1$$</p> <p>And, of course, the reason why you're getting one can be explained with:</p> <p>$$ (\ln e)^2 = (1)^2 = 1 $$</p>
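The order-of-operations distinction is easy to confirm numerically; this is a tiny illustrative check of mine:

```python
import math

# ln(e^2): square first, then take the logarithm
assert math.isclose(math.log(math.e ** 2), 2.0)

# (ln e)^2: take the logarithm first (giving 1), then square
assert math.isclose(math.log(math.e) ** 2, 1.0)

# more generally, the log of any base to itself is 1
for base in (2.0, 10.0, math.e):
    assert math.isclose(math.log(base, base), 1.0)
```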
244,214
<p>One major approach to the theory of forcing is to assume that ZFC has a countable <em>transitive</em> model $M \in V$ (where $V$ is the "real" universe). In this approach, one takes a poset $\mathbb{P} \in M$, uses the fact that $M$ is <em>countable</em> to prove that there exists a generic set $G \in V$, then defines $M[G]$ as an actual set inside $V$ and proves it is a model of ZFC.</p> <p>The downside to this approach is that a countable transitive model may not exist. For example, it is possible that $V = L$ and $V$ is a minimal model of ZFC, so that any smaller model $M \in V$ is non-standard. However, if we only want a <em>countable</em> model of ZFC, there is no problem. First, Gödel's completeness theorem shows that (assuming ZFC is consistent, of course!) there is some model $M_0 \in V$ of ZFC. Then, the Löwenheim-Skolem theorem guarantees that there is a elementary substructure $M \subseteq M_0$ which is countable in $V$. So $M$ is a countable model of ZFC, and therefore, there is a generic filter $G \in V$.</p> <p>Can we continue the proof of forcing along these lines? Of course, transitivity is convenient for many reasons (such as showing that various formulas are absolute), but is it <em>possible</em> to go without it? Perhaps we would need to modify the construction of the $\mathbb{P}$-names and $M[G]$ by only considering elements that are actually in $M$. </p> <p><strong>EDIT</strong> To be a bit more clear, I believe that forcing can be done without a countable model at all, using either the syntactic approach or an approach via Boolean-valued models. My question is more humble. The arguments of forcing are very intuitive when $M$ is a countable transitive model; why don't <em>the same</em> (up to relativizing formulas to $M$) arguments work when $M$ is just countable? </p>
Bill Mitchell
109,444
<p>This is really a comment on Hamkins' answer, but I'm not permitted to make comments so I'll write it as an answer.</p> <p>Using the standard Shoenfield machinery for forcing, there is no need for a well-founded model. The theory of forcing is developed entirely inside the model $M$, including the definition of the class of names and the forcing relation. This uses the axiom of foundation inside $M$, but not external well-foundedness. Given an externally defined generic set $G$, then, $M[G]$ is just the set of equivalence classes of names under the relation $\dot x \equiv \dot y \iff \exists p\in G ( M\vDash p\vdash \dot x = \dot y)$.</p> <p>This is, of course, essentially equivalent to Hamkins' answer.</p>
288,001
<p>Points A and B are given in the Poincaré disc model. Construct an equilateral triangle ABC. Any kind of help is welcome.</p>
DrBaxter
66,881
<p>There is indeed a shorter approach. Euclid's Proposition I holds in Hyperbolic Geometry just as well as it holds in Euclidean Geometry.</p> <p><a href="http://aleph0.clarku.edu/~djoyce/java/elements/bookI/propI1.html" rel="nofollow">http://aleph0.clarku.edu/~djoyce/java/elements/bookI/propI1.html</a></p>
1,284,039
<p>What function satisfies $f(x)+f(-x)=f(x^2)$?</p> <p>$f(x)=0$ is obviously a solution to the above functional equation.</p> <p>We can assume $f$ is continuous or differentiable or similar (if needed).</p>
Akiva Weinberger
166,353
<p>I'm going to put my comment as an answer.</p> <p>$\ln|1-x|$ seems to work: $$\ln|1-x|+\ln|1+x|=\ln|1-x^2|$$ Similarly, so does $\ln|1-x^3|$ (or any odd exponent).</p> <p>Also, any linear combination of these works, as you can check. Thus: $$\ln(1+x+x^2)$$ works because it's equal to $\ln|1-x^3|-\ln|1-x|$. The example of $\ln(1+x+x^2)$ is nice because it's defined, continuous, and infinitely differentiable everywhere.</p> <p>As far as I know, if you want it to be defined everywhere, continuous, and infinitely differentiable, this sort of thing is the only possible solution.</p>
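Both families of solutions are easy to spot-check numerically. A small sketch of my own (the test points deliberately avoid the singularities at $x=\pm1$):

```python
import math

def f(x):
    return math.log(abs(1 - x))        # ln|1 - x|

def g(x):
    return math.log(1 + x + x * x)     # ln(1 + x + x^2), defined everywhere

for x in (-2.3, -0.4, 0.37, 1.8, 5.0):
    # f(x) + f(-x) = ln|1 - x^2| = f(x^2)
    assert math.isclose(f(x) + f(-x), f(x * x), rel_tol=1e-9, abs_tol=1e-9)
    # (1 + x + x^2)(1 - x + x^2) = 1 + x^2 + x^4, so g also works
    assert math.isclose(g(x) + g(-x), g(x * x), rel_tol=1e-9, abs_tol=1e-9)
```

The second identity is exactly the $\ln|1-x^3|-\ln|1-x|$ combination described above.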
3,492,435
<p>I am reading <em><strong>Foundations of Constructive Analysis</strong></em> by Errett Bishop. In the first chapter he describes a particular construction of the real numbers. There is a intermediate definition before his primary introduction of the Real numbers:</p> <blockquote> <p>A sequence <span class="math-container">${\{x_n\}}$</span> of rational numbers is regular if</p> <p><span class="math-container">$|x_m - x_n | \le m^{-1} + n^{-1}\;\;\;\;\;(m, n\in \Bbb Z^+)$</span></p> <p><em>Chapter 1 (2.1)</em></p> </blockquote> <p>What does the negative superscript mean in this definition? Since clearly you cannot take an integer to a negative power. Am I correct in interpreting <span class="math-container">$m$</span> and <span class="math-container">$n$</span> on the right hand side of the equation as the actual elements of the sequence? I am fairly sure the definition seems to parallel the Cauchy Sequence.</p>
Mohammad Riazi-Kermani
514,496
<p>There is no need for induction.</p> <p>Integration by parts will do. <span class="math-container">$$\int _0^1 x^m(1-x)^ndx =$$</span></p> <p><span class="math-container">$$(1-x)^n \frac {x^{m+1}}{m+1}|_0^1 -\int _0^1\frac {x^{m+1}}{m+1}n(1-x)^{n-1}(-1)dx=$$</span></p> <p><span class="math-container">$$\frac{n}{m+1}\int _0^1 x^{m+1}(1-x)^{n-1}dx$$</span> </p> <p>Upon substitution you get the desired result. </p>
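Iterating the reduction $n$ times ends at $\int_0^1 x^{m+n}\,dx=\frac{1}{m+n+1}$, which yields the closed form $\frac{m!\,n!}{(m+n+1)!}$. Here is an exact-arithmetic sketch of mine that carries out exactly that iteration:

```python
from fractions import Fraction
from math import factorial

def beta_integral(m, n):
    """Integral of x^m (1-x)^n over [0, 1], evaluated exactly
    via the reduction formula obtained by integration by parts."""
    if n == 0:
        return Fraction(1, m + 1)                  # base case: x^m alone
    return Fraction(n, m + 1) * beta_integral(m + 1, n - 1)

for m in range(6):
    for n in range(6):
        closed_form = Fraction(factorial(m) * factorial(n),
                               factorial(m + n + 1))
        assert beta_integral(m, n) == closed_form
```

Using `Fraction` keeps every step exact, so the check is a genuine verification of the recursion rather than a floating-point approximation.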
803,335
<p>Note: this is particularly aimed at high-school/entry-level college problems. </p> <p>When I'm learning a new topic:</p> <p>1) I read the theory given in the textbook at the start of each topic</p> <p>2) I proceed to read the solved example problems which the textbook provides (usually 3-5 with full solutions)</p> <p>3) I then proceed to answer every question within each exercise.</p> <p>My problem is that I still forget concepts, for instance, let's say a week later (or even a few days later sometimes).</p> <p>What am I doing wrong?</p>
user21820
21,820
<p>Usually the things that we remember are those things that we both understand completely and have found useful.</p> <p>Understanding requires an intuitive grasp of why in the first place you want to consider certain mathematical objects, why a theorem should be true, and how intermediate objects were conceived. If you don't understand why someone constructed some strange object in the middle of a proof, it's impossible to have a full grasp of the underlying structure. You can know whether you fully understand a topic if you can re-derive all the theorems on your own without knowing how it was done in the textbook. Typically that means that you'll have to try to prove each theorem yourself before reading the given proof. If you get stuck, you can glance quickly through the given proof to find roughly where you have reached, and then read one or two more lines as a kind of hint so that you can try to continue by yourself. At the same time you should try to figure out what aspect of the problem you had missed in being unable to proceed. This may not be possible with some textbook proofs that pull intermediate objects out from thin air without explanation of how they got it, in which case you should ask someone familiar with the topic.</p> <p>Secondly you need to find out what it is that what you're learning is useful for. If you cannot apply it to the real world, you can still try to extract the core structure and perhaps that might appear in other more applicable areas. If you can't use what you learn, it's quite natural that you'll forget it sooner or later, even if you practise numerous times using exercises in the book.</p>
277,594
<p><a href="https://i.stack.imgur.com/yX9my.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/yX9my.gif" alt="enter image description here" /></a></p> <pre><code>Manipulate[
 ParametricPlot[{Sec[t], Tan[t]}, {t, 0, u}, PlotStyle -&gt; Dashed,
  PerformanceGoal -&gt; &quot;Quality&quot;, Exclusions -&gt; All,
  PlotRange -&gt; 2], {u, 0.001, 2 Pi}]
</code></pre> <p>I found that for parametric curves with singularities, using <code>ParametricPlot</code> with the dashed line style, there is jitter ("shake") in some animations. Is there a simple way to eliminate it?</p>
Alexei Boulbitch
788
<p>Try the following. Here is your polynomial:</p> <pre><code>expr = Sum[ Subscript[A, i, j]*Subscript[x, i]*Subscript[x, j], {i, 1, 5}, {j, 1, 5}] </code></pre> <p><a href="https://i.stack.imgur.com/PF3TZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PF3TZ.png" alt="enter image description here" /></a></p> <p>Here is the replacement:</p> <pre><code>expr /. {Subscript[x, i_]*Subscript[x, j_] -&gt; Subscript[c, i, j], \!\( \*SubsuperscriptBox[\(x\), \(i_\), \(2\)] -&gt; \*SubscriptBox[\(c\), \(i, i\)]\)} </code></pre> <p>with the following effect</p> <p><a href="https://i.stack.imgur.com/X8N6u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X8N6u.png" alt="enter image description here" /></a></p> <p>Have fun!</p>
2,384,538
<p>I am studying Linear Algebra Done Right; chapter 2, problem 6 states:</p> <blockquote> <p>Prove that the real vector space consisting of all continuous real-valued functions on the interval $[0,1]$ is infinite dimensional.</p> </blockquote> <p><strong>My solution:</strong></p> <p>Consider the sequence of functions $x, x^2, x^3, \dots$ This is a linearly independent infinite sequence of functions, so clearly this space cannot have a finite basis. However, this proof relies on the fact that no $x^n$ is a linear combination of the previous terms. In other words, is it possible for a polynomial of degree $n$ to be equal to a polynomial of degree less than $n$? I believe this is not possible, but does anyone know how to prove this? More specifically, could the following equation ever be true for all $x$?</p> <p>$x^n = \sum\limits_{k=1}^{n-1} a_kx^k$ where each $a_k \in \mathbb R$</p>
José Carlos Santos
446,262
<p>Then the polynomial $\displaystyle x^n-\sum_{k=0}^{n-1}a_kx^k$ would have infinitely many roots, but it can have $n$, at most.</p> <hr /> <p>Another way of dealing with this problem is based upon <em>defining</em> polynomials (in one variable $x$) as expressions of the type $a_0+a_1x+a_2x^2+\cdots+a_nx^n$, where $n\in\{0,1,2,\dots\}$ and each $a_n$ is real. Under this definition, the polynomial $a_0+a_1x+a_2x^2+\cdots+a_nx^n$ is equal to the polynomial $b_0+b_1x+a_2x^2+\cdots+b_nx^n$ if and only if the coefficients are equal, that is, if and only if $a_0=b_0$, $a_1=b_1$, and so on. Under this definition, the problem discussed here is trivial.</p> <p>What did I prove above then? Well, for each $P(x)\in\mathbb{R}[x]$, there is a corresponding <em>polynomial function</em> from $\mathbb R$ into $\mathbb R$. What I proved above is that this correspondence is one-to-one — when we are dealing with $\mathbb R$. It is still one-to-one if we are dealing with any field with charactristic $0$, such as $\mathbb Q$ or $\mathbb C$. But this is <em>not</em> true in general. For instance, if our field is $\mathbb{F}_2$, then $x$ and $x^2$ are distinct polynomials. But they correspond to the <em>same</em> polynomial function.</p>
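A miniature of the root-counting argument (my own sketch): try to write $x^3$ as $a+bx+cx^2$. Matching values at $x=0,1,2$ already pins down the only candidate coefficients, and that candidate fails at $x=3$, so no such identity can hold for all $x$.

```python
# Matching x^3 = a + b*x + c*x^2 at x = 0, 1, 2:
#   x = 0 gives a = 0; then b + c = 1 and 2b + 4c = 8 force c = 3, b = -2.
a, b, c = 0, -2, 3

assert a + b * 1 + c * 1 ** 2 == 1 ** 3      # agrees at x = 1
assert a + b * 2 + c * 2 ** 2 == 2 ** 3      # agrees at x = 2
assert a + b * 3 + c * 3 ** 2 != 3 ** 3      # 21 != 27: fails at x = 3
```

In the language of the answer: the difference polynomial is nonzero of degree $3$, so it has at most $3$ roots and cannot vanish at all four points $0,1,2,3$.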
159,446
<p>The ordinary Thom isomorphism says $H^{*+n}(E,E_{0}) \simeq H^{*}(X)$, where $E$ is a vector bundle over $X$ and $E_{0}$ is $E$ minus the zero section. Now assume that $S$ is a non-vanishing section of the vector bundle $E$. In each fiber $E_{x}$ we remove two points, $0_{x}$ and $S(x)$. Then we write $E_{0,1}$ for the union of all the 2-point-punctured fibers.</p> <p><strong>Motivated by the ordinary Thom isomorphism, my question is</strong></p> <blockquote> <p>What should be a relevant right side of the following equality (equivalence):</p> </blockquote> <p>\begin{equation} H^{*+n}(E,E_{0,1}) \simeq \;? \end{equation}</p> <p>What should be a generalized Thom class?</p> <p>Does this right side depend on choosing a particular non-vanishing section $S$?</p> <p>It is obvious that we can generalize the main question to multi-point-punctured fibers. That is, assume we have $m$ sections $S_{1},\ldots ,S_{m}$ such that the $m$ vectors $S_{1}(x),\ldots,S_{m}(x)$ are distinct. We remove these $m$ points from each fiber $E_{x}$ and denote the resulting total space by $E_{1,2,\ldots, m}$. We search for a relevant right side for:</p> <p>\begin{equation} H^{*+n}(E,E_{1,2, \ldots, m})=? \end{equation}</p>
Alex Degtyarev
44,953
<p>You get two copies of $H^*(X)$. In general ($k$ pairwise disjoint sections), by excision it's just like disjoint union of $k$ copies of the original bundle, hence $k$ copies of $H^*(X)$. (An extra observation is that, for one section, the result does not depend on its choice, as any section is homotopic to $0$.)</p> <p>There's no generalized Thom class: the usual one would do. You can regard it as a class in $H^n(E,E\setminus D)$, where $D$ is a disk bundle containing all the sections. This class restricts to $H^n(E,E_{\text{whatever}})$ in your notation.</p>
2,280,052
<p>Wolfram Alpha says: $$i\lim_{x \to \infty} x = i\infty$$</p> <p>I'm having a bit of trouble understanding what $i\infty$ means. In the long run, it seems that whatever gets multiplied by $\infty$ doesn't really matter. $\infty$ sort of takes over, and the magnitude of whatever is being multiplied is irrelevant. I.e., $\forall a \gt 0$:</p> <p>$$a\lim_{x \to \infty} x = \infty, -a\lim_{x \to \infty} x = -\infty$$</p> <p>What's so special about imaginary numbers? Why doesn't $\infty$ take over when it gets multiplied by $i$? Thanks.</p>
Joonas Ilmavirta
166,535
<p>Perhaps this broader view can help. (Perhaps it confuses instead!) There are several different ways to describe ways to go to infinity. I would identify these ways with different <a href="https://en.wikipedia.org/wiki/Compactification_(mathematics)" rel="nofollow noreferrer">compactifications</a>. Compactification is a formalized way of adding points at infinity.</p> <p>Let us first study the real line. The most common compactification is the two-point compactification, in which we add <span class="math-container">$+\infty$</span> and <span class="math-container">$-\infty$</span>. This means that we make a difference between infinities at the two directions, but only add one &quot;limit point&quot; at each end. Another alternative is to only add one infinity (call it <span class="math-container">$\tilde\infty$</span>), and say that <span class="math-container">$x_i\to\tilde\infty$</span> whenever <span class="math-container">$|x_i|\to\infty$</span> (in the usual sense).</p> <p>You can also add several infinities in both directions. The maximal compactification (the most infinities you can consistently add) is the <a href="https://en.wikipedia.org/wiki/Stone%E2%80%93%C4%8Cech_compactification" rel="nofollow noreferrer">Stone–Čech compactification</a> which is horribly large: there are more infinities than real numbers. The infinity can somehow branch in a peculiar way, but I will not go any deeper here. This is just to show that you can consider far more exotic infinities if you want to.</p> <p>Let us then turn to the complex plane. The most common compactification is the one-point one (known as the Riemann sphere), where a single infinity <span class="math-container">$\tilde\infty$</span> is added. In this compactification <span class="math-container">$ki$</span> tends to <span class="math-container">$\tilde\infty$</span> as <span class="math-container">$k\to\infty$</span>.</p> <p>One alternative is a radial compactification, adding one point at each direction. 
(Formally, you can take a radial diffeomorphism of the complex plane to the open unit disc, and the compactification will be the closed disc.) In this compactification you can describe infinities as <span class="math-container">$\lambda\infty$</span>, where <span class="math-container">$\lambda$</span> is a complex number of unit length. In this compactification <span class="math-container">$ki$</span> tends to <span class="math-container">$i\infty$</span> as <span class="math-container">$k\to\infty$</span>.</p> <p>The complex plane also has Stone–Čech compactification. Something yet different happens to your sequence there. (The space <span class="math-container">$\beta\mathbb C$</span> is not sequentially compact unless I'm mistaken, and I'm not sure if this particular sequence even has a limit. But that's not even important for this answer.)</p> <p>So, the limit <span class="math-container">$\lim_{k\to\infty}ik$</span> can be different things depending on how you view the infinity. There is no single correct answer. The same applies to <span class="math-container">$i\lim_{k\to\infty}k$</span>, although multiplication does not always extend nicely to limit objects. In the two reasonable compactifications of the complex plane I presented, multiplication makes sense and commutes with the limit. For the radial compactification <span class="math-container">$i\lim_{k\to\infty}k$</span> is indeed <span class="math-container">$i\infty$</span>, but for the one-point one it is <span class="math-container">$i\tilde\infty=\tilde\infty$</span>.</p>
1,317,610
<p>Let $u = u(t,x)$ satisfy the PDE $$ \frac{\partial u}{\partial t} = \frac{1}{2}c^2\frac{\partial^2 u}{\partial x^2} + (a + bx)\frac{\partial u}{\partial x} + f u, $$ where $a,b,c,f \in \mathbb{R}$ are constant.</p> <p>I'm aware of solution methods for when $c \propto x^2$ (so not constant) and $a = 0$, for which I would make the change of variables $x \mapsto \log x$ to make it constant coefficient, use the Fourier transform to make it an ODE and solve from there. This seemingly easier PDE has got me stumped, though, and I would appreciate a push in the right direction!</p>
Joffan
206,402
<p>Your calculation has a lot of overcounting, whenever there is more than one vowel present.</p> <p>If you really want to avoid (or, say, cross-check) the "negative space" method of simply excluding options with no vowels, you could perhaps sum through the possibilities of where the first vowel is:</p> <ul> <li>Vowel in first place: $5\cdot 26^7 $ ways</li> <li>First vowel in second place: $21\cdot 5\cdot 26^6 $ ways</li> <li>First vowel in third place: $21^2\cdot 5\cdot 26^5 $ ways</li> <li>etc.</li> </ul> <p>Total</p> <p>$$5\cdot 26^7 + 21\cdot 5\cdot 26^6 + 21^2\cdot 5\cdot 26^5 + 21^3\cdot 5\cdot 26^4 \\ \quad\quad+ 21^4\cdot 5\cdot 26^3 + 21^5\cdot 5\cdot 26^2 + 21^6\cdot 5\cdot 26 + 21^7\cdot 5 \\ = 5 \sum_{k=0}^7 21^k \,26^{7-k}$$</p> <p>Not easy.</p> <hr> <p>Another method is to calculate options for an exact number of vowels. This can be calculated by setting the pattern in one step, eg for three vowels:</p> <p>$$BBABBAAB$$</p> <p>which is ${8 \choose 3}$, then multiplying by the options for consonants and vowels respectively, so </p> <p>$${8 \choose 3}5^3\,21^5$$</p> <p>for exactly three vowels. Then add up all options of interest (or, if simpler, add up the non-qualifying options and subtract from total).</p>
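<p>Both counting methods above can be cross-checked against the complement count $26^8 - 21^8$ in a few lines of code (my own sketch; the variable names are mine):</p>

```python
from math import comb

total_with_vowel = 26**8 - 21**8  # the "negative space" count

# sum over the position of the first vowel
first_vowel_sum = 5 * sum(21**k * 26**(7 - k) for k in range(8))

# sum over the exact number of vowels, j = 1..8
exact_count_sum = sum(comb(8, j) * 5**j * 21**(8 - j) for j in range(1, 9))

assert first_vowel_sum == exact_count_sum == total_with_vowel
print(total_with_vowel)  # 171004205215
```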
779,696
<p>So my problem is:</p> <p>$$\arcsin (x) = \arccos (5/13)$$ </p> <p><strong>^ Solve for $x$.</strong></p> <p>How would I begin this problem? Do I draw a triangle and find the $\sin(x)$ or is there a more algebraic way of doing this? Thanks in advance for any help.</p>
Community
-1
<p>I believe that you can, without loss of generality, check the independence of $A_j$ and $A_{j+1}$, since, if these are independent, then $A_j$ and $A_k$ are independent for all $j \not= k$.</p> <p>So, since $P(A_j \cap A_{j+1})$ is the same as the $(j+1)$th draw yielding a consecutive number and the $(j+2)$th draw again yielding a consecutive number, $$ P(A_j \cap A_{j+1}) = \left(\frac{n-j}{n}\right)\left(\frac{n-(j+1)}{n}\right)=P(A_j)P(A_{j+1}) $$ so they are independent. </p>
4,114,180
<p>The Theorem is as follows:</p> <p>For any numbers x and y, the following statements are true:</p> <ol> <li><span class="math-container">$|x|&lt;y$</span> if and only if <span class="math-container">$-y&lt;x&lt;y$</span></li> <li><span class="math-container">$|x|\leq{y}$</span> if and only if <span class="math-container">$-y\leq{x}\leq{y}$</span></li> <li><span class="math-container">$|x|\geq{y}$</span> if and only if either <span class="math-container">$x\leq{-y}$</span> or <span class="math-container">$x\geq{y}$</span></li> <li><span class="math-container">$|x|&gt;y$</span> if and only if either <span class="math-container">$x&lt;{-y}$</span> or <span class="math-container">$x&gt;{y}$</span></li> </ol> <p>Here's my progress so far, <span class="math-container">$2|x|-3\geq{|x-1|}\\ \implies 2\left(|x|-\frac{3}{2}\right)\geq{|x-1|}\\ \implies |x|\geq{\frac{|x-1|}{2}+\frac{3}{2}}\\ \implies |x|\geq{\frac{|x-1|+3}{2}} \text{ can now use part 3.}\\ \implies x\leq{\frac{-|x-1|-3}{2}} \lor x\geq{\frac{|x-1|+3}{2}}\\$</span></p> <p>I had this idea of replacing <span class="math-container">$|x-1|$</span> with 1 since it is the distance from 1 to x on the number line, but that is flawed since we could've done this initially and gotten the answer. I'm not sure where to go from here. Maybe I could've applied part 2 of the proof to the beginning of <span class="math-container">$|x-1|$</span> but then I'd arrive at <span class="math-container">$-(2|x|-3)\leq{|x-1|}\leq{2|x|-3}$</span> which doesn't help me at all. Any tips or hints?</p>
fleablood
280,126
<p>Hmm... well, to take a mallet and force things to fit.</p> <p><span class="math-container">$2|x|-3 \ge |x-1| \iff$</span></p> <p><span class="math-container">$-(2|x| - 3) \le x-1 \le 2|x| - 3\iff$</span></p> <p><span class="math-container">$-2|x|+3 \le x-1 \le 2|x| - 3 \iff$</span></p> <p>(<span class="math-container">$-2|x|+3 \le x-1$</span>) and (<span class="math-container">$x -1 \le 2|x|-3$</span>)<span class="math-container">$\iff$</span></p> <p>(<span class="math-container">$-2|x|\le x-4$</span>) and (<span class="math-container">$x + 2 \le 2|x|$</span>)<span class="math-container">$\iff$</span></p> <p>(<span class="math-container">$2|x|\ge 4-x$</span>) and (<span class="math-container">$|x| \ge \frac{x+2}2$</span>)<span class="math-container">$\iff$</span></p> <p>(<span class="math-container">$2x \le x-4$</span> or <span class="math-container">$2x\ge 4-x$</span>) and (<span class="math-container">$x\le -\frac x2 - 1$</span> or <span class="math-container">$x \ge \frac x2 + 1$</span>)<span class="math-container">$\iff$</span></p> <p>(<span class="math-container">$x \le -4$</span> or <span class="math-container">$3x \ge 4$</span>) and (<span class="math-container">$\frac 32 x \le -1$</span> or <span class="math-container">$\frac 12x \ge 1$</span>)<span class="math-container">$\iff$</span></p> <p>(<span class="math-container">$x\le -4$</span> or <span class="math-container">$x\ge \frac 43$</span>) and (<span class="math-container">$x \le -\frac 23$</span> or <span class="math-container">$x \ge 2$</span>)<span class="math-container">$\iff$</span></p> <p>And combining these, this can only be true if:</p> <p>If <span class="math-container">$x \le -4$</span> then the LHS and RHS are both true.</p> <p>If <span class="math-container">$-4 &lt; x &lt; \frac 43$</span> then the LHS is false, so it isn't true.</p> <p>If <span class="math-container">$\frac 43 \le x &lt; 2$</span> then the RHS is false, so it isn't true.</p> <p>And if <span class="math-container">$x \ge 2$</span>
then the LHS and RHS are both true.</p> <p>So this is true if and only if <span class="math-container">$x\le -4$</span> or <span class="math-container">$x \ge 2$</span>.</p> <p>......</p> <p>I really don't recommend forcing yourself to use a theorem when there is a more common-sense way of just taking the cases <span class="math-container">$x &lt;0; 0\le x &lt; 1; x \ge 1$</span> and doing it directly.</p> <p>(That is, if <span class="math-container">$x&lt; 0$</span> then <span class="math-container">$-2x-3 \ge 1-x\implies x \le -4$</span>.)</p> <p>(If <span class="math-container">$0\le x &lt; 1$</span> then <span class="math-container">$2x-3 \ge 1-x \implies x\ge \frac 43$</span>, which it isn't on the interval <span class="math-container">$0\le x &lt; 1$</span>.)</p> <p>(If <span class="math-container">$x \ge 1$</span> then <span class="math-container">$2x -3 \ge x-1\implies x\ge 2$</span>.)</p> <p>(So <span class="math-container">$x\le -4$</span> or <span class="math-container">$x \ge 2$</span>.)</p>
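<p>For what it's worth, the conclusion can also be brute-checked numerically (a quick sketch of my own, using exact rational arithmetic so the boundary points $x=-4$ and $x=2$ are handled exactly):</p>

```python
from fractions import Fraction

# Check on a 0.1-spaced grid over [-10, 10] that 2|x| - 3 >= |x - 1|
# holds exactly when x <= -4 or x >= 2.
for k in range(-100, 101):
    x = Fraction(k, 10)
    assert (2 * abs(x) - 3 >= abs(x - 1)) == (x <= -4 or x >= 2)
print("inequality holds exactly on x <= -4 or x >= 2 over the grid")
```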
200,920
<p>I would like to create an array f containing n indices. The labels of those indices are stored in a list of length n; let's call it "list".</p> <p>So I would like to have something like:</p> <blockquote> <p>{f[list[[0]]], f[list[[1]]], ...}</p> </blockquote> <p>The point is to assign the f[list[[i]]] to some values afterwards.</p> <p>I tried to make a table, but the problem is that the indices cannot be a list from what I have seen (in the sense that they must be regularly spaced).</p> <p>I also tried to use the Array function, but I have the same problem: I cannot specify the indices as a list, it must be a regular spacing.</p> <p>How can I do it?</p>
Alex Trounev
58,388
<p>We can increase <code>n</code>, reducing the accuracy of the calculations, for example:</p> <pre><code>f[m_, p_] := Block[{n = m, $MinPrecision = p, $MaxPrecision = p}, intVars = Table[{Subscript[t, i], -\[Infinity], \[Infinity]}, {i, 1, n}]; poly = Sum[Subscript[t, i]^4, {i, 1, n}] + (1 - Sum[Subscript[t, i], {i, 1, n}])^4; NIntegrate[E^-poly, ##, PrecisionGoal -&gt; p/2, AccuracyGoal -&gt; p/2] &amp; @@ intVars] f[4, 6] (* 4.47148 *) f[5, 4] (* 7.75615 *) f[6, 2] (* 15.8289 *) </code></pre>
2,655,518
<p>Given $2ac=bc$, find the ratio ($K$): what is the ratio of their areas? <a href="https://i.stack.imgur.com/9NPRi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9NPRi.png" alt="enter image description here"></a> I found that it is $2$ or $1/2$; is that true?</p> <p>If the question isn't clear, please let me know and I will try to make it clearer.</p>
Patrick Stevens
259,262
<p>We say that $y$ <strong>is a square root of $x$</strong> if $y^2 = x$.</p> <p>We define a function $\sqrt{\cdot} : \mathbb{R}^+ \to \mathbb{R}$ ("the square root function") by $$\sqrt{x} := \text{the nonnegative number $y$ such that $y^2 = x$}$$</p> <p>So you can see that $\sqrt{x}$ is a square root of $x$.</p> <p>Not every square root of $4$ is equal to $\sqrt{4} = 2$. It turns out to be the case that $-\sqrt{4} = -2$ is also a square root of $4$.</p> <p>When we refer to <em>the</em> square root of $x$, we mean $\sqrt{x}$; that is, the unique nonnegative number which squares to give $x$. When we refer to <em>a</em> square root of $x$, we mean any of the numbers which square to give $x$. It is a fact that there are usually two of these, and that one is the negative of the other; so in practice, we may refer to $\pm \sqrt{x}$ if we wish to identify all the square roots of a number. Only the positive one - that is, $\sqrt{x}$ - is the "principal" square root (or "the square root", or if it's really confusing from context, "the positive square root"); but both are square roots.</p>
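<p>As a side note (my own illustration, not part of the answer): the <code>sqrt</code> in most programming languages implements exactly this principal square root, returning only the nonnegative root.</p>

```python
import math

x = 4
principal = math.sqrt(x)  # the principal (nonnegative) square root
assert principal == 2.0
# -principal is the other square root: both square back to x
assert principal ** 2 == x and (-principal) ** 2 == x
```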
1,081,447
<p>I'm talking about a Roulette wheel with $38$ equally probable outcomes. Someone mentioned that he guessed the correct number five times in a row, and said that this was surprising because the probability of this happening was $$\left(\frac{1}{38}\right)^5$$</p> <p>This is true if you only play the game $5$ times. However, if you play it more than $5$ times there's a higher (should be much higher?) probability that you'll get $5$ in a row at <em>some point</em>. </p> <p>I was thinking about <em>how</em> surprised this person should be at their streak of $m$ correct guesses given that they play $n$ games, each with probability $p$ of success. It makes intuitive sense that their surprise should be proportional to $1/q$ (or maybe $\log(1/q)$ since $1$ in a billion doesn't surprise you $10$ times more than $1$ in $100$ million), where $q$ is the probability that they get at least one streak of $m$ correct guesses at some point in their $n$ games. </p> <p>So, with the Roulette example I was thinking about, $p=1/38$ and $m=5$. </p> <p>I tried to find an explicit formula for $q$ in terms of $n$, and encountered some difficulty, because of the non-independence of "getting a streak in the first five tries" and "getting a streak in tries $2$ through $6$" (if the first is a failure, it's much more likely that the second will be too). </p> <hr> <p>In summary, two questions:</p> <ol> <li><p>How do I find the probability that you get $5$ correct guesses in a row at some point if you play $n$ games of Roulette?</p></li> <li><p>More generally, what is the probability that you get $m$ successes in a row at some point in a series of $n$ events, each with probability $p$ of success? 
</p></li> </ol> <p>The variables satisfy $\,\,\,m,n \in \mathbb{N}$, $\,\,\,m\leq n$, $\,\,\,p \in \mathbb{R}$, $\,\,\,0 \leq p \leq 1$.</p> <hr> <p>If we write the answer to the second question as a function $q(m,n,p)$, then we can say that $q$ should be increasing with $n$, decreasing with $m$, and increasing with $p$. It should equal $p^n$ when $m=n$ and should equal $1$ when $p=1$ and $0$ when $p=0$. </p> <p>I feel as though this should be a basic probability problem, but I'm having trouble solving it. Maybe some kind of recursive approach would work? Given $q(n,m,p)$, I think I could write $q(n+1,m,p)$ using the probability that the last $m-1$ results are all successes ...</p>
Empy2
81,790
<p>You have a six-state system.<br> State 1: Not on a run. Either you haven't started, or the last guess was wrong.<br> State 2: The last guess was correct.<br> State 3: The last two guesses were correct.<br> State 4: The last three guesses were correct.<br> State 5: The last four guesses were correct.<br> State 6: You have a 5-in-a-row.<br> The transition matrix is $$A=\left(\begin{array}{cccccc} 1-p&amp;p&amp;0&amp;0&amp;0&amp;0\\ 1-p&amp;0&amp;p&amp;0&amp;0&amp;0\\ 1-p&amp;0&amp;0&amp;p&amp;0&amp;0\\ 1-p&amp;0&amp;0&amp;0&amp;p&amp;0\\ 1-p&amp;0&amp;0&amp;0&amp;0&amp;p\\ 0&amp;0&amp;0&amp;0&amp;0&amp;1 \end{array}\right)$$<br> The initial vector is $\vec{v}=(1,0,0,0,0,0)$<br> To find the probabilities after $n$ rounds, calculate $\vec{v}A^n$</p>
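<p>The iteration $\vec{v}A^n$ is easy to run in code. Below is a small sketch (function and variable names are mine) that builds the same $(m+1)$-state chain for general $m$ and $p$ with exact rational arithmetic; for $n = m$ it recovers $p^m$, as it should.</p>

```python
from fractions import Fraction

def streak_probability(n, m, p):
    """P(at least one run of m successes in n trials), via the Markov chain:
    state k < m means "current run has length k"; state m is absorbing."""
    q = 1 - p
    dist = [Fraction(0)] * (m + 1)
    dist[0] = Fraction(1)              # start: not on a run
    for _ in range(n):
        new = [Fraction(0)] * (m + 1)
        for k in range(m):
            new[0] += dist[k] * q      # a wrong guess resets the run
            new[k + 1] += dist[k] * p  # a correct guess extends it
        new[m] += dist[m]              # once achieved, stay achieved
        dist = new
    return dist[m]

p = Fraction(1, 38)
assert streak_probability(5, 5, p) == p ** 5  # n = m gives exactly p^m
print(float(streak_probability(100, 5, p)))   # chance of a 5-streak in 100 spins
```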
832,710
<p>Does there exist an algebraic structure $(\mathbb{K},+)$ such that equations of the form $x+a=x+b$, $a\neq b$ have solutions for all $a,b\in \mathbb{K}$?</p>
Ragavendar Nannuri
427,504
<p>Let $L$ be the width of the lane. Consider the segment of straight line passing through the corner between the lane and the road, going from the opposite side of the road to the opposite side of the lane. Suppose the segment makes an angle $\theta\in (0,\frac{\pi}{2})$ with the road. The length of such a segment is a function $f$ of $\theta$: $$f(\theta)=\frac{L}{\cos \theta}+\frac{64}{\sin \theta}$$ Note that $$\lim_{\theta \to 0}f(\theta)=\lim_{\theta \to \frac{\pi}{2}}f(\theta)=+\infty$$ so $f$ has a minimum in $(0,\frac{\pi}{2})$.</p> <p>Let $L$ be the minimum width of the lane such that a pole 125 feet long can be carried from the road into the lane, keeping it horizontal. Then, when we minimize $f$, we will find an angle $\alpha\in (0,\frac{\pi}{2})$ such that $$\frac{L}{\cos \alpha}+\frac{64}{\sin \alpha}=125 \qquad \qquad(1)$$ On the other hand, we will also have $f'(\alpha)=0$, which means $$\frac{\sin \alpha}{\cos^2 \alpha}L-\frac{\cos \alpha}{\sin^2 \alpha}64=0$$ So $$L=\frac{\cos^3 \alpha}{\sin^3 \alpha}64$$ Substituting for $L$ in $(1)$ and multiplying by $\sin^3 \alpha$, we have</p> <p>$$64 \cos^2 \alpha + 64 \sin^2 \alpha =125 \sin^3 \alpha$$ So $$64 =125 \sin^3 \alpha$$ So $$ \sin \alpha =\frac{4}{5} \qquad \textrm{ and }\qquad \cos \alpha =\frac{3}{5}$$ Substituting these in $(1)$ and dividing by $5$, we get $$\frac{L}{3}+\frac{64}{4}=25$$ So $$L=27$$ The minimum width of the lane is $27$ feet.</p>
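<p>A quick numeric sanity check of this answer (my own sketch; the closed form in the comment is the classic formula for this corner problem, quoted here as an added assumption rather than part of the derivation above):</p>

```python
import math

L, road = 27, 64  # lane width found above and the given 64-foot road

def f(theta):
    """Length of the segment through the corner at angle theta to the road."""
    return L / math.cos(theta) + road / math.sin(theta)

# minimum of f over (0, pi/2) on a fine grid
grid_min = min(f(k * (math.pi / 2) / 100001) for k in range(1, 100001))

# classic closed form for the longest pole: (L^(2/3) + road^(2/3))^(3/2)
closed_form = (L ** (2 / 3) + road ** (2 / 3)) ** 1.5

print(round(grid_min, 3), round(closed_form, 3))  # both should be ~125.0
```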