# The converse of “nilpotent elements are zero-divisors”

For commutative rings $$A$$ with identity $$1\ne0$$, nilpotent elements are zero-divisors. The converse is false, i.e. there is a commutative ring $$A$$ with identity $$1\ne0$$ and a zero-divisor $$x$$ in $$A$$ which is not nilpotent. What is an example of such a ring? Is it meaningful to ask for the "percentage" of commutative rings $$A$$ with identity $$1\ne0$$ for which the converse holds, i.e. in which zero-divisors are nilpotent?

• Have you thought about the integers modulo $n$? – Lord Shark the Unknown Dec 22 '18 at 12:45
• You can also search yourself at this site, e.g. here, for useful links. – Dietrich Burde Dec 22 '18 at 12:51
• @Lord Shark the Unknown: Thank you. Right, in $\mathbb{Z}/10\mathbb{Z}$, $2$ is a zero-divisor but not nilpotent. How did you come up with this example? How about the next question? – user584333 Dec 22 '18 at 12:52
• Sai, have a look at this post, which gives an answer describing which rings have this property. – Dietrich Burde Dec 22 '18 at 12:56
• @Dietrich Burde: Thank you for the reference. – user584333 Dec 22 '18 at 13:01

An example is the integers mod $$6$$, where $$2\cdot 3=0$$, but no power of either of these individually is zero.

If we had some sort of measure on a set of rings (which could not possibly be all rings, because there are too many) with finite total measure, we could ask about proportion. I know of no such commonly used measure, but it's possible such a thing has been considered. My intuition is that the proportion would be small, but I can't think of a quick reason why, other than that being nilpotent seems like a special property while being a zero-divisor seems very common. For example, the only time the converse holds for the ring of integers modulo $$n$$ is when $$n$$ is a prime power, and there are vanishingly few of those compared to all integers.
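Both the examples above and the closing prime-power remark can be confirmed by brute force. The following Python sketch (the helper names are mine; it relies on the fact that any nilpotent element of $\mathbb{Z}/n$ satisfies $x^n \equiv 0 \pmod n$) checks small moduli:

```python
def zero_divisors(n):
    """Nonzero x in Z/n with x*y == 0 (mod n) for some nonzero y."""
    return {x for x in range(1, n)
            if any((x * y) % n == 0 for y in range(1, n))}

def nilpotents(n):
    """Nonzero nilpotent x in Z/n; testing x^n suffices."""
    return {x for x in range(1, n) if pow(x, n, n) == 0}

def is_prime_power(n):
    p = next(p for p in range(2, n + 1) if n % p == 0)  # smallest prime factor
    while n % p == 0:
        n //= p
    return n == 1

# The examples from the thread: zero-divisors that are not nilpotent.
assert zero_divisors(6) == {2, 3, 4} and nilpotents(6) == set()
assert 2 in zero_divisors(10) and 2 not in nilpotents(10)

# "the only time this is true ... is when n is a prime power":
assert all((zero_divisors(n) <= nilpotents(n)) == is_prime_power(n)
           for n in range(2, 60))
```

For prime $n$ the check holds vacuously, since a field has no zero-divisors at all.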
If you look at the answer by rschwieb to the question Under what conditions does a ring R have the property that every zero divisor is a nilpotent element? (linked above in a comment by Dietrich Burde; it is not the accepted answer), it characterizes these rings. A ring has this property if and only if it is the quotient of an arbitrary commutative ring by a primary ideal. An ideal $$I$$ is said to be primary if whenever $$a,b\in R$$ satisfy $$ab\in I$$, then either $$a\in I$$ or $$b^n\in I$$ for some $$n>0$$. This answers part of your question.

It is nicely illustrated in the $$\Bbb Z_n$$ case: an ideal $$n\Bbb Z$$ of $$\Bbb Z$$ is primary if and only if $$n=p^k$$ for some prime $$p$$ and integer $$k>0$$. So rings like these are actually "close" to integral domains, which are quotients of arbitrary commutative rings by prime ideals. Again, I have no quantitative argument for why they should be rare, only an intuitive one.

• Couldn't one equip a ring $R$ with the Zariski topology, then produce a Borel sigma algebra, and equip it with a pre-measure, which can then be extended to a measure by Caratheodory's extension theorem, to at least begin discussing "percentages" when the ring itself is of finite measure? – Chickenmancer Dec 22 '18 at 21:44
• @Chickenmancer that wouldn't give a percentage of rings, I suspect you mean to give a percentage of zero divisors that are nilpotent? – jgon Dec 22 '18 at 21:50
• Ah, I misinterpreted the question as "what percentage of a ring $A$." Thanks for clarifying, @jgon. – Chickenmancer Dec 22 '18 at 22:15
• Curious about the downvote. An explanation would be helpful. – Matt Samuel Dec 23 '18 at 17:43

One perspective on what proportion of rings have no non-nilpotent zero divisors is the following. Note that this is really a rough first thought.
First let's be careful though: rings with no zero-divisors trivially have this property, so I'll consider which rings with zero-divisors have the property that all zero-divisors are nilpotent.

Step 1: Set up a space to parametrize a nice class of rings

Let $$k$$ be an algebraically closed field. Let $$n,d_1,d_2 > 0$$. Let $$x=(x_1,x_2,\ldots,x_n)$$ be coordinates on $$\mathbb{A}^n_k$$. Let $$r_1=\binom{n+d_1}{d_1}$$, which is the number of monomials in $$k[x_1,\ldots,x_n]$$ of degree at most $$d_1$$. Thus there are $$r_1$$ multiindices $$I$$ of degree at most $$d_1$$. Let $$r_2=\binom{n+d_2}{d_2}$$ as well. Let $$c=(c_I)$$ be coordinates on $$\mathbb{P}^{r_1-1}_k$$, and let $$d=(d_J)$$ be coordinates on $$\mathbb{P}^{r_2-1}_k$$, as $$I$$ ranges over multiindices of degree at most $$d_1$$ and $$J$$ ranges over multiindices of degree at most $$d_2$$.

Then consider the subvariety, $$V$$, of $$\mathbb{A}^n_k\times_k\mathbb{P}^{r_1-1}_k\times_k \mathbb{P}^{r_2-1}_k$$ cut out by the polynomial $$f_{c,d}(x)=g_c(x)g_d(x):=\left(\sum_I c_Ix^I\right)\left(\sum_J d_Jx^J\right).$$ $$V$$ is equipped with a natural map $$V\to \mathbb{P}^{r_1-1}_k\times_k\mathbb{P}^{r_2-1}_k$$, and the fiber over a point $$(a,b)$$ in $$\mathbb{P}^{r_1-1}_k\times_k\mathbb{P}^{r_2-1}_k$$ is the subvariety of $$\mathbb{A}^n_k$$ cut out by the reducible polynomial of degree at most $$d_1+d_2$$, $$f_{a,b}(x)=g_a(x)g_b(x).$$

This subvariety of $$\mathbb{A}^n_k$$ can be thought of as corresponding to the ring $$k[x_1,\ldots,x_n]/(g_ag_b)$$, so we can think of our variety $$V$$ as parametrizing a certain nice class of rings, all of which (except for a Zariski closed subset where $$g_a=0$$ or $$g_b=0$$) are guaranteed to have zero-divisors, since $$g_a$$ and $$g_b$$ are zero-divisors.

Step 2: Investigate the points corresponding to rings with the desired property and related properties

We can then ask if we can characterize the subset of $$V$$ corresponding to rings in which all zero-divisors are nilpotent.
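As a sanity check on the counts $$r_1$$ and $$r_2$$ in Step 1: the number of monomials in $n$ variables of total degree at most $d$ is $\binom{n+d}{d}$, which a small brute-force enumeration (parameter values chosen arbitrarily) confirms:

```python
from itertools import product
from math import comb

def count_monomials(n, d):
    # enumerate exponent vectors (e_1, ..., e_n) with e_1 + ... + e_n <= d
    return sum(1 for e in product(range(d + 1), repeat=n) if sum(e) <= d)

for n, d in [(1, 3), (2, 2), (3, 4)]:
    assert count_monomials(n, d) == comb(n + d, d)
```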
Well, what are the zero-divisors in $$k[x_1,\ldots,x_n]/(f_{a,b})$$? Since $$k[x_1,\ldots,x_n]$$ is a UFD, handily enough, the zero-divisors are precisely the factors of $$f_{a,b}$$. Moreover, $$k[x_1,\ldots,x_n]/(f_{a,b})$$ has nonzero nilpotents if and only if $$f_{a,b}$$ is not square-free (consider the image of the square-free part of $$f_{a,b}$$ in the quotient: it is nilpotent, and it is nonzero precisely when $$f_{a,b}$$ is not square-free). However, the ring $$k[x_1,\ldots,x_n]/(f_{a,b})$$ satisfies our property that all zero-divisors are nilpotent if and only if $$f_{a,b}$$ is a power of an irreducible polynomial.

Step 3: Conclude

Note that $$f_{a,b}$$ is square-free if and only if it is relatively prime to $$f_{a,b}'$$, which is true if and only if $$\operatorname{Disc}(f_{a,b})\ne 0$$. Thus every point $$(a,b)\in\mathbb{P}^{r_1-1}_k\times_k\mathbb{P}^{r_2-1}_k$$ corresponding to a ring with any nilpotents at all, let alone one in which every zero-divisor is nilpotent, satisfies a polynomial equation, $$\operatorname{Disc}(f_{a,b})=0$$. Thus rings in our parametrized class with any nilpotents at all correspond to a (proper) Zariski closed subset of $$\mathbb{P}^{r_1-1}_k\times_k\mathbb{P}^{r_2-1}_k$$, which is irreducible (see here). Thus rings in our parametrized class satisfying your property (all zero-divisors are nilpotent) are contained in a codimension 1 class of rings (those with any nilpotents). Hence almost all rings in our parametrized class have zero-divisors that are not nilpotent.

Notes

This is really rough. The parametrized class of rings is a very specific, very nice subset of $$k$$-algebras. Nonetheless, hopefully it will give you (or other readers) intuition on why very few rings should have the property you want (as long as they have some zero-divisors, of course).
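As an aside, the square-free criterion used in Step 3 is easy to illustrate in one variable: over $\Bbb Q$, a polynomial $f$ is square-free exactly when $\gcd(f, f')$ is constant. A self-contained Python sketch (polynomials as coefficient lists, lowest degree first; all helper names are mine):

```python
from fractions import Fraction

def deriv(f):
    return [Fraction(i) * c for i, c in enumerate(f)][1:]

def trim(f):
    while f and f[-1] == 0:
        f = f[:-1]
    return f

def poly_mod(f, g):
    """Remainder of f divided by g (g nonzero), exact rational arithmetic."""
    f, g = trim(list(f)), trim(list(g))
    while len(f) >= len(g):
        q = f[-1] / g[-1]
        shift = len(f) - len(g)
        for i, c in enumerate(g):
            f[i + shift] -= q * c
        f = trim(f)
    return f

def poly_gcd(f, g):
    f, g = trim(list(f)), trim(list(g))
    while g:
        f, g = g, poly_mod(f, g)
    return f

def square_free(f):
    return len(poly_gcd(f, deriv(f))) <= 1   # gcd is a nonzero constant

# (x - 1)*(x + 2) = x^2 + x - 2 is square-free; (x - 1)^2 = x^2 - 2x + 1 is not.
sf = [Fraction(c) for c in (-2, 1, 1)]
nsf = [Fraction(c) for c in (1, -2, 1)]
assert square_free(sf) and not square_free(nsf)
```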
I needed to eliminate integral domains because I chose to parametrize hypersurfaces, and most hypersurfaces are irreducible. (Check out the link; Qiaochu Yuan gives a really nice quick proof of this fact.) There's a reasonable chance that I wouldn't have needed to eliminate integral domains if I'd chosen e.g. codimension 2 subvarieties, but those are much harder to characterize.

• I was thinking of excluding integral domains as well, but notice for integers that isn't necessary. A ring of integers modulo $n$ is an integral domain only if $n$ is prime. – Matt Samuel Dec 23 '18 at 0:41
• @MattSamuel, I originally didn't think to do so, but unfortunately, the particular class of rings I chose to parametrize requires me to do so, since most hypersurfaces in dimensions $\ge 2$ are irreducible. I suspect I wouldn't have to eliminate integral domains if I'd chosen e.g. codimension 2 subspaces, but those are harder to work with. – jgon Dec 23 '18 at 1:07
• I'll add a link to a great answer by Qiaochu Yuan justifying the claim that most hypersurfaces are irreducible when I eventually get back on a computer and I remember. – jgon Dec 23 '18 at 1:08

Another class of examples is matrix rings over a field. An $$n\times n$$ matrix $$A$$ is a zero-divisor if and only if $$A$$ is not invertible. Certainly if $$A$$ is a zero-divisor then $$A$$ cannot be invertible. If $$A$$ is not invertible then $$\ker A \ne 0$$, and taking any non-zero matrix $$B$$ with $$\operatorname{col}B \subseteq \ker A$$ you have $$AB = 0$$. For example, let $$v$$ be a non-zero vector with $$Av = 0$$ and let $$B$$ be the matrix whose columns are all $$v$$.

The nilpotent matrices are rare among the zero-divisors. Specifically, nilpotent matrices are solutions to the equation $$A^n = 0$$. As a rule, there are always more non-solutions to a polynomial equation than there are solutions.
In precise terms, we look at the zero set $$\{A : \det A = 0\}$$ (the set of zero-divisors). This zero set is a manifold (away from its singular points) if the underlying field is $$\mathbf{C}$$, and is a variety in the general case. The non-nilpotent matrices are dense in it in the Zariski topology over a general field, and dense in the classical topology over $$\mathbf C$$.
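A concrete $2\times 2$ instance of the argument above, with hand-rolled matrix multiplication (the names are mine): a singular matrix is a zero-divisor but need not be nilpotent.

```python
def matmul(A, B):
    """2x2 matrix product over the integers."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0],
     [0, 0]]   # det A = 0, so A is not invertible
B = [[0, 0],
     [0, 1]]   # columns of B lie in ker A

assert matmul(A, B) == [[0, 0], [0, 0]]   # A B = 0: A is a zero-divisor
assert matmul(A, A) == A                  # A^k = A != 0, so A is not nilpotent
```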
# Positive or negative remainder

Does 23 = 5(-4) - 3 give a remainder of -3 when divided by 5? Is this statement true? Some of my colleagues said that a remainder cannot be negative by definition, but I doubt that; can -3 be a remainder too?

fresh_42
Mentor

Usually we consider entire equivalence classes in such cases: every single element of ##\{\ldots, -13, -8, -3, 2, 7, 12, \ldots\}## belongs to the same remainder of a division by ##5##. We then define all five possible classes

##\{\ldots, -15, -10, -5, 0, 5, 10, \ldots\}##
##\{\ldots, -14, -9, -4, 1, 6, 11, \ldots\}##
##\{\ldots, -13, -8, -3, 2, 7, 12, \ldots\}##
##\{\ldots, -12, -7, -2, 3, 8, 13, \ldots\}##
##\{\ldots, -11, -6, -1, 4, 9, 14, \ldots\}##

as elements of a new set with five elements ##\{ \; \{\ldots, -15, -10, -5, 0, 5, 10, \ldots\}\, , \, \{\ldots, -14, -9, -4, 1, 6, 11, \ldots\}\, , \, \ldots \}##. This notation is a bit nasty to handle, so we choose one representative out of every set. E.g. ##\{[-15],[-9],[12],[3],[-1]\}## could be chosen, but this is still a bit messy to do calculations with. So the most convenient representation is ##\{[0],[1],[2],[3],[4]\}##, with the non-negative remainders smaller than ##5##. However, this is only a convention. ##-3## is a remainder too, belonging to the class ##[2]##.

So the answer to your questions is: the statement is true, as all integers are remainders.

mfb
Mentor

The remainder is usually required to be between 0 and N-1 inclusive. 23 and -2 (not -3) are in the same equivalence class. This can also be written as 23 = -2 mod 5.
Sorry, it should be -23 = 5(-4) - 3. So, in conclusion, is this statement true?

jbriggs444
Homework Helper

"-23 divided by 5 is -4 with a remainder of -3". I would consider that statement true.

"-23 divided by 5 is -5 with a remainder of 2". I would also consider that statement to be true.

The convention you use for integer division will determine which of those statements is conventional and which is unconventional. In many programming languages, integer division follows a "truncate toward zero" convention. For instance, in Ada, -23/5 = -4. The "rem" operator then gives the remainder, so -23 rem 5 = -3.

If one adopts a convention that integer division (by a positive number) truncates toward negative infinity, then one gets a different conventional remainder: -23/5 would be -5 and -23 mod 5 would be +2. The Ada "mod" operator uses this convention.

In mathematics, one typically adopts the line of reasoning given by @fresh_42 in post #2 above. The canonical exemplar in the equivalence class of possible remainders is normally the one in the range from 0 to divisor - 1.
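The two conventions jbriggs444 describes can be placed side by side in Python (used here purely for illustration): // and % floor toward negative infinity, while truncating the quotient toward zero reproduces the behavior of Ada's rem.

```python
import math

# Floor convention: -23 = 5*(-5) + 2
assert divmod(-23, 5) == (-5, 2)
assert -23 % 5 == 2

# Truncate-toward-zero convention: -23 = 5*(-4) - 3
q = int(-23 / 5)            # truncates toward zero, giving -4
r = -23 - 5 * q
assert (q, r) == (-4, -3)
assert math.fmod(-23, 5) == -3.0   # fmod also truncates
```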
Homework Help: Help with circular motion

1. Dec 15, 2004

newcool

Hi, I received this problem today. If you put a small marble of mass m on top of a large round object that has a radius r, and then release the marble, when will it come off of the side, so that the large round object is no longer in contact with the marble? The answer is the arccosine of 2/3. I have tried making free body diagrams of the object when the normal force is 0 and got this equation: a cos(theta) = g. I am stuck as to how to get the right answer. Thanks for any help.

2. Dec 15, 2004

Staff: Mentor

As the marble rolls down the sphere, it will maintain contact as long as there is sufficient force to produce the required centripetal acceleration. Ask yourself: What provides the centripetal force? How does the required centripetal acceleration depend on the angle that the marble makes with the vertical?

3. Dec 20, 2004

newcool

I made a free body diagram of when the sphere has a normal force of 0. I got one force, $F_c$, pointing in towards the center, and another force, mg, pointing down. Taking the y component of $F_c$, I got that:

$$F_c\cos(\theta) + mg = ma$$
$$\frac{mv^2}{r}\cos(\theta) + mg = \frac{mv^2}{r}$$
$$\frac{v^2}{r}\cos(\theta) + g = \frac{v^2}{r}$$

I tried a different approach using the fact that the sum of all energy changes is equal to 0:

$$0 = \Delta K + \Delta U_g$$
$$0 = \frac{1}{2}mv^2 - mgh$$
$$mgh = \frac{1}{2}mv^2$$
$$gh = \frac{1}{2}v^2$$
$$h = r - r\cos(\theta)$$

so

$$gr(1 - \cos(\theta)) = \frac{1}{2}v^2$$

However, I have gotten nowhere with these equations. Any input on what I did wrong would be appreciated. Thanks.

4. Dec 20, 2004

Pyrrhus

Ok, the answer is the arccosine of 2/3 = 48.2 degrees.

Force analysis:

$$n - mg \cos \theta = -m \frac{v^2}{R}$$

The object will fall when n = 0, so

$$v^2 = Rg \cos \theta$$

when it loses contact with the surface.

Assuming an isolated system, I can use conservation of mechanical energy:

$$K + \Omega = K_{0} + \Omega_{0}$$
$$\frac{1}{2}mv^2 + mgR \cos \theta = 0 + mgR$$
Plugging in our speed when it loses contact,

$$\frac{1}{2}mRg \cos \theta + mgR \cos \theta = mgR$$

which gives:

$$\cos \theta = \frac{2}{3}$$
$$\theta = 48.2^{o}$$

which is the angle at which it will lose contact with the surface. That's of course assuming an isolated system.

5. Dec 20, 2004

Staff: Mentor

One big problem here: You seem to be treating the "centripetal force" as though it were a separate force (like friction or weight). Not so! "Centripetal" just means "towards the center": the centripetal force is just those real forces that act towards the center. The only forces acting on the mass m are: (1) mg, acting down, and (2) N, the normal force, acting normal to the surface. So what's the centripetal force? Just the components of those forces acting towards the center of the circular motion, thus producing the centripetal acceleration:

$mg \cos\theta - N = F_c = mv^2/r$

Of course, you'll set the normal force to zero, so:

$mg \cos\theta = mv^2/r$

Exactly correct. But that's not a "different approach"--it's a necessary part of solving this problem! Now just combine the two equations and solve for $\theta$.

6. Dec 20, 2004

Pyrrhus

Too bad, I just did the work. Well, also read Doc Al's explanation, it's quite good.

7. Dec 20, 2004

newcool

Thanks for the help everyone. Just one question, Doc: how did you get that

$$mg \cos\theta - N = F_c = mv^2/r$$

Isn't mg pointing straight down, so the component that points towards the center is $$mg/\cos\theta$$?

8. Dec 20, 2004

Pyrrhus

We got a vector $$\vec{R}$$ with y component $$R \cos \theta$$ and x component $$R \sin \theta$$. Where do you get $\frac{mg}{\cos \theta}$ ???

9. Dec 20, 2004

Pyrrhus
I think you have a misconception about centripetal force. "Centripetal force" is a role assigned to forces, because they act towards the center. The forces acting on the body are the normal force and the weight, and the components acting towards the center are equal to $m \frac{v^2}{r}$.

10. Dec 21, 2004

Staff: Mentor

Yes, mg points straight down. Therefore its component towards the center will be $mg \cos\theta$, not $mg/\cos\theta$. (Draw yourself a picture.)

I believe you are thinking like this: that mg is the vertical component of the "centripetal force" ($F_c$), so $F_c \cos\theta = mg$ ==> $F_c = mg/\cos\theta$. This is incorrect thinking. One thing you must realize is that "centripetal" is just a description of the direction that a force has: it just means "towards the center". (Another term used is "radial".) It is not a kind of force. It's just like describing a force as a horizontal force.

Think like this: The mass m must have an acceleration towards the center since it moves in a circle. So, let's apply Newton's 2nd law in that radial (or centripetal) direction. As Cyclovenom and I have explained, the only forces acting on the mass are gravity and the normal force. We know that N points away from the center. What's the component of gravity (mg) towards the center? $mg \cos\theta$. So:

$$F_{towards-center} = ma_{towards-center}$$
$$mg \cos\theta - N = mv^2/r$$

Make sense?

11. Dec 21, 2004

newcool

Thanks for all the help, Doc and Cyclove. I understand it now.
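A quick numeric cross-check of the result, taking $m = g = R = 1$ (an arbitrary normalization): the departure condition $v^2 = Rg\cos\theta$ and energy conservation hold simultaneously at $\theta = \arccos(2/3) \approx 48.19^\circ$.

```python
import math

theta = math.acos(2 / 3)                 # the claimed departure angle
v2 = math.cos(theta)                     # v^2 = R g cos(theta), with R = g = 1
energy = 0.5 * v2 + math.cos(theta)      # (1/2) v^2 + g R cos(theta)

assert abs(energy - 1.0) < 1e-12         # equals g R: energy is conserved
assert abs(math.degrees(theta) - 48.19) < 0.01
```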
# Combination notation vs. Binomial Coefficient Formula

I'm studying probability and statistics and had a question regarding notation. I noticed that combinations and the binomial coefficient are essentially the same thing, that is: $$\binom{n}{k}\ =\ _nC_k\ =\ \frac{n!}{(n-k)!k!}$$ But I was wondering, is there a particular difference between the two that people should be aware of? For example, are there certain use cases where one is preferred over the other? Thank you.

• They are exactly the same thing, just different notation. I think the most popular (at least from what I see) is the $\binom{n}{k}$ notation; however, I have seen $C^n_r$ used in probability courses. – Dave Jun 29 '18 at 2:48
• Also ${^n\mathrm C_r}$ – Graham Kemp Jun 29 '18 at 3:31
• Also $_r\text{C}^n$ – Dzoooks Jun 29 '18 at 4:04

When dealing with combinations, all three expressions mean the same thing. But there are some aspects which should be considered. We often find the binomial coefficients $\binom{n}{k}$ resp. $_nC_k$ defined by factorials: \begin{align*} \binom{n}{k}:=\frac{n!}{k!(n-k)!}\qquad\qquad\text{resp.}\qquad\qquad _nC_k:=\frac{n!}{k!(n-k)!} \end{align*} From this point of view the factorials $n!$ can be seen as basic building blocks for the shorthand notations $\binom{n}{k}$ and $_nC_k$. Since using factorials is more fundamental than the other two representations, I will consider in the following only $\binom{n}{k}$ and $_nC_k$.

Historical aspects:

• C. Jordan writes in his classic Calculus of Finite Differences (1939)
• ($\mathrm{\S}$ 22): Since the binomial coefficient is without doubt the most important function of the Calculus of Finite Differences it was necessary to adopt some brief notation for this function. We accepted above the notation of J. L. Raabe [Journal für reine und angewandte Mathematik 1851, Vol. 42, p. 350] which is most in use now, putting \begin{align*} \binom{x}{n}=\frac{x(x-1)(x-2)\cdots(x-n+1)}{1\cdot2\cdot3\cdots n} \end{align*}
• ($\mathrm{\S}$ 22, footnote 18): Euler first used the notation $\left[\frac{x}{n}\right]$ in Acta Acad. Petrop. V, 1781, and then $\left(\frac{x}{n}\right)$ in Nova Acta Acad. Petrop. XV, 1799-1802. Raabe's notation $\binom{x}{n}$ is a slight modification of the second. It is used for instance in:

• Bierens de Haan, Tables d'Int$\mathrm{\acute{e}}$grales d$\mathrm{\acute{e}}$finies, Leide, 1867
• Hagen, Synopsis Vol. I, p. 57, Leipzig, 1891
• Pascal, Repertorium Vol. I, p. 47, Leipzig, 1910
• Encyclopädie der Math. Wissenschaften, 1898-1930
• L. M. Milne Thomson, Calculus of Finite Differences, 1933
• G. H. Hardy, Course of Pure Mathematics, p. 256, 1908

Euler's notation is also cited in

• A History of Mathematical Notations by F. Cajori, which also provides some information about the use of $_nC_k$-like notations in $\mathrm{\S}$ 451:

• George Peacock (Treatise of Algebra, 1830) introduces $C_r$ for the combinations of $n$ things taken $r$ at a time.
• Robert Potts (Elementary Algebra, 1880) begins his treatment by letting the number of combinations of $n$ different things taken $r$ at a time be denoted by $^nC_r$.
• W.A. Whitworth uses $C_r^n$ in Choice and Change (1886).
• G. Chrystal writes $_nC_r$ in Algebra, Part II (1899).

Note the different variations $C_r, {^{n}C}_r, C^n_r$ of $_nC_r$-like notations. Here is a selection of some classics from the 20th century.

Some classics:

• An Introduction to Combinatorial Analysis (1958) by J. Riordan
• (3.1): ... and \begin{align*} C(n,r)=\frac{n(n-1)\cdots(n-r+1)}{r!}=\frac{n!}{r!(n-r)!}=\binom{n}{r} \end{align*} where the last symbol is that usual for binomial coefficients, that is, the coefficients in the expansion of $(a+b)^n$. ($C_r^n, C_n^r, _nC_r$ and $(n,r)$ are alternative notations; ...)

• Combinatorial Identities (1968) by J. Riordan
• (1.1): Perhaps the simplest combinatorial entities are the binomial coefficients, that is, the combinations, for example of $n$ things, $k$ at a time. They take their name from the generating function for combinations, which is a power of a binomial, namely \begin{align*} (1+x)^n=\sum_{k=0}^n\binom{n}{k}x^k \end{align*} where, of course, $\binom{n}{k}=C(n,k)=\frac{n!}{k!(n-k)!}$ is the usual notation for a binomial coefficient.

• (II.4): ... the number of subpopulations of size $r$ is therefore given by $(n)_r/r!$. Expressions of this kind are known as binomial coefficients and the standard notation for them is \begin{align*} \binom{n}{r}=\frac{(n)_r}{r!}=\frac{n(n-1)\cdots(n-r+1)}{1\cdot 2\cdots (r-1)\cdot r} \end{align*}

• Advanced Combinatorics (1974) by L. Comtet
• (1.4): ... We will adopt the notation $\binom{n}{k}$, used almost in this form by Euler and fixed by Raabe, to the exclusion of all other notations, as this notation is used in the great majority of the present literature, and its use is even so still increasing. This symbol has all the qualities of a good notation: economical (no new letters introduced), expressive (it is very close in appearance to the explicit value $\frac{(n)_k}{k!}$), typical (no risk of being confused with others), and beautiful.

• Enumerative Combinatorics, Volume I (1986) by R. P. Stanley
• (I.1.4): ... where $d_n=\sum_{i=0}^n\binom{n}{i}a_ib_{n-i}$, with $\binom{n}{i}=n!/i!(n-i)!$. The author then continues to use the notation $\binom{n}{k}$ without any more citations, indicating that this notation is commonly used.

• D. E. Knuth, who gave us $\TeX$, is besides being a great mathematician an extraordinary expert in typography and mathematical writing. His Mathematical Typography (1979) gives us a glimpse of his deep thoughts about these issues. Another one is the report Mathematical Writing (1987), written together with T. Larrabee and P. M. Roberts.
In $\TeX$ we use the command "n \choose k", giving us $\binom{n}{k}$. The enhanced readability of the notation $\binom{n}{k}$ becomes rather obvious in more complex expressions. Compare, for instance, formula (5.32) in Concrete Mathematics by R. L. Graham, D. E. Knuth and O. Patashnik, which is stated for integers $l,m,n$ with $n\geq 0$ as \begin{align*} \sum_{j,k}(-1)^{j+k}\binom{j+k}{k+l}\binom{r}{j}\binom{n}{k}\binom{s+n-j-k}{m-j}=(-1)^l\binom{n+r}{n+l}\binom{s-r}{m-n-l} \end{align*} with the representation \begin{align*} \sum_{j,k}(-1)^{j+k}\,_{j+k}C_{k+l}\,_rC_j\,_nC_k\,_{s+n-j-k}C_{m-j}=(-1)^l\,_{n+r}C_{n+l}\,_{s-r}C_{m-n-l} \end{align*}

• @BruceET: You're welcome. – Markus Scheuer Jun 29 '18 at 8:51
• Thank you for taking the time and effort to write such a nice answer! :) – Seankala Jun 29 '18 at 15:42
• @Sean: You're welcome. Thanks for your nice comment. :-) – Markus Scheuer Jun 29 '18 at 15:45

In books printed before $\LaTeX$ came to be widely used, setting ${n \choose r}$ into type was difficult (and expensive if used throughout a book). I 'know' this because of conversations with editors of math books over the years. By contrast, typesetting is not difficult for the variants of $C_r^n,$ including $C(n, r)$, not yet mentioned in this discussion. Nowadays, I think there is a trend to use ${n \choose r},$ except for authors who have a strong preference for C-notations and those writing in Microsoft Word and trying to avoid using its 'equation editor'.

Curiously, a generally accepted convention for permutations $P_r^n = r!{n \choose r}$ does not seem to have emerged. Feller used $(n)_r$ in his famous probability book, which one would have thought might have set a trend 60 years ago, but apparently not. His book may have set a record for the density per page of big parentheses for various uses of ${n \choose r}$-notation--even before $\LaTeX.$

There is no difference at all. Personally I prefer using the shortest notation aCb, for obvious reasons.
• Are the "a" and "b" significant, there, or is it more the ${}_{\cdot}C_{\cdot}$ notation that you prefer? – Cameron Buie Jun 29 '18 at 2:56
• a and b are just numbers, nothing more than that. Yes, I prefer that notation because it's shorter and I see it very often. – Kwnstantinos Nikoloutsos Jun 29 '18 at 2:59
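Whatever notation one prefers, the value is the same number; a quick Python check of the factorial formula against the built-in math.comb:

```python
from math import comb, factorial

def nCk(n, k):
    """The factorial formula n! / (k! (n-k)!)."""
    return factorial(n) // (factorial(k) * factorial(n - k))

assert all(comb(n, k) == nCk(n, k)
           for n in range(10) for k in range(n + 1))
assert nCk(5, 2) == 10
```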
# Unify the sampling of NIntegrate[ {f, g, h} w ] I'm trying to numerically integrate a function which has a vector-valued slow part and a much faster component which is shared by all the components, i.e. an integral of the form $$\int_a^b\begin{pmatrix}f(x)\\ g(x) \\ h(x)\end{pmatrix}w(x)\,\text dx.$$ Because NIntegrate is nicely Listable on its first argument, I can feed it a list-valued argument without a problem. However, it appears to be doing each component as a completely separate integral, which results in a lot of work being re-calculated. For example, the (simplified) example integral samplePointsList = Reap[ NIntegrate[ {x, x^2, x^3} Cos[10 x + Cos[x]] , {x, 0, 5} , EvaluationMonitor :> Sow[x] ] ][[2, 1]]; gives the sample point diagram (through ListPlot[Transpose[{samplePointsList, Range[Length[samplePointsList]]}]]) It is clear that the kernel is doing the initial sampling, and then the further refinements, separately for each component. While the required sample points are not identical, there is a lot of shared work and I feel there is a fair bit of room for optimization there. I am aware that, since the sampling requirements of each component are slightly different, binding them completely will require more evaluations in some components than would be necessary. (For instance, in the example above, the extra detail required by the first component around $2\leq x\leq 3$ would be slowed down slightly if it was required to also calculate the second and third components there, though they do not require it.) However, the components in my case are similar enough that I do not think this would be an issue. Is there a way to force Mathematica into this sort of separation of the integrand and unification of the sampling? If not, is there a way to make it aware of the previously calculated values and sampling points which will speed up the process? One idea is to integrate once to get the sample points then compute the remaining integrals as sums:
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9597620596782467, "lm_q1q2_score": 0.8559264113328141, "lm_q2_score": 0.8918110511888303, "openwebmath_perplexity": 1677.9384802879292, "openwebmath_score": 0.2977149188518524, "tags": null, "url": "https://mathematica.stackexchange.com/questions/48359/unify-the-sampling-of-nintegrate-f-g-h-w" }
sample = Transpose@SortBy[
   First@Last@Reap[
     NIntegrate[x (c = Cos[10 x + Cos[x]]), {x, 0, 5},
      EvaluationMonitor :> Sow[{x, c}]]], #[[1]] &];
wt = ((#[[3]] - #[[1]])/2) & /@ Partition[Join[{0}, sample[[1]], {5}], 3, 1];
wt.(sample[[2]] #) & /@ {sample[[1]], sample[[1]]^2, sample[[1]]^3}

{0.0133333, 0.133275, 0.861541}

Note the accuracy is not terribly good; NIntegrate gives:

{0.0125266, 0.131514, 0.855716}

Something's a bit off in my quick-and-dirty trapezoid integration, but I think this can be made to work. For this example there really is little benefit to NIntegrate's adaptive sampling, so we might as well just use a uniform sampling:

np = 651; (*assumed odd for Simpson's rule*)
a = 5;
b = 0;
wt = ((a - b)/(np - 1))/3 Join[{1}, Flatten@ConstantArray[{4, 2}, (np - 1)/2 - 1], {4, 1}] // N;
x = b + (Range[0, np - 1] (a - b)/(np - 1)) // N;
fast = Cos[10 # + Cos[#]] & /@ x // N;
(# fast).wt & /@ {x, x^2, x^3}

{0.0125266, 0.131514, 0.855716}

The answers to

Is there a way to force Mathematica into this sort of separation of the integrand and unification of the sampling? If not, is there a way to make it aware of the previously calculated values and sampling points which will speed up the process?

are "yes" and "yes" as demonstrated and discussed in this post to "How to calculate the numerical integral more efficiently?". More concretely:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/ArrayOfFunctionsRule.m"]

funcExpr = {x, x^2, x^3} Cos[10 x + Cos[x]]

(* {x Cos[10 x + Cos[x]], x^2 Cos[10 x + Cos[x]], x^3 Cos[10 x + Cos[x]]} *)

funcMat = Table[i*funcExpr, {i, 100}];

AbsoluteTiming[
 res0 = NIntegrate[funcMat, {x, 0, 5}];
]
res0[[12]]
(* {1.711, Null} *) (* {0.150319, 1.57817, 10.2686} *) AbsoluteTiming[ res1 = NIntegrate[1, {x, 0, 5}, Method -> {"GlobalAdaptive", "SingularityHandler" -> None, Method -> {ArrayOfFunctionsRule, "Functions" -> funcMat}}]; ] res1[[12]] (* {0.029007, Null} *) (* {0.150319, 1.57817, 10.2686} *) Norm[res0 - res1, 2] (* 2.03398*10^-12 *) • That's pretty interesting; I hope I can find time to test it out soon. In the meantime, can you comment on the relative gains as a function of the size of the matrix? I'm interested in $n\leq3$ but the other answer looks like it targets bigger arrays. – Emilio Pisanty Jan 9 '17 at 19:22 • @EmilioPisanty I am not sure what you mean -- performance of the ArrayOfFunctionsRule with respect to the size of the integrand? I did a similar comparison during the experiments for the linked MSE answer... – Anton Antonov Jan 10 '17 at 1:30
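For readers outside Mathematica, the uniform-sampling trick from the first answer translates directly. Here is a rough Python sketch (my own code, not from the thread, assuming the same integrand and a 651-point Simpson grid) that evaluates the shared fast factor only once per sample point:

```python
import math

def simpson_shared(slow_parts, fast, a, b, n=651):
    """Integrate f_i(x) * fast(x) for several slow parts f_i,
    evaluating fast(x) only once per sample point (n must be odd)."""
    h = (b - a) / (n - 1)
    xs = [a + i * h for i in range(n)]
    # Composite Simpson weights: 1, 4, 2, 4, ..., 2, 4, 1 (times h/3)
    w = [4 if i % 2 == 1 else 2 for i in range(n)]
    w[0] = w[-1] = 1
    fvals = [fast(x) for x in xs]          # shared factor, evaluated once
    results = []
    for f in slow_parts:
        s = sum(wi * f(x) * fv for wi, x, fv in zip(w, xs, fvals))
        results.append(s * h / 3)
    return results

fast = lambda x: math.cos(10 * x + math.cos(x))
res = simpson_shared([lambda x: x, lambda x: x**2, lambda x: x**3],
                     fast, 0.0, 5.0)
# res ≈ [0.0125266, 0.131514, 0.855716], matching the NIntegrate values above
```

The point is the structure: `fvals` is computed once and reused for every slow component, which is exactly the saving the question asks NIntegrate for.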
# A coin is tossed 7 times. Find the probability of getting

Manager Joined: 25 Dec 2011 Posts: 54 GMAT Date: 05-31-2012 A coin is tossed 7 times. Find the probability of getting  [#permalink] 07 Jan 2012, 13:47 Difficulty: 75% (hard) Question Stats: 58% (01:32) correct 42% (01:40) wrong based on 498 sessions A coin is tossed 7 times. Find the probability of getting more heads than tails in all 7 tosses? A. 1/2 B. 63/128 C. 4/7 D. 61/256 E. 63/64 Hi I want to understand why combination has been used in the below problem. I thought permutation is used for order and - for probability one needs to find the number of outcomes as well as order of the outcomes. For example if the question was - probability of getting 1 head then we would have put it as follows :- HTTTTTT THTTTTT TTHTTTT TTTHTTT TTTTHHH TTTTTHH TTTTTTH Then why in the below problem - we use combinations and not permutations ? I am so confused. A coin is tossed 7 times. Find the probability of getting more heads than tails in all 7 tosses? B. 63/128 C. 4/7 D. 61/256
{ "domain": "gmatclub.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787849789998, "lm_q1q2_score": 0.85591931976581, "lm_q2_score": 0.8670357718273068, "openwebmath_perplexity": 1722.2541678187697, "openwebmath_score": 0.8059019446372986, "tags": null, "url": "https://gmatclub.com/forum/a-coin-is-tossed-7-times-find-the-probability-of-getting-125693.html" }
Explanation ANS. (a) ( Total outcomes= 2^7 = 128, Number outcomes for which heads are more than tails = 7 combination 4 (Heads=4 & Tails=3) + 7 combination 5 + 7 combination 6 + 7 combination 7) = 35+21+7+1= 64, so probability of getting more heads = 64/128 = ½) Math Expert Joined: 02 Sep 2009 Posts: 47112 12 Jan 2012, 05:34 A coin is tossed 7 times. Find the probability of getting more heads than tails in all 7 tosses? A. 1/2 B. 63/128 C. 4/7 D. 61/256 E. 63/64 Assuming the coin is fair - P(H)=P(T)=1/2 We can do as proposed by the explanation in your initial post: Total outcomes: 2^7 Favorable outcomes: 4 heads --> combination of HHHHTTT --> 7!/(4!*3!)=35 (# of permutation of 7 letters out of which 4 H's and 3 T's are identical); 5 heads --> combination of HHHHHTT --> 7!/(5!*2!)=21; 6 heads --> combination of HHHHHHT --> 7!/(6!*1!)=7; 7 heads --> combination of HHHHHHH --> 1; P(H>T)=Favorable outcomes/Total outcomes=(35+21+7+1)/2^7=1/2. BUT: there is MUCH simpler and elegant way to solve this question. Since the probability of getting either heads or tails is equal (1/2) and a tie in 7 (odd) tosses is not possible then the probability of getting more heads than tails = to the probability of getting more tails than heads = 1/2. How else? Does the probability favor any of tails or heads? (The distribution of the probabilities is symmetrical: P(H=7)=P(T=7), P(H=5)=P(T=5), ... also P(H>4)=P(T>4)) If it were: A fair coin is tossed 8 times. Find the probability of getting more heads than tails in all 8 tosses?
Now, almost the same here: as 8 is even then a tie is possible but again as the distribution is symmetrical then $$P(H>T)=\frac{1-P(H=T)}{2}=P(T>H)$$ (so we just subtract the probability of a tie and then divide the given value by 2 as P(H>T)=P(H<T)). As the number of tied outcomes is $$\frac{8!}{4!*4!}=70$$ (# of permutations of 8 letters HHHHTTTT, out of which 4 H's and 4 T's are identical) then $$P(H>T)=\frac{1-P(H=T)}{2}=\frac{1-\frac{70}{2^8}}{2}=\frac{93}{256}$$. You can check this in the following way: total # of outcomes = 2^8=256, out of which in 70 cases there will be a tie, in 93 cases H>T and also in 93 cases T>H --> 70+93+93=256. Hope it's clear. Similar questions for practice: probability-question-100222.html?hilit=coin%20tossed#p772756 hard-probability-99478.html?hilit=coin%20tossed some-ps-questions-need-explanation-99282.html?hilit=coin%20tossed probability-question-gmatprep-85802.html?hilit=coin%20tossed _________________ ##### General Discussion Manager Joined: 25 Dec 2011 Posts: 54 GMAT Date: 05-31-2012 07 Jan 2012, 13:49 Hi Sorry ...1 head out of 7 can be arranged in following ways :- HTTTTTT THTTTTT TTHTTTT TTTHTTT TTTTHTT TTTTTHT TTTTTTH Math Expert Joined: 02 Sep 2009 Posts: 47112 Re: A coin is tossed 7 times. Find the probability of getting  [#permalink] 07 Jun 2013, 07:03 Bumping for review and further discussion*. Get a kudos point for an alternative solution! *New project from GMAT Club!!! Check HERE Theory on probability problems: math-probability-87244.html All DS probability problems to practice: search.php?search_id=tag&tag_id=33 All PS probability problems to practice: search.php?search_id=tag&tag_id=54
Tough probability questions: hardest-area-questions-probability-and-combinations-101361.html _________________ Senior Manager Joined: 28 Apr 2012 Posts: 298 Location: India Concentration: Finance, Technology GMAT 1: 650 Q48 V31 GMAT 2: 770 Q50 V47 WE: Information Technology (Computer Software) Re: A coin is tossed 7 times. Find the probability of getting  [#permalink] ### Show Tags 07 Jun 2013, 08:48 2 Probability of getting Head, P(H) = 1/2 For No. of heads > no. of Tails, there can be 4,5,6 or 7 Heads. As per GMAT Club math book, probability of occurrence of an event k times in a n time sequence, for independent and mutually exclusive events: $$P=C(n,k)*p^k*(1-p)^n^-^k$$ $$P(H=4) = C(7,4)*(1/2)^4*(1/2)^3 = C(7,4) * (1/2)^7$$ $$P(H=5) = C(7,5) * (1/2)^7$$ $$P(H=6) = C(7,6) * (1/2)^7$$ $$P(H=7) = C(7,7) * (1/2)^7$$ Total $$P(H > 3) = P(H=4) + P(H=5) + P(H=6) + P(H=7)$$ $$= [ C(7,4) + C(7,5) + C(7,6) + C(7,7)] * (1/2)^7$$ $$= (35 + 21 + 7 + 1) / 128$$ $$= 64/128$$ $$= 1/2$$ _________________ "Appreciation is a wonderful thing. It makes what is excellent in others belong to us as well." ― Voltaire Press Kudos, if I have helped. Thanks! Intern Joined: 14 Oct 2013 Posts: 5 Re: A coin is tossed 7 times. Find the probability of getting  [#permalink] ### Show Tags 05 May 2014, 05:47 My approach 7 toss coin has 2 out comes H or T now basically if i see this a game where there are 2 teams one selects Heads and other team selects Tails. If in 7 toss whichever comes maximum (heads or Tails) the corresponding team wins.. Clearly the probability is 50% for both the cases max Heads or max Tails............ Intern Joined: 14 Jul 2014 Posts: 17 Re: A coin is tossed 7 times. Find the probability of getting  [#permalink] ### Show Tags
### Show Tags 01 May 2015, 02:54 For those who have trouble grasping the concept behind this - check out Khan Academy lessons on probability and combinatorics. I'm not allowed to post urls as a newbie but a simple Google search will throw up the link. I couldn't make head or tail of these questions before seeing them, tried memorizing the formulae and always messed up. I invested 3 hours in going through those videos and can now solve these questions without knowing any formulae - it's all conceptual. Sal's great with breaking down concepts to simple, relatable stuff. Manager Joined: 22 Apr 2015 Posts: 64 A coin is tossed 7 times. Find the probability of getting  [#permalink] ### Show Tags 01 May 2015, 05:11 2 DipikaP wrote: For those who have trouble grasping the concept behind this - check out Khan Academy lessons on probability and combinatorics. I'm not allowed to post urls as a newbie but a simple Google search will throw up the link. I couldn't make head or tail of these questions before seeing them, tried memorizing the formulae and always messed up. I invested 3 hours in going through those videos and can now solve these questions without knowing any formulae - it's all conceptual. Sal's great with breaking down concepts to simple, relatable stuff. Hi Dipika, You are totally right when you say "it is all conceptual". Trying to memorise formulae is not the right approach. For example in this question: First we should think what are the possible outcomes when we toss a coin: head or tail (2 outcomes) Now as the coin is fair, the probability that we will get a head or a tail is 1/2 To illustrate, let's take a smaller version of the above question: What is the probability of getting more heads in 3 tosses? 1st case: We can get 3 heads: HHH Probability of HHH = 1/2 * 1/2 * 1/2 = 1/8 Probability of getting 3 heads = 1/8
2nd case: We can get two heads: HHT, HTH, THH Probability of HHT = 1/2 * 1/2 * 1/2 = 1/8 Probability of HTH = 1/2 * 1/2 * 1/2 = 1/8 Probability of THH = 1/2 * 1/2 * 1/2 = 1/8 Probability of getting 2 heads = 3* 1/8 As you can see HHT, HTH and THH are different arrangements of HHT Probability of getting 2 heads = (No. of arrangements of HHT)* (Probability of getting HHT) = 3!/2! * (1/2 * 1/2 * 1/2) = 3/8 Total probability of getting more heads in 3 tosses = 1/8 + 3/8 = 4/8 = 1/2 Thinking on these lines you can solve the above question easily. But if you think a little further, we are talking about odd number of tosses (3 and 7). So, either there will be more heads or more tails. It is not possible to get equal number of heads and tails. Hence, in half of the outcomes we will get more heads than tails and in the the other half we will have more tails than heads. Thus, the probability of getting more heads = probability of getting more tails = 1/2. Intern Joined: 03 Jul 2015 Posts: 10 Location: India Re: A coin is tossed 7 times. Find the probability of getting  [#permalink] ### Show Tags 04 Aug 2015, 12:50 morya003 wrote: A coin is tossed 7 times. Find the probability of getting more heads than tails in all 7 tosses? A. 1/2 B. 63/128 C. 4/7 D. 61/256 E. 63/64 Hi I want to understand why combination has been used in the below problem. I thought permutation is used for order and - for probability one needs to find the number of outcomes as well as order of the outcomes. For example if the question was - probability of getting 1 head then we would have put it as follows :- HTTTTTT THTTTTT TTHTTTT TTTHTTT TTTTHHH TTTTTHH TTTTTTH Then why in the below problem - we use combinations and not permutations ? I am so confused. A coin is tossed 7 times. Find the probability of getting more heads than tails in all 7 tosses? B. 63/128 C. 4/7 D. 61/256
Explanation ANS. (a) ( Total outcomes= 2^7 = 128, Number outcomes for which heads are more than tails = 7 combination 4 (Heads=4 & Tails=3) + 7 combination 5 + 7 combination 6 + 7 combination 7) = 35+21+7+1= 64, so probability of getting more heads = 64/128 = ½) Since the number of times the coin has been tossed is 7, either the number of heads will be more than tails or vice versa. There is no way the number of heads can become equal to the number of tails. Since heads and tails are equally favourable outcomes of a coin, the possibility of getting more heads = possibility of getting more tails = 1/2. In other words there are only 2 equally likely events (event 1: heads more than tails, event 2: tails more than heads) constituting all the outcomes. Current Student Status: DONE! Joined: 05 Sep 2016 Posts: 398 Re: A coin is tossed 7 times. Find the probability of getting  [#permalink] 07 Dec 2016, 20:46 Total outcomes = 2^7 = 128 HHHHHHH = 7!/7! = 1 HHHHHHT = 7!/1!6! = 7 HHHHHTT = 7!/2!5! = 21 HHHHTTT = 7!/3!4! = 35 1+7+21+35 = 64 64/128 = 1/2 A. Intern Joined: 22 Nov 2016 Posts: 13 Re: A coin is tossed 7 times. Find the probability of getting  [#permalink] 02 Feb 2017, 04:42 hi bunnel... As stated in the question "Find the probability of getting more heads than tails in all 7 tosses?" tails in all the tosses means no heads ie 0 heads and more than that means at least one head. so we have to find the solution for at least one head in 7 tosses... can we restate the question as above? pls explain thank u.. Math Expert Joined: 02 Sep 2009 Posts: 47112 Re: A coin is tossed 7 times. Find the probability of getting  [#permalink]
### Show Tags 02 Feb 2017, 06:12 manojhanagandi wrote: hi bunnel... As stated in the question" Find the probability of getting more heads than tails in all 7 tosses?" tails in all the tosses means no heads ie 0 heads and more than that means at least one heads. so we have to find the the solution for atleast one heads in 7 tosses... can we restate the question as above? pls explain thank u.. If you read the solutions above you'll see that this is not correct. More heads than tails in all 7 tosses means at least 4 heads (so more than half must be heads): HHHHTTT HHHHHTT HHHHHHT HHHHHHH _________________ Intern Joined: 22 Nov 2016 Posts: 13 Re: A coin is tossed 7 times. Find the probability of getting  [#permalink] ### Show Tags 03 Feb 2017, 04:22 Thank You bunuel... Senior Manager Joined: 29 Jun 2017 Posts: 497 GPA: 4 WE: Engineering (Transportation) Re: A coin is tossed 7 times. Find the probability of getting  [#permalink] ### Show Tags 06 Sep 2017, 00:48 Ans is A HHHHTTT= $$(0.5)^4$$ x $$(0.5)^3$$ {7C4} similarly for 5 H , 6H and 7 H we will calculate only 7C5 , 7C6 and 7C7 will change other part (0.5)^7 remains same which is multiplied, so add all 4 cases of 4H 5H 6H 7H $$(0.5)^7$$ x {7C4+7C5+7C6 +7C7} $$\frac{64}{128}$$= $$\frac{1}{2}$$ _________________ Give Kudos for correct answer and/or if you like the solution. Director Joined: 13 Mar 2017 Posts: 610 Location: India Concentration: General Management, Entrepreneurship GPA: 3.8 WE: Engineering (Energy and Utilities) Re: A coin is tossed 7 times. Find the probability of getting  [#permalink] ### Show Tags 06 Sep 2017, 03:39 morya003 wrote: A coin is tossed 7 times. Find the probability of getting more heads than tails in all 7 tosses? A. 1/2 B. 63/128 C. 4/7 D. 61/256 E. 63/64
A. 1/2 B. 63/128 C. 4/7 D. 61/256 E. 63/64 Hi I want to understand why combination has been used in the below problem. I thought permutation is used for order and - for probability one needs to find the number of outcomes as well as order of the outcomes. For example if the question was - probability of getting 1 head then we would have put it as follows :- HTTTTTT THTTTTT TTHTTTT TTTHTTT TTTTHHH TTTTTHH TTTTTTH Then why in the below problem - we use combinations and not permutations ? I am so confused. A coin is tossed 7 times. Find the probability of getting more heads than tails in all 7 tosses? B. 63/128 C. 4/7 D. 61/256 Explanation ANS. (a) ( Total outcomes= 2^7 = 128, Number outcomes for which heads are more than tails = 7 combination 4 (Heads=4 & Tails=3) + 7 combination 5 + 7 combination 6 + 7 combination 7) = 35+21+7+1= 64, so probability of getting more heads = 64/128 = ½) Now this is a very simple question and can be solved without doing all the long calculations. The important thing to notice here is P(H) = P(T) = 1/2 in a single toss. Now 7 tosses can fall in the following possible ways: 0H 7T say x ways 1H 6T say y ways 2H 5T say z ways 3H 4T say w ways -------------------------- 4H 3T w ways 5H 2T z ways 6H 1T y ways 7H 0T x ways So 4 out of 8 ways will have more heads than tails. Also P(H) = P(T) = 1/2, i.e. equal in a single toss. So, we don't need to calculate the number of ways for each case. Required probability = (w+z+y+x)/[(x+y+z+w) + (w+z+y+x)] = 1/2 _________________ CAT 99th percentiler : VA 97.27 | DI-LR 96.84 | QA 98.04 | OA 98.95 UPSC Aspirants : Get my app UPSC Important News Reader from Play store. MBA Social Network : WebMaggu Appreciate by Clicking +1 Kudos ( Lets be more generous friends.) What I believe is : "Nothing is Impossible, Even Impossible says I'm Possible" : "Stay Hungry, Stay Foolish".
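The pairing of x, y, z, w ways above is just the symmetry $\binom{7}{h}=\binom{7}{7-h}$; a quick check of the actual counts (a Python sketch of my own, not part of the thread):

```python
from math import comb

n = 7
counts = [comb(n, h) for h in range(n + 1)]   # ways to get h heads in 7 tosses

# Mirror symmetry: h heads pairs with n-h heads (x with x, y with y, ...)
for h in range(n + 1):
    assert counts[h] == counts[n - h]

more_heads = sum(counts[h] for h in range(4, 8))   # 4, 5, 6 or 7 heads
assert more_heads == 64                            # half of 2^7 = 128
assert more_heads / 2 ** n == 0.5
```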
DS Forum Moderator Joined: 27 Oct 2017 Posts: 621 Location: India GPA: 3.64 WE: Business Development (Energy and Utilities) A coin is tossed 7 times. Find the probability of getting  [#permalink] 20 Dec 2017, 09:22 My approach: either the number of heads can be more or less than the number of tails. They can't be equal as the number of tosses is 7, which is odd. Hence probability is (no of cases when no of heads is more)/((no of cases when no of heads is more) + (no of cases when no of tails is more)) = 1/2 provide kudos, if u like my approach Bunuel wrote: A coin is tossed 7 times. Find the probability of getting more heads than tails in all 7 tosses? A. 1/2 B. 63/128 C. 4/7 D. 61/256 E. 63/64 Assuming the coin is fair - P(H)=P(T)=1/2 We can do as proposed by the explanation in your initial post: Total outcomes: 2^7 Favorable outcomes: 4 heads --> combination of HHHHTTT --> 7!/(4!*3!)=35 (# of permutation of 7 letters out of which 4 H's and 3 T's are identical); 5 heads --> combination of HHHHHTT --> 7!/(5!*2!)=21; 6 heads --> combination of HHHHHHT --> 7!/(6!*1!)=7; 7 heads --> combination of HHHHHHH --> 1; P(H>T)=Favorable outcomes/Total outcomes=(35+21+7+1)/2^7=1/2. BUT: there is MUCH simpler and elegant way to solve this question. Since the probability of getting either heads or tails is equal (1/2) and a tie in 7 (odd) tosses is not possible then the probability of getting more heads than tails = to the probability of getting more tails than heads = 1/2. How else? Does the probability favor any of tails or heads? (The distribution of the probabilities is symmetrical: P(H=7)=P(T=7), P(H=5)=P(T=5), ... also P(H>4)=P(T>4)) If it were: A fair coin is tossed 8 times. Find the probability of getting more heads than tails in all 8 tosses?
Now, almost the same here: as 8 is even then a tie is possible but again as the distribution is symmetrical then $$P(H>T)=\frac{1-P(H=T)}{2}=P(T>H)$$ (so we just subtract the probability of a tie and then divide the given value by 2 as P(H>T)=P(H<T)). As the number of tied outcomes is $$\frac{8!}{4!*4!}=70$$ (# of permutations of 8 letters HHHHTTTT, out of which 4 H's and 4 T's are identical) then $$P(H>T)=\frac{1-P(H=T)}{2}=\frac{1-\frac{70}{2^8}}{2}=\frac{93}{256}$$. You can check this in the following way: total # of outcomes = 2^8=256, out of which in 70 cases there will be a tie, in 93 cases H>T and also in 93 cases T>H --> 70+93+93=256. Hope it's clear. Similar questions for practice: http://gmatclub.com/forum/probability-q ... ed#p772756 http://gmatclub.com/forum/hard-probabil ... n%20tossed http://gmatclub.com/forum/some-ps-quest ... n%20tossed http://gmatclub.com/forum/probability-q ... n%20tossed _________________
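Both closed-form results in this thread — 1/2 for 7 tosses and 93/256 for 8 — are small enough to confirm by enumerating every outcome. A brute-force Python sketch (my own code, not from the thread):

```python
from itertools import product
from fractions import Fraction

def p_more_heads(n):
    """Probability of strictly more heads than tails in n fair tosses."""
    favorable = sum(1 for seq in product("HT", repeat=n)
                    if seq.count("H") > seq.count("T"))
    return Fraction(favorable, 2 ** n)

assert p_more_heads(7) == Fraction(1, 2)     # odd n: a tie is impossible
assert p_more_heads(8) == Fraction(93, 256)  # even n: subtract the ties

ties = sum(1 for seq in product("HT", repeat=8)
           if seq.count("H") == seq.count("T"))
assert ties == 70                            # 8!/(4!*4!), as computed above
```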
# Basic question regarding limits I'm a little confused when it comes to question like this. Let's say we got this expression: $$\lim_{x\to\infty} \tan\left(\frac{1}{x}\right).$$ Am I allowed to say the result of this is $0$ or do I have to show it? (If I have to show it, please show me how to cuz I got no clue). And more general, in which situations are we allowed to "cut" it and in which we are not? (I'm guessing whenever it comes to fractions surely). That depends. If the whole exercise is just "compute $\lim \tan(1/x)$", then yes, you have to do some argument. Something along the lines of "if $x$ goes to infinity, then $1/x$ goes to zero, and by continuity of $\tan$, the whole expression then goes to zero". But if this limit is just one part of a longer question/calculation, then no. It's trivial enough that anybody with some math background will see it immediately. • Appreciated . Got only one more question . Is there any like more formal ways of prooving it is 0 ? Cuz I never did any good solving math problem using words tho I completely see where ur going at. – James Groon Apr 1 '17 at 18:24 • In general feel free to replace the word "then" or the phrase "it then follows" with the symbol "$\implies$" and "there is an $x$" with "$\exists x$" and such. Using such symbols instead of words can make a proof more concise, though overdoing it can make it hard to read. And if you want a more elementary proof, the answer by @Chris-Varghese is quite nice. – Simon Apr 1 '17 at 18:34 • Gotcha and yep, Chris did a good job :) – James Groon Apr 1 '17 at 18:35
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787885624276, "lm_q1q2_score": 0.8559193194810677, "lm_q2_score": 0.8670357683915538, "openwebmath_perplexity": 499.3459725129674, "openwebmath_score": 0.7982875108718872, "tags": null, "url": "https://math.stackexchange.com/questions/2213431/basic-question-regarding-limits/2213440" }
You need to show that given any small $\epsilon > 0$, there exists a $X(\epsilon)$ such that $|\tan(1/x)| < \epsilon,\; \forall x > X(\epsilon)$. To do this choose, for e.g., $X(\epsilon) = \dfrac{1+\sec \epsilon}{\epsilon}$. Then $$\left|\tan \frac{1}{x}\right| < \left|\tan \frac{\epsilon}{1+\sec \epsilon} \right| = \frac{\sin \left( \frac{\epsilon}{1+\sec \epsilon}\right) }{\cos\left( \frac{\epsilon}{1+\sec \epsilon}\right)} < \frac{\sin \left( \frac{\epsilon}{1+\sec \epsilon}\right) }{\cos\epsilon} < \frac{ \frac{\epsilon}{1+\sec \epsilon} }{\cos\epsilon} = \frac{\epsilon}{1 + \cos \epsilon} < \epsilon$$ • The inequality is wrong. Note that $|\tan (x)|\ge |x|$ for all $x\in (-\pi/2,\pi/2)$. Simple example; $x=\pi/4<1$. We have $\tan(\pi/4)=1>\pi/4$. – Mark Viola Apr 1 '17 at 18:48 • So may i get the correct answer ? – James Groon Apr 1 '17 at 19:02 • @Dr.MV Which inequality in my answer are you referring to? Indeed, $|\tan x | \geq |x| \forall x \in (-\pi/2, \pi/2)$. How does that fact contradict anything that I have in my answer? – ChargeShivers Apr 2 '17 at 1:01 • @chrisvarghese You wrote $|\tan(\epsilon)|\le \epsilon$. That is false. – Mark Viola Apr 2 '17 at 2:58 • @Dr.MV Got it now. Thanks. Not sure how I missed it twice!! Answer edited. – ChargeShivers Apr 2 '17 at 17:17 The answer is that it depends on the situation. To address the statement in the OP "If I have to show it, please show me how to cuz I got no clue," I thought it might be instructive to present an approach that relies on a standard inequality from elementary geometry. To that end, we begin with a short primer. PRIMER: Recall from elementary geometry the inequality $$\sin(\theta)\le \theta\tag 1$$ for $\theta\ge 0$. Squaring both sides of $(1)$, using $\sin^2(\theta)=1-\cos^2(\theta)$, and rearranging, we find that $$\cos(\theta)\ge \sqrt{1-\theta^2} \tag 2$$ for $0\le \theta \le 1$. Now, using $(1)$ and $(2)$ with $\theta=1/x$ reveals that
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787885624276, "lm_q1q2_score": 0.8559193194810677, "lm_q2_score": 0.8670357683915538, "openwebmath_perplexity": 499.3459725129674, "openwebmath_score": 0.7982875108718872, "tags": null, "url": "https://math.stackexchange.com/questions/2213431/basic-question-regarding-limits/2213440" }
for $0\le \theta \le 1$. Now, using $(1)$ and $(2)$ with $\theta=1/x$ reveals that \begin{align} \tan(1/x)&=\frac{\sin(1/x)}{\cos(1/x)}\\\\ &\le \frac{1/x}{\sqrt{1-\frac1{x^2}}}\\\\ &=\frac{1}{\sqrt{x^2-1}}\tag 3 \end{align} for $x>1$. Hence, for all $\epsilon>0$, we have from $(3)$ that \begin{align} \tan(1/x)&\le \frac{1}{\sqrt{x^2-1}}\\\\ &<\epsilon \end{align} whenever $x>\sqrt{1+\frac{1}{\epsilon^2}}$. And we are done! • $\sqrt{1 + \frac{1}{\epsilon^2}} = \dfrac{\sqrt{1+\epsilon^2}}{\epsilon}$ is definitely a better $X(\epsilon)$ than $\dfrac{1+\sec \epsilon}{\epsilon}$. Maybe the 'best' of all is $X(\epsilon) = \dfrac{1}{\arctan \epsilon}$. – ChargeShivers Apr 2 '17 at 23:41 The short answer is $0$, but if you want "to show" it, recall the limit $$\lim _{ x\rightarrow \infty }{ \frac { 1 }{ x } } =0.$$ You can move the limit inside the $\tan$ because $\tan$ is continuous at $0$: you are not excluding any $x$ from the limit, and you are not creating any limits that don't exist. Once that is done, the problem is trivial.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787885624276, "lm_q1q2_score": 0.8559193194810677, "lm_q2_score": 0.8670357683915538, "openwebmath_perplexity": 499.3459725129674, "openwebmath_score": 0.7982875108718872, "tags": null, "url": "https://math.stackexchange.com/questions/2213431/basic-question-regarding-limits/2213440" }
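The bounds in the answer above are easy to sanity-check numerically. The following sketch (an illustration added here, not part of the original thread; the helper name `X` is my own) verifies both the intermediate inequality $\tan(1/x)\le 1/\sqrt{x^2-1}$ and the threshold $X(\epsilon)=\sqrt{1+1/\epsilon^2}$:

```python
import math

def X(eps):
    # Threshold from the answer: for x > X(eps), tan(1/x) < eps
    return math.sqrt(1.0 + 1.0 / eps ** 2)

# Intermediate bound (3): tan(1/x) <= 1/sqrt(x^2 - 1) for x > 1
for x in (1.5, 2.0, 10.0, 100.0):
    assert math.tan(1.0 / x) <= 1.0 / math.sqrt(x ** 2 - 1)

# Final bound: every sampled x beyond the threshold keeps tan(1/x) under eps
for eps in (0.5, 0.1, 1e-3):
    for factor in (1.001, 2.0, 10.0):
        x = factor * X(eps)
        assert math.tan(1.0 / x) < eps
print("both inequalities hold at all sampled points")
```

This only samples a few points, of course; the proof above is what guarantees the bound for all $x > X(\epsilon)$.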
# Application of 2nd fundamental theorem of calculus I would like to clarify the usage of the 2nd fundamental theorem of calculus, in 3 parts. For all 3 parts, consider $F'(a)$ to be $$F'(a) = \frac{1}{1+a+a^2}$$ The questions are: 1. Find $\frac{d}{dy}\int^y_1 F'(a) \,da$ 2. Find $\frac{d}{dy}\int^1_y F'(a) \,da$ 3. Find $\frac{d}{dy}\int^{y^2}_1 F'(a)\, da$ My working for each of the questions is: 1. \begin{align*} \frac{d}{dy} \int^y_1 F'(a) \,da &= \frac{d}{dy}\begin{bmatrix}F(y) - F(1)\end{bmatrix} \\&=F'(y) \\&=\frac{1}{1+y+y^2}. \end{align*} 2. \begin{align*} \frac{d}{dy} \int^1_y F'(a) \,da &= \frac{d}{dy}\begin{bmatrix}F(1) - F(y)\end{bmatrix} \\&=-F'(y) \\&=\frac{-1}{1+y+y^2}. \end{align*} 3. \begin{align*} \frac{d}{dy} \int^{y^2}_1 F'(a) \,da &= \frac{d}{dy}\begin{bmatrix}F(y^2) - F(1)\end{bmatrix} \\&=F'(y^2) \\&=\frac{2y}{1+y^2+y^4}. \end{align*} UPDATE: I shall update (3) with the chain rule working. \begin{align*} \frac{d}{dy} \int^{y^2}_1 F'(a) \,da &= \frac{d}{dy}\begin{bmatrix}F(y^2) - F(1)\end{bmatrix} \\&=F'(y^2)(2y) \\&=\frac{1}{1+(y^2)+(y^2)^2} (2y) \tag{by chain rule} \\&=\frac{2y}{1+y^2+y^4}. \end{align*} - The third one would need a chain rule. Though, you have used it for the final answer. –  hjpotter92 Mar 8 '13 at 12:59 Your answers are correct. However, one should note that $\dfrac{dF(y^2)}{dy}=F'(y^2)\cdot 2y$ (chain rule). Your final answers are perfectly fine but the intermediate step is wrong.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787868650145, "lm_q1q2_score": 0.8559193180093497, "lm_q2_score": 0.8670357683915538, "openwebmath_perplexity": 9900.99389819685, "openwebmath_score": 0.9998019337654114, "tags": null, "url": "http://math.stackexchange.com/questions/324666/application-of-2nd-fundamental-theorem-of-calculus" }
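All three answers can be checked numerically. In the sketch below (an illustration added here, not part of the original question), `F` is an explicit antiderivative of $F'(a)=1/(1+a+a^2)$ obtained by completing the square, and each derivative is approximated by a central finite difference:

```python
import math

def Fprime(a):
    # The integrand from the question
    return 1.0 / (1.0 + a + a * a)

def F(a):
    # An antiderivative, via completing the square:
    # 1 + a + a^2 = (a + 1/2)^2 + 3/4, so F(a) = (2/sqrt(3)) * atan((2a+1)/sqrt(3))
    return 2.0 / math.sqrt(3.0) * math.atan((2.0 * a + 1.0) / math.sqrt(3.0))

def ddy(g, y, h=1e-6):
    # Central finite-difference approximation of dg/dy
    return (g(y + h) - g(y - h)) / (2.0 * h)

y = 1.7
# (1) d/dy of the integral from 1 to y equals F'(y)
assert abs(ddy(lambda t: F(t) - F(1), y) - Fprime(y)) < 1e-8
# (2) d/dy of the integral from y to 1 equals -F'(y)
assert abs(ddy(lambda t: F(1) - F(t), y) + Fprime(y)) < 1e-8
# (3) d/dy of the integral from 1 to y^2 equals F'(y^2) * 2y (chain rule)
assert abs(ddy(lambda t: F(t * t) - F(1), y) - Fprime(y * y) * 2 * y) < 1e-8
print("all three derivatives agree")
```

Dropping the `2y` factor in (3) makes the third assertion fail, which is exactly the mistake the comments point out.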
# Prove $\frac{\cos^2 A}{1 - \sin A} = 1 + \sin A$ by the Pythagorean theorem. How do I use the Pythagorean Theorem to prove that $$\frac{\cos^2 A}{1 - \sin A} = 1 + \sin A?$$ - If I multiply, would it show that A is an acute angle? I have a given figure here and it's a right triangle –  user83562 Jun 23 '13 at 14:25 Do you know the identity $\cos^2 A + \sin^2 A = 1$? –  Javier Badia Jun 23 '13 at 14:58 Suppose there is a right triangle having hypotenuse $b$, base $c$, and perpendicular $a$. Then $\cos A=\frac cb$ and $\sin A=\frac ab$, and using the Pythagorean theorem, $b^2=a^2+c^2$: $$\dfrac {\cos^2 A}{1-\sin A}$$ $$\dfrac {\frac {c^2}{b^2}}{1-\frac {a}{b}}$$ $$\dfrac {\frac {c^2}{b^2}}{\frac {b-a}{b}}$$ $$\dfrac {c^2b}{b^2(b-a)}$$ $$\dfrac {c^2}{b(b-a)}$$ $$\dfrac {b^2-a^2}{b(b-a)}$$ $$\dfrac {b+a}{b}$$ $$1+\dfrac {a}{b}$$ $$1+\sin A$$ Alternatively: $$\dfrac {\cos^2 A}{1-\sin A}$$ $$\dfrac {1-\sin^2 A}{1-\sin A}$$ $$\dfrac {(1-\sin A)(1+\sin A)}{1-\sin A}$$ $${1+\sin A}$$ - Why don't you make things precise: $$1=\frac{c^2}{b^2}+\frac{a^2}{b^2}=\cos^2A+\sin^2A$$ $$\implies \cos^2A=1-\sin^2A=(1-\sin A)(1+\sin A)$$ $$\implies \frac{\cos^2A}{1-\sin A}=1+\sin A$$ –  lab bhattacharjee Jun 23 '13 at 14:32 How did it become $1-\sin^2 A$? –  user83562 Jun 23 '13 at 14:49 @user83562: Did you read lab's (excellent) comment? Since $$1=\cos^2A+\sin^2A,$$ then $$\cos^2A=1-\sin^2A.$$ By difference of squares formula, we have $$\cos^2A=(1-\sin A)(1+\sin A).$$ From there, it's almost immediate. –  Cameron Buie Jun 23 '13 at 15:18 If you allow yourself some simple algebra, then you see that $$\frac{\cos^2 A}{1-\sin A} = 1+\sin A\text{ is equivalent to } \cos^2 A = (1+\sin A)(1-\sin A)$$ and that last expression is equal to $1-\sin^2 A$. So you're asking how to prove $\cos^2 A=1-\sin^2 A$. And that's the same as proving $\cos^2 A + \sin^2 A = 1$. So the question is: How can you prove that $\cos^2 A + \sin^2 A = 1$ by using the Pythagorean theorem?
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787834701878, "lm_q1q2_score": 0.8559193133700623, "lm_q2_score": 0.8670357666736772, "openwebmath_perplexity": 505.9458982731516, "openwebmath_score": 0.9990672469139099, "tags": null, "url": "http://math.stackexchange.com/questions/427559/prove-frac-cos2-a1-sin-a-1-sin-a-by-the-pythagorean-theorem" }
If you know that $\sin =\dfrac{\mathrm{opposite}}{\mathrm{hypotenuse}}$ and $\cos=\dfrac{\mathrm{adjacent}}{\mathrm{hypotenuse}}$, then this becomes easy if you consider a right triangle in which the length of the hypotenuse is $1$. Then you have $\sin=\mathrm{opposite}$ and $\cos=\mathrm{adjacent}$. Now the Pythagorean theorem says that $$\mathrm{opposite}^2 + \mathrm{adjacent}^2 = 1^2.$$ - Let's simplify matters: In a triangle with hypotenuse equal to $1$ (think of the unit circle), with an angle $A$ between the $x$-axis and the hypotenuse, we know that $$\sin A=\frac{\text{opposite}}{\text{hypotenuse}}\quad \text{and}\quad \cos A=\frac{\text{adjacent}}{\text{hypotenuse}}$$ Then, since hypotenuse $=1$, we have the leg opposite the angle $A$ given by $\sin A=\text{opposite}/1$ and the leg along the x-axis of length $\cos A=\text{adjacent}/1$. Now, by the Pythagorean Theorem, and substitution, we have that \begin{align}\text{opposite}^2 + \text{adjacent}^2 &= \text{hypotenuse}^2 = 1^2 \\ \sin^2A +\cos^2 A & = 1\end{align} This gives us the well-known identity: $$\sin^2A + \cos^2 A = 1\tag{1}$$ We can express this identity in terms of $\cos^2 A$ by subtracting $\sin^2 A$ from both sides of the identity to get \begin{align}\cos^2 A & = 1 - \sin^2 A \\ &= (1)^2 - (\sin A)^2\tag{2}\end{align} Now, we know that for any difference of squares, we can factor as follows: $$(x^2 - y^2) = (x +y)(x - y)\tag{3}$$ Since equation $(2)$ is a difference of squares, we have that \begin{align}\cos^2 A &= 1 - \sin^2 A \\ & = (1)^2 - (\sin A)^2 \\ &= (1 +\sin A)(1 - \sin A)\tag{4}\end{align} Substituting gives us: \begin{align}\frac{\cos^2 A}{1 -\sin A} & = \frac{1 - \sin^2 A}{1 -\sin A} \\ \\ &= \frac{(1 + \sin A)(\color{blue}{\bf 1 - \sin A})}{\color{blue}{\bf 1 -\sin A}}\\ \\ & = 1 + \sin A\end{align}
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787834701878, "lm_q1q2_score": 0.8559193133700623, "lm_q2_score": 0.8670357666736772, "openwebmath_perplexity": 505.9458982731516, "openwebmath_score": 0.9990672469139099, "tags": null, "url": "http://math.stackexchange.com/questions/427559/prove-frac-cos2-a1-sin-a-1-sin-a-by-the-pythagorean-theorem" }
- You begin with "Given the well known identity,....", but the question seems to be how to get that well-known identity from the Pythagorean theorem. –  Michael Hardy Jun 23 '13 at 16:14 @amWhy: You are always welcome my friend! Time for another cheesy movie (trying to add other activities in my life). ;-) –  Amzoti Jun 24 '13 at 2:47 Let us denote the base of the right triangle by $b$, the perpendicular (height) by $p$, and the hypotenuse by $h$: $$\cos A = \frac{b}{h} ; \qquad \sin A = \frac{p}{h}\tag{i}$$ Therefore, $$\frac{\cos^2A}{1-\sin A} = \frac{\frac{b^2}{h^2}}{1-\frac{p}{h}}$$ [By putting the values of $\cos A$ and $\sin A$ from $(i)$] Which after simplification gives you: $$\frac{b^2}{h(h-p)}\tag{ii}$$ Now as $$b^2 = h^2-p^2\quad\text{[using the Pythagorean theorem]}\tag{iii}$$ By putting the value of $b^2$ from (iii) in (ii) you get: $$\frac{h^2-p^2}{h(h-p)} = \frac{h+p}{h} = 1+ \frac{p}{h} = 1+ \sin A$$ (hence proved) - Not sure how much this derivation from the left-hand side to the right is required, as this is often not the natural derivation. For example, please find my comment in the other answer. –  lab bhattacharjee Jun 23 '13 at 14:38
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787834701878, "lm_q1q2_score": 0.8559193133700623, "lm_q2_score": 0.8670357666736772, "openwebmath_perplexity": 505.9458982731516, "openwebmath_score": 0.9990672469139099, "tags": null, "url": "http://math.stackexchange.com/questions/427559/prove-frac-cos2-a1-sin-a-1-sin-a-by-the-pythagorean-theorem" }
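A quick numerical check of the identity (added for illustration, not part of the original thread) confirms it across a sweep of angles, skipping $A=\pi/2$, where $\sin A = 1$ makes the left-hand side undefined:

```python
import math

# Check cos^2(A)/(1 - sin(A)) == 1 + sin(A) over a sweep of angles
for deg in range(0, 360, 7):
    A = math.radians(deg)
    if abs(1.0 - math.sin(A)) < 1e-9:
        continue  # skip the removable singularity at sin A = 1
    lhs = math.cos(A) ** 2 / (1.0 - math.sin(A))
    rhs = 1.0 + math.sin(A)
    assert abs(lhs - rhs) < 1e-9
print("identity holds at all sampled angles")
```

This is only a spot check, of course; the algebraic proofs above are what establish the identity for every admissible $A$.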
Derivative of a constant function. A constant function does not change no matter which member of the domain is used: f(x1) = f(x2) for any x1 and x2 in the domain, so a change in x results in a zero change in f(x), and the graph of a constant function is always a horizontal line. This function may seem a little tricky at first but is actually the easiest one in this set of examples. Constant functions can be characterized with respect to function composition: a function f is constant if and only if f o g = f o h for all functions g, h : C → A (where "o" denotes function composition). Example: in "x + 5 = 9", 5 and 9 are constants. A constant function of a single variable, such as f(x) = 5, has a graph of a horizontal straight line parallel to the x-axis. Such a function always takes the same value (in this case, 5), because its argument does not appear in the expression defining the function. A constant term in an expression or equation contains no variables. In the context where it is defined, the derivative of a function is a measure of the rate of change of function values with respect to change in input values.
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
You can use direct substitution to arrive at this conclusion. In a two-dimensional plane, the graph of this type of function is a straight, horizontal line. The derivative of this type of function is just zero. For example, the function y(x) = 4 is a constant function because the value of y(x) is 4 regardless of the input value x (see image). C++ is similar to C but has more features than C; C is therefore often described as (roughly) a subset of C++. In calculus, the constant of integration, often denoted by C, is a constant added to the end of an antiderivative of a function f(x) to indicate that the indefinite integral of f(x) (i.e., the set of all antiderivatives of f(x)), on a connected domain, is only defined up to an additive constant. A constant function is a function of the form f(x) = c, where c is a constant. A function becomes const when the const keyword is used in the function’s declaration. A constant member function is an accessor function which cannot modify the values of data members. Variables and constants both carry some defined value, but the difference comes with their accessibility: a constant can be defined either as (1) define(constant_name, constant_value) or as (2) const constant_name = constant_value; the second way is faster. A constant function is a special type of linear function that follows the form f(x) = b, where 'b' is the y-intercept of the line and is just a constant: a constant function is a linear function whose slope is 0, so no matter what value of 'x' you choose, the value of the function will always be the same. Online Help: Math Apps: Functions and Relations. Retrieved from https://www.maplesoft.com/support/help/Maple/view.aspx?path=MathApps%2FConstantFunction on May 24, 2019. For example, y = 10 is a form of constant function.
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
Consider constants as having a variable raised to the power zero. A const member function performs a constant operation. Now we shall prove that the derivative of a constant function is zero with the help of the definition of the derivative. Quadratic functions aren't much different: the form of that function is f(x) = ax^2 + bx + c, where a, b, and c are constants. With a constant function, for any two points in the interval, a change in x results in a zero change in f(x). This function also works with class constants. From the general formula, the output of a constant function, regardless of its input value (usually denoted by x), will always be the same, namely the fixed number $\color{red}a$. It is of the format y = c. Exercise: which of the graphs represent a constant function? (An option fails to be a constant function if its output is different for each input.) "Piecewise constant" should not be used as a precise term without giving an explicit explanation of what you mean. The composition of f with any other function is also a constant function. The derivative is the slope at an instant (kind of). In mathematics, a constant function is a function whose (output) value is the same for every input value. The object called by these functions cannot be modified. The limit of a constant function (according to the Properties of Limits) is equal to the constant. The gradient of z will be a vector pointing “uphill” on a surface, and the length of the vector is the slope at that point. The idea of const functions is not to allow them to modify the object on which they are called.
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
The main difference between a static and a constant member function in C++ is that a static function can be called through the class, without using an object, while a constant function is not allowed to modify its object. C++ is a programming language developed by Bjarne Stroustrup in 1979. As a function requires that inputs produce outputs, it wouldn’t otherwise be a “function”. That (elaboration time) is when parameters are evaluated and generate statements are expanded. A constant function has the form $$f(x) = k,$$ where $k$ is any real number. Constant member function in C++. The derivative of a constant function is 0; however, the fundamental period of a constant function is not defined, since every T is a period. Technically, zero is a constant. This is a constant function, and so any value of x that we plug into the function will yield a value of 8. c is a constant: a number that doesn’t change as x changes. Slope is the change in y over the change in x. constant() is useful if you need to retrieve the value of a constant but do not know its name. The function graph of a one-dimensional constant function is a straight line; note that the value of f(x) is always k, independently of the value of x: a fixed value. In Haskell: `newtype Const r a = Const { unConst :: r }` with `instance Functor (Const r) where fmap _ (Const r) = Const r`. It maps every type a to r, in a sense, and every function of type a -> b to the identity function on r. We can write this type of function as f(x) = c. Basically, a constant function is simply equal to a constant. Exercise: suppose the failure function is a positive constant; show that the reliability function R(t) satisfies a separable differential equation, and solve this differential equation to find R(t). The constant() function returns the value of a constant. A constant function is a function whose range consists of a single element. No, the term "time constant" is not restricted to an asymptotic step response.
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
To represent a quantity that stays constant over the course of time t, we would use a constant function f(t) = k, in which the variable t does not appear. Defining it in terms of mathematics, we can say that a constant function is a function that has the same output value for all values of its domain. The derivative of a constant function is zero. Collection Functions (Arrays or Objects): _.each(list, iteratee, [context]), alias forEach, iterates over a list of elements, yielding each in turn to an iteratee function. This is a constant function, and so any value of x that we plug into the function will yield a value of 8. c is a constant: a number that doesn’t change as x changes. No, the term "time constant" is not restricted to an asymptotic step response.
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
For instance, a school dining room where every child was given one donut, irrespective of age, or an exam in which every student was given an A regardless of how hard they worked, behaves like a constant function. The graph of a constant function passes through the points (0, c), (1, c), and (-1, c). A constant function is a linear function for which the range does not change no matter which member of the domain is used. But 0/6 = 0, so this would not actually produce a graph. The constant() function returns the value of a constant. In layman's terms, constant functions are functions that do not move: since f(x) is equal to a constant, the value of f(x) will always be the same no matter what the value of x might be. The e constant is defined as the limit $e=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{n}$, or equivalently as the infinite series $e=\sum_{n=0}^{\infty}\frac{1}{n!}$. The reciprocal of e is the limit $\frac{1}{e}=\lim_{n\to\infty}\left(1-\frac{1}{n}\right)^{n}$. The derivative of the exponential function is the exponential function: $(e^x)' = e^x$. You can't go through algebra without learning about functions. It is of the format y = c: this is a constant function, as the output is the same for each input.
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
Recognizing a constant function, like y = 3: f(x1) = f(x2) for any x1 and x2 in the domain. However, the fundamental period of a constant function is not defined, since every T is a period. The integration constant Ti is the time constant of the integrator. Real Functions: Constant Functions. A constant function is a function that always returns the same constant value. The constant function is used to provide a way to define a very simple price. Consider the function f(x) = 3: there is no variable in the definition (on the right side). This is a function of the type f(x) = k, where k is any real number. The slope m tells us if the function is increasing, decreasing, or constant. This function may seem a little tricky at first but is actually the easiest one in this set of examples. Function Definitions and Function Notation. This section describes what a constant is: the define() function defines a constant name with a value; the constant's value can be retrieved by the name directly or by the constant() function; any string can be used as a constant name. It is recommended to use the const keyword so that accidental changes to the object are avoided. In the context where it is defined, the derivative of a function measures the rate of change of function (output) values with respect to change in input values. Yes, a constant function is a periodic function with any T∈R as its period (as f(x) = f(x+T) always, for howsoever small a 'T' you can find). If the answer is infinity, you can type INF in the blank. The constant functions cut through the vertical axis at the value of the constant and are parallel to the horizontal axis (and therefore do not cut through it). constant() is useful when the constant's name is stored in a variable or returned by a function.
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
Because a constant function does not change, its derivative is 0. This graph is shown below. The iteratee is bound to the context object, if one is passed. In other words, the constant function is the function f(x) = c. An example of data for the constant function expressed in tabular form is presented below. What Is a Constant? The following shows the graph of f(x) = 10, and the integral f(x) = 10x. This function also works with class constants. This tutorial shows you a great approach to thinking about functions! This video is provided by the Learning Assistance Center of Howard Community College. For the identity function y = x, the coordinate pairs are (x, x). Constant and define() Function. C++ is similar to C but has more features; C is therefore often described as (roughly) a subset of C++. In C++ programming, the const member functions are the functions which are declared as constant in the program. A constant function has the general form f(x) = a, where a is a real number. The general transfer function of an integrator is (using your notation) H(s) = k/s = 1/(s/k).
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
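The calculus facts repeated on this page (the derivative of a constant function is zero, and the integral of f(x) = 10 is 10x) can be sanity-checked with a short sketch. This is an illustration added here, not part of the original page, and the helper names are mine:

```python
def f(x):
    # A constant function: the output is 10 for every input
    return 10.0

# Constant-function property: f(x1) == f(x2) for any x1, x2 in the domain
assert f(-3.0) == f(0.0) == f(42.0) == 10.0

# Derivative: a central difference of a constant function is exactly zero
h = 1e-6
for x in (-2.0, 0.0, 5.0):
    assert (f(x + h) - f(x - h)) / (2 * h) == 0.0

# Integral of f over [0, t] via a left Riemann sum approaches 10*t
def riemann(t, n=10000):
    step = t / n
    return sum(f(i * step) * step for i in range(n))

for t in (0.5, 1.0, 2.0):
    assert abs(riemann(t) - 10.0 * t) < 1e-9
print("derivative 0 and integral 10*t confirmed")
```

The Riemann sum is exact here up to floating-point rounding, since every rectangle has the same height.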
It is of the format y = c. Exercise: which of the graphs represent a constant function? (An option fails to be a constant function if its output is different for each input.) A constant function is a horizontal line. Mathematically speaking, a constant function is a function that has the same output value no matter what your input value is. In algebra, a constant is a number on its own, or sometimes a letter such as a, b or c to stand for a fixed number. The main difference between a static and a constant function in C++ is that the static function allows calling functions using the class, without using the object, while the constant function does not allow modifying objects. C++ is a programming language developed by Bjarne Stroustrup in 1979. A constant function is an even function, i.e. the graph of a constant function is symmetric with respect to the y-axis, so the y-axis is an axis of symmetry for every constant function. A constant function has the general form f(x) = a, where a is a real number. Now, it is common usage to set 1/k = Ti, resulting in H(s) = 1/(s·Ti). The blue square represents the integral when evaluated from 0 to 1. constant() is useful if you need to retrieve the value of a constant but do not know its name. Every function with a derivative equal to zero is a constant function. The term without a variable attached is “c”, so that is the constant term. You can't go through algebra without learning about functions.
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
From the general formula, the output of a constant function, regardless of its input value (usually denoted by x), will always be the same, namely the fixed number $\color{red}a$. Generally, it is a function which always has the same value no matter what the input is. We can make use of this interest to encourage a little thinking. This is a constant function, and so any value of x that we plug into the function will yield a value of 8.
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
as constant in the construction of antiderivatives is the. And question complexity, you can get step-by-step solutions to your questions an! Is any real number we can make use of this type of function is equal to,! In INF in what is constant function program function graph of f ( x ) is “ c ” so... That do not move or equation contains no variables visualize this for a product or service form y =,. Or service Varsity Tutors, a constant is any real number to allow them modify...: functions and Relations.Retrieved from https: //www.calculushowto.com/constant-function/ for example, y = x is what is constant function!, y ) as your function great example of a constant that changes! Video is provided by the respective media outlets and are not affiliated with Varsity Tutors does not,! Function returns the same for every constant function is a function that ignores its arguments and always the! Other words, it is common Usage what is constant function set 1/k=Ti resulting in H s... Their own style, methods and materials are evaluated and generate statements are expanded not defined the. To allow them to modify the object called by these functions can be represented what is constant function! It is recommended the practice to make as many functions const as possible so that accidental changes to object avoided. So the y-axis should not be assigned later to it the trademark holders and are not with... Inherent in the construction of antiderivatives raised to the power zero bound to the power zero according to the of... One is passed practice to make as many functions const as possible so that changes. At elaboration time a derivative equal to 3, no matter what your input value in domain. Iteratee is bound to the context object, if one is passed that returns. Understand what slope means tailor their services to each client, using their style. Its arguments and always gives the same constant value, independently of the domain of Limits is! Constant, but do not move called a
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
same constant value, independently of the domain of Limits is! Constant, but do not move called a subset of c language vary., x ) = f ( x ) = c, where c a! Real number, 1.2 and pi ( π = 3.14… ) c ), new! As possible so that is when parameters are evaluated and generate statements are expanded constant member is! Help: Math Apps: functions and Relations.Retrieved from https: //www.maplesoft.com/support/help/Maple/view.aspx? path=MathApps 2FConstantFunction... Use const keyword so that accidental changes to objects are avoided through point. Integral f ( x ) = 2x2 + 3 ( the constant in! Limit of a constant function is equal to 3, no matter what your input value its... Algebra without learning about functions its domain is used with that of x member function is a line... Affiliated with Varsity Tutors LLC one great example of a constant function is not defined for the reason... Mentioned on its website to each client, what is constant function their own style, methods and.. X that we plug into the function graph of f ( x ) = 10 10x. Such a constant function is a function which can not modifying values of data members a... Practically Cheating Calculus Handbook, https: //www.maplesoft.com/support/help/Maple/view.aspx? path=MathApps % 2FConstantFunction on 24! K, independently of the type f ( x ) = 2x2 3... Us if the answer in infinity, you can type in INF in the domain identical. Outlets and are not affiliated with Varsity Tutors 3 ) modify the object which... Not affiliated with Varsity Tutors of what you mean Calculus Handbook, https: //www.maplesoft.com/support/help/Maple/view.aspx? %! + B notation ) H ( s ) =k/s=1/ ( s/k ) function in. Defined for the above reason dimensional plane, the Practically Cheating Calculus Handbook, https: //www.calculushowto.com/constant-function/ in! Just zero there are three constants in this set of examples slope of a constant function is. Definition of a line is: y = c, where c is a constant is zero, have... For a 2-dimensional
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
is. Definition of a line is: y = c, where c is a constant is zero, have... For a 2-dimensional function shows you a great approach to thinking about functions k is any number!
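The "same output for every input" idea is easy to see in code. Here is a minimal Python sketch (the helper name `make_constant` is my own invention, not a standard library function):

```python
def make_constant(c):
    """Return the constant function f(x) = c: it ignores its
    input and always gives back c."""
    def f(x):
        return c
    return f

f = make_constant(3)
assert f(0) == f(100) == f(-2.5) == 3   # same value for every input

# The slope between any two points on the graph y = 3 is zero,
# which is why the derivative of a constant function is zero.
x1, x2 = -1.0, 1.0
slope = (f(x2) - f(x1)) / (x2 - x1)
assert slope == 0.0
```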
{ "domain": "paulcupido.nl", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787853562028, "lm_q1q2_score": 0.8559193116136021, "lm_q2_score": 0.8670357632379241, "openwebmath_perplexity": 447.6038189694047, "openwebmath_score": 0.7129285931587219, "tags": null, "url": "https://www.paulcupido.nl/l3zg73/11ba1e-what-is-constant-function" }
# How fast do Riemann sums converge for Lipschitz-continuous functions?

I'm in the following situation: Let's say I've got a non-negative function $f$ that's globally Lipschitz-continuous on some interval $[a,b]$ for some constant $K$. I'm throwing a dart uniformly on the area below $f$ on that interval.

Now consider the sequence of lower Riemann sums that results from bisecting all the intervals in the previous partition at each step. The dart will eventually be covered by the area corresponding to one of these lower Riemann sums (and all the following). Then define the random variable $X$ to be the index of the first such sum.

My question is: Can the expected value of $X$ somehow be bounded using $K$? Intuitively, since $f$ is Lipschitz, the Riemann sums exhaust the area below $f$ very quickly, so this expectation should be quite small. But I'm having a lot of trouble quantifying this intuition.

• I am confused about the setup: If you partition an interval, the dart will always be a part of any partition. A partition is just an ordered sequence on $[a,b]$. What does it mean for the dart to be "eventually" covered? Mar 29 '18 at 2:54
• The dart is an element of $\mathbb{R}^2$. The vertical columns corresponding to some partition cover some area of $\mathbb{R}^2$. Eventually these columns will cover the dart. Mar 29 '18 at 3:04

Let $m,M$ be the points in $[a,b]$ where $f$ attains its minimum and maximum. Since $f$ is $K$-Lipschitz, $$f(M)-f(m) \leq K|M-m| \leq K(b-a)$$ But $\int_a^b f(x) \,dx \leq (b-a)f(M)$ and so this yields $$\int_a^b f(x) \,dx - (b-a)f(m) \leq K(b-a)^2 \tag{*}$$ If we subdivide $[a,b]$ into $2^n$ equal-length subintervals, apply this inequality to each of them, and then sum, we get
$$\int_a^b f(x)\, dx - L_{2^n}(f) \leq \frac{K}{2^n}(b-a)^2$$

where $L_{2^n}(f)$ is the lower Riemann sum corresponding to this subdivision. So

$$1 - \frac{L_{2^n}(f)}{\int_a^b f(x) \,dx} \leq \frac{K(b-a)^2}{2^n\int_a^b f(x) \, dx}$$

That is, the probability that the dart remains uncovered after the $n$th subdivision is at most $\frac{K(b-a)^2}{2^n\int_a^b f(x) \, dx}$. So, if $P_n$ is the probability that the dart is uncovered after $n$ subdivisions, we have

$$E[X]=\sum_{n=1}^\infty n(P_{n-1} - P_n) =\sum_{n=0}^\infty P_n \leq \frac{K(b-a)^2}{\int_a^b f(x) \, dx}\sum_{n=0}^\infty \frac{1}{2^n}=\frac{2K(b-a)^2}{\int_a^b f(x) \, dx}$$

In fact we can improve on the key inequality $(*)$ with a little more work. We have $$f(x) - f(m) \leq K |x-m|$$ for all $x \in [a, b]$. Integrating over $[a,b]$ gives

\begin{align} \int_a^b [f(x) - f(m)]\, dx &\leq K \int_a^b |x-m|\, dx \\ &= \frac{K}{2}[(m-a)^2 + (b-m)^2] \\ &= \frac{K}{2}[(a-b)^2-2(m-a)(b-m)] \\ &\leq \frac{K}{2}(a-b)^2 \end{align}

and so we have

$$\int_a^b f(x) \, dx - (b-a)f(m) \leq \frac{K}{2} (a-b)^2 \tag{**}$$

a factor-of-$2$ improvement over $(*)$. Moreover, if $f$ is linear with slope $\pm K$, then equality is attained, so this bound is tight. Carrying this through the rest of the argument we end up with

$$E[X] \leq \frac{K(b-a)^2}{\int_a^b f(x) \, dx}$$

with equality when $f$ is linear with slope $\pm K$.

• Wow, this came out quite elegantly. The way you bounded the difference between the integral and the Riemann sums was I think a key step. Is this an argument that you have seen used elsewhere before? Mar 29 '18 at 4:27
• Sorry, I don't have any specific references. I think some version of "find an easy bound, then work out what happens to it as you subdivide" is present in most calculations involving Riemann sums (at least, most of the ones I'm capable of coming up with...) Mar 29 '18 at 4:48
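The bound can be sanity-checked numerically. Below is a rough Monte Carlo sketch in Python; the particular test function $f(x) = 2 + \sin x$ on $[0, 2]$ is my own choice, not from the question (it is $1$-Lipschitz and rises then falls, so its infimum on any subinterval sits at an endpoint):

```python
import math
import random

random.seed(0)

# Example setup (my choice): f(x) = 2 + sin(x) on [a, b] = [0, 2] is 1-Lipschitz.
a, b, K = 0.0, 2.0, 1.0
f = lambda x: 2.0 + math.sin(x)
fmax = 3.0
integral = 2.0 * (b - a) + (math.cos(a) - math.cos(b))  # exact value of the integral of f

def lower_height(x, n):
    """Height of the lower Riemann sum with 2^n equal subintervals, over the
    subinterval containing x.  Since this f rises then falls, its infimum on
    a subinterval is the smaller of the two endpoint values."""
    width = (b - a) / 2 ** n
    k = min(int((x - a) / width), 2 ** n - 1)
    return min(f(a + k * width), f(a + (k + 1) * width))

def sample_X():
    # Throw a dart uniformly under the graph of f (rejection sampling),
    # then count subdivisions until the dart is covered.
    while True:
        x, y = random.uniform(a, b), random.uniform(0.0, fmax)
        if y < f(x):
            break
    n = 0
    while y >= lower_height(x, n) and n < 60:
        n += 1
    return n

trials = 20000
mean_X = sum(sample_X() for _ in range(trials)) / trials
bound = 2 * K * (b - a) ** 2 / integral   # the first (weaker) bound from the answer
assert mean_X <= bound
```

With this $f$ the simulated mean of $X$ comes out well under the bound $2K(b-a)^2/\int_a^b f \approx 1.48$, as the argument predicts.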
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787872422175, "lm_q1q2_score": 0.8559193098571415, "lm_q2_score": 0.8670357598021707, "openwebmath_perplexity": 227.15929296961895, "openwebmath_score": 0.9977597594261169, "tags": null, "url": "https://math.stackexchange.com/questions/2712668/how-fast-do-riemann-sums-converge-for-lipschitz-continuous-functions" }
# In combinatorics, how can one verify that one has counted correctly?

This is a soft question, but I've tried to be specific about my concerns.

When studying basic combinatorics, I was struck by the fact that it seems hard to verify if one has counted correctly. It's easiest to explain with an example, so I'll give one (it's fun!) and then pose my questions at the bottom.

In this example there are two ways to count the number of ways $2n$ people can be placed into $n$ partnerships (from the book Introduction to Probability by Blitzstein and Hwang):

$$\frac{(2n)!}{2^n \cdot n!} = (2n - 1)(2n - 3) \cdots 3 \cdot 1$$

Story proof for the right side: For the first person you have $(2n - 1)$ choices for partner. Then for the next unpaired person you have $(2n - 3)$ choices, etc.

Story proof for the left side: Line all $2n$ people up, walk down the line, and group each successive two people into a pair. The ordering you chose for the line determines the pairs, and there are $(2n)!$ ways to order $2n$ people. But divide by $2^n$ because the order within each pairing doesn't matter. Also divide by $n!$ because the order of the pairings doesn't matter.

My question is: What if I had been tasked with getting this number at work, and I chose the approach on the left side, but I neglected to divide by $2^n$ and only divided by $n!$? I'd be wrong of course, but how could I have known?

I suppose one answer to my question could just be "Try hard, try to think of ways you might be over/undercounting, look at other examples, and continue to study theory." But the prospect of not knowing if I'm wrong makes me nervous. So I'm looking for more concrete things I can do.

So, three questions of increasing rigor:
• What are specific "habits of mind" that people in combinatorics use to avoid error?
• What are specific validation techniques that such people use to check their work? (One does come to mind from my example: Calculate the same thing two ways and confirm equality)
• Is there any formal list of questions that, if answered, should verify that one's approach is correct?

• Checking your answer for small $n$ (in cases where you can count manually) is a good sanity check. – angryavian Jun 28 '17 at 23:45
• In 1962 in my sophomore Intro to Probability, my prof asked how many legs a cow had. Twelve, he explained: two in front, two in back, two on the left, two on the right and one at each corner. We did not think it was funny. A few weeks later he told the same joke and everyone laughed. – John Wayland Bales Jun 28 '17 at 23:46
• Often, you can't be sure. And that's one reason I dislike combinatorics personally - I'm never sure if I've massively screwed up. :) – Deepak Jun 29 '17 at 0:06
• I think it was in the preface to the 3rd edition of "Generatingfunctionology" where I read the most(?) bijective (read counting) proofs we know were first discovered from messing around with generating functions in the first place. And apparently "A = B" by Petkovsek (and others) provides a(the?) fool-proof method/s of solving most any textbook problem of the kind you mentioned. Haven't read either yet, but google them and see for yourself. – user448692 Jun 29 '17 at 1:20
• @Stephen If you divide that by $5!$: $$\frac{ {10 \choose 2}{8 \choose 2}{6 \choose 2}{4 \choose 2}{2 \choose 2}}{5!}$$ you get the right answer. – Trevor Gunn Jun 29 '17 at 19:24

Here are some methods I have used. Some of these have already been suggested in the comments, and none of them are fool-proof.
1. Work out a few small cases by hand, i.e. generate all the possibilities, and verify that your formula works in those cases. (Often working out the small cases will also suggest a method of solution.)
2. Solve the problem by two different methods. For example, often a problem which is solved by the principle of inclusion/exclusion can also be solved by generating functions. Or it may be that you can derive a recurrence that the solution must satisfy, and verify that your solution satisfies the recurrence; if it is too hard to do the general case, verify the recurrence is satisfied for a few small cases.
3. Write a computer program that solves a particular case of the problem by "brute force", i.e. enumerates all the possibilities, and check the count against your analytic answer. If you don't yet know how to program, this is one of many reasons why it's good to learn how. For example, how many ways can you make change for a dollar? It's probably easier to write a program that generates all the ways than to solve this problem analytically.
4. Sometimes a combinatoric problem can be reinterpreted as a probability problem. For example, the problem of counting all the poker hands that form a full house is closely connected to the probability of drawing a full house. In this case you can check your answer by Monte Carlo simulation: use a random number generator to simulate many poker hands and count the number which are full houses. Once again, this requires computer programming skills. You won't get an exact match against your analytic answer, but the proportion of full houses should be close to your analytic result. Here it helps to know some basic statistics, e.g. the Student's t test, so you can check to see if your answer falls in the range of plausible results.
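Method 3 can be applied to the pairing problem from the question itself. Here is a sketch in Python (the recursive enumerator is ad hoc, written just for this check) that counts all partnerships by brute force and compares against both closed forms:

```python
from math import factorial

def count_pairings(people):
    """Brute force: pair the first person with each possible partner,
    then recurse on whoever is left over."""
    if not people:
        return 1
    rest = people[1:]  # people[0] gets paired with each candidate in turn
    return sum(count_pairings(rest[:i] + rest[i + 1:]) for i in range(len(rest)))

def formula_left(n):
    # (2n)! / (2^n * n!)
    return factorial(2 * n) // (2 ** n * factorial(n))

def formula_right(n):
    # (2n - 1)(2n - 3) ... 3 * 1
    result = 1
    for k in range(1, 2 * n, 2):
        result *= k
    return result

for n in range(1, 6):
    assert count_pairings(list(range(2 * n))) == formula_left(n) == formula_right(n)
```

A disagreement for any small $n$ would immediately flag a missing factor, such as the forgotten $2^n$ from the question.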
• Being a programmer, I think, "of course my hammer works on this nail", but intellectually I rebel against the idea that the best way to check my theory is to use a machine that didn't exist when the study of combinatorics was born... – Kristen Hammack Jun 29 '17 at 12:55
• @KristenHammack - Why would that be a problem? Hammers and nails didn't exist when someone first had to figure out how to hold two pieces of wood together. But in most situations they are a superior solution to tying the wood together with braided grass or strips of leather. – Paul Sinclair Jun 29 '17 at 16:49
• @PaulSinclair that's certainly an interesting way of looking at it. But I was wondering if maybe there was a more elegant "screwdriver" type of solution, with the brute force programming method being more like "let me hammer this screw into the wood", and a formal proof being "here's the perfectly-fitting screwdriver for this particular screw". I'm used to seeing that in CS. – Kristen Hammack Jun 29 '17 at 17:04
• Plus a programming check probably won't give much extra intuition about why a solution is correct or incorrect. I am also a programmer and wouldn't hesitate to write a program to do the check if I needed it, but it wouldn't be my first choice. – Stephen Jun 29 '17 at 21:42
• @KristenHammack But the question is not about "how do you figure this out". It is about "I think I have it solved, but I want to double-check that I haven't missed something". For this, "experimental mathematics" is and always has been a tool-of-choice. And computers vastly increase the amount of experimentation that is feasible. And for Stephen's comment, even if you are still trying to figure it out, having a good sample of actual solutions can be a godsend for better understanding the behavior. Computer proofs may be boring, but they are excellent mathematical telescopes. – Paul Sinclair Jun 29 '17 at 23:10
Here are some other methods:

• Estimate what the answer should be. For example, let's say I want to count all 3-element subsets of $1,\dots,n$ where the sum of any two elements is not equal to the third. I think to myself "hmm, if $n$ is large then most three element subsets should be valid." So whatever the answer is, I know that it should be close to $\binom{n}{3}$ if $n \gg 0$.
• Also keep in mind that your intuition about how large the answer should be might be off. This is ok! It means you will put more thought into the problem.
• Have someone look over your work. This can work really well because sometimes you'll have something you are mistaken about in the back of your mind when you are writing. When this happens, what you write will (hopefully) look funny to other people whereas it might take a lot of thought on your part to catch that something is off.
• Be more rigorous. This is easier said than done, obviously. For example if you are applying a theorem, look to make sure that you satisfy all the necessary hypotheses to be able to apply it.
• Slow down and think about "why am I multiplying here?" or dividing, or differentiating. Make sure the algebra makes sense combinatorially. If you are dividing, make sure you end up with integers where there should be integers.

• Having somebody look over work is a very good suggestion. Fortunately I am in a programming context where code review is de rigueur – Stephen Jun 29 '17 at 1:34

It is actually possible to be rigorous in questions like this – though it can be a lot of work.

The first step is to build a mathematical model that corresponds to the question. This part deserves some thought, and there is scope for error, but usually just expressing the question mathematically is easier to do without error than finding the actual answer.
For this example, we might say that a partnering of a group of $2n$ people (whom we'll refer to as $G=\left\{0, 1, \ldots, 2n-1\right\}$) is any function $p: G \rightarrow G$ such that:

• For all $m$: $p(p(m)) = m$ (i.e. a person's partner is paired with the person)
• For all $m$: $p(m) \neq m$ (i.e. nobody is partnered with themself)

Hopefully it's fairly clear that the above corresponds to the definition of a partnering from the question - if not, you can do it in a way that's clearer to you. Having made that definition, the question becomes: how many such functions $p$ are there?

One way to establish the answer rigorously is to come up with a bijection $B$ between the set of partnerings $P = \{p: p \,\hbox{is a partnering of $2n$ people}\}$ and an initial subset of the natural numbers $\mathbb N$. That can be done as follows – I'll omit a lot of details for brevity, but hopefully it should be clear that this can be made fully rigorous.

Given a partnering $p$: Start with a set $U_0=G$ of unpartnered people, and a value $v_0=0$. At each step $k$, starting from 0:

• Consider the person from $U_k$ with the smallest number – call them $g_k$.
• Find out who they're partnered with – that is, evaluate $p(g_k)$ - and find that person's "rank" in $U_k\setminus \{g_k\}$ starting from 0. That is, if (of all people other than $g_k$ who are so far unpartnered) $p(g_k)$ has the lowest number, call them rank 0, and if there are three with numbers lower than $p(g_k)$ that are unpartnered, call them rank 3, and so on. Call that rank $r_k$.
• Set $U_{k+1} = U_k \setminus \{g_k, p(g_k)\}$ and $v_{k+1} = v_k\left(\left|U_k\right|-1\right) + r_k$. That is, take out the two people we just partnered, and calculate a new value from the previous step's value and the rank of the person we just partnered with $g_k$.

Repeat for $k$ from 0 to $n-2$, and take $v_{n-1}$ as our result for $B(p)$.
It's clear that, given any partnering $p$, this procedure gives us back a number $B(p)$. What may not yet be clear is that, given $B(p)$, we can determine what $p$ was – that is, $B$ is a bijection. I won't go into the details of that, but note that, when calculating the value, the person who gets partnered with person 0 has their rank multiplied by $(2n-3)(2n-5)\ldots1$, and we can get it back by dividing $B(p)$ by that number – the integer part of the answer gives us $p(0)$, and then we can use the remainder in a similar way to deduce the rest of the pairings in $p$.

So (again I'm leaving out some proof details) we have a function that is a bijection between $\left\{0,1,2,\ldots,m-1\right\}$ and the set of possible partnerings of $2n$ people. This is enough to show that there are $m$ such partnerings – and it's reasonably easy to see (by taking the maximum possible value at each step in the procedure above) that the value of $m$ is exactly $(2n-1)(2n-3)\cdots 1$ as you already knew.
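The procedure above is concrete enough to run. Here is a Python sketch of one possible implementation (my own encoding of the details: pairings are dicts, and the decode step peels the ranks back off as mixed-radix digits); round-tripping every value for $n=3$ is an executable check that $B$ really is a bijection onto $\{0,\ldots,m-1\}$:

```python
def encode(p, n):
    """B(p): map a pairing p of {0, ..., 2n-1} (with p[p[i]] == i and
    p[i] != i) to a number, following the ranking procedure above."""
    unpartnered = list(range(2 * n))
    v = 0
    for _ in range(n):
        g = unpartnered.pop(0)          # smallest unpartnered person
        r = unpartnered.index(p[g])     # rank of g's partner among the rest
        v = v * len(unpartnered) + r    # multiplier is |U_k| - 1
        unpartnered.remove(p[g])
    return v

def decode(v, n):
    """Invert B: read the ranks back off as mixed-radix digits."""
    ranks = []
    for size in range(2, 2 * n + 1, 2):  # |U_k| at the last step is 2, then 4, ...
        ranks.append(v % (size - 1))
        v //= size - 1
    p = {}
    unpartnered = list(range(2 * n))
    for r in reversed(ranks):
        g = unpartnered.pop(0)
        partner = unpartnered.pop(r)
        p[g], p[partner] = partner, g
    return p

n = 3
m = 5 * 3 * 1                            # (2n-1)(2n-3)...1
decoded = [decode(v, n) for v in range(m)]
assert all(encode(p, n) == v for v, p in enumerate(decoded))
assert all(p[p[i]] == i and p[i] != i for p in decoded for i in range(2 * n))
```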
• It's good to be able to do things with symbols like this; it's seldom necessary. Excellent mathematics uses symbols to convey an intuitive understanding. Mediocre mathematics uses symbols in place of intuitive understanding. Poor mathematics uses symbols to disguise the total absence of intuitive understanding. – Wildcard Jun 29 '17 at 19:37
• @Wildcard: I don't think anyone said it was necessary! The point is that it is possible to achieve a high degree of rigour in combinatorics. The OP was asking about how to be more sure about one's answers; symbolic reasoning is a powerful tool that can help with that. – psmears Jun 29 '17 at 20:08
• psmears upvoted for your efforts. I didn't quite understand it, but I will come back to it once my math is a bit stronger (specifically once I understand bijections). Also @Wildcard that is quite funny. – Stephen Jun 29 '17 at 23:52
• @Stephen: Sorry--it's hard to know what level to go for! A "bijection" is a fancy name for a function that is invertible: the basic idea above is to find a way of giving each possible partnering a unique number, such that given a partnering you can find its number, and given a number you can find the corresponding partnering. Once that's done, if the partnerings are numbered 0 to $m-1$, you know there are $m$ partnerings in total. The advantage is everything (after the initial definition) can be 100% rigorous--but that's a lot of work (my long answer omits many details), hence it's rarely done! – psmears Jun 30 '17 at 9:38
• This is an enlightening answer. We go to a lot of trouble to develop other subjects rigorously, we should know how to do it for combinatorics as well. I especially like your emphasis on the modeling step, which I think is often glossed over in combinatorics books. – littleO Jul 6 '17 at 9:24
# How to solve simple systems of differential equations

Say we are given a system of differential equations

$$\left[ \begin{array}{c} x' \\ y' \end{array} \right] = A\begin{bmatrix} x \\ y \end{bmatrix}$$

Where $A$ is a $2\times 2$ matrix. How can I in general solve the system, and secondly sketch a solution $\left(x(t), y(t) \right)$, in the $(x,y)$-plane?

For example, let's say

$$\left[ \begin{array}{c} x' \\ y' \end{array} \right] = \begin{bmatrix} 2 & -4 \\ -1 & 2 \end{bmatrix} \left[ \begin{array}{c} x \\ y \end{array} \right]$$

Secondly I would like to know how you can draw a phase plane. I can imagine something like setting $c_1 = 0$ or $c_2=0$, but I'm not sure how to proceed.

- Please do not modify substantially the question after several answers are posted. –  Did Jan 12 '13 at 19:42
- ? I did not, because I asked it already, but nobody noted it –  MSKfdaswplwq Jan 13 '13 at 14:10
- This is not true: the whole second part of your question appeared after Wooster's answer and mine were posted. Once again: please do not do that. –  Did Jan 13 '13 at 15:23

If you don't want to change variables, there is a simple way to calculate $e^A$ (it covers all cases). Let me explain.

Let $A$ be a matrix and $p(\lambda)=\lambda^2-\operatorname{tr}(A)\lambda+\det(A)$ the characteristic polynomial. We have 2 cases:

$1$) $p$ has two distinct roots

$2$) $p$ has one root with multiplicity 2

Case 2 is simpler: In this case we have $p(\lambda)=(\lambda-a)^2$. By Cayley–Hamilton it follows that $p(A)=(A-aI)^2=0$. Now expand $e^x$ in a Taylor series around $a$:

$$e^x=e^a+e^a(x-a)+e^a\frac{(x-a)^2}{2!}+...$$

Therefore

$$e^A=e^aI+e^a(A-aI)$$

Note that $(A-aI)^2=0$ $\implies$ $(A-aI)^n=0$ for all $n\ge2$.

Case $1$: Let $A$ be your example. The eigenvalues are $0$ and $4$. Now we choose a polynomial $f$ of degree $\le1$ such that $e^0=f(0)$ and $e^4=f(4)$ (there is only one). In other words, what we want is a function $f(x)=cx+d$ such that

$$1=d$$ $$e^4=c4+d$$
Solving this system we have $c=\dfrac{e^4-1}{4}$ and $d=1$. I claim that

$$e^A=f(A)=cA+dI=\dfrac{e^4-1}{4}A+I$$

In general, if $\lambda_1$ and $\lambda_2$ are the distinct eigenvalues, and $f(x)=cx+d$ satisfies $f(\lambda_1)=e^{\lambda_1}$ and $f(\lambda_2)=e^{\lambda_2}$, then

$$e^A=f(A)$$

If you are interested, I can explain more (it is not hard to see why this is true).

Now I will solve your equation using the above. What we need is $e^{tA}$. The eigenvalues of $tA$ are $0$ and $4t$. Then for $t\neq 0$,

$$e^{tA}=\dfrac{e^{4t}-1}{4t}\,(tA)+I=\dfrac{e^{4t}-1}{4}A+I,$$

and the final expression is also valid at $t=0$ by continuity.

- @JoyeuseSaintValentin If you are interested why case 2 is true, I can explain more (it is not hard to see why this is true) –  user52188 Jan 12 '13 at 19:39

Quite generally,

$$\left[ \begin{array}{c} x(t) \\ y(t) \end{array} \right] = \mathrm e^{tA}\begin{bmatrix} x(0) \\ y(0) \end{bmatrix},$$

where, by definition,

$$\mathrm e^{tA}=\sum_{n=0}^{+\infty}\frac{t^n}{n!}A^n.$$

To compute $\mathrm e^{tA}$ in the case at hand, note that $A^2=4A$, hence

$$\mathrm e^{tA}=I+\sum_{n=1}^{+\infty}\frac{t^n}{n!}4^{n-1}A=I+\frac{\mathrm e^{4t}-1}4A.$$

Hence,

$$x(t)=\frac{\mathrm e^{4t}+1}2x(0)+(1-\mathrm e^{4t})y(0),$$

and

$$y(t)=\frac{1-\mathrm e^{4t}}4x(0)+\frac{\mathrm e^{4t}+1}2y(0).$$
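The closed form $e^{tA}=I+\frac{e^{4t}-1}{4}A$ can be checked numerically against the defining power series. Here is a small self-contained Python sketch (the 2×2 matrix helpers are ad hoc, written just for this check):

```python
import math

# The specific matrix from the question; 2x2 matrices as nested lists.
A = [[2.0, -4.0], [-1.0, 2.0]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def expm_series(M, terms=60):
    """Truncated power series: sum over n < terms of M^n / n!."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity, the n = 0 term
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = mat_mul(power, M)
        fact *= n
        result = mat_add(result, mat_scale(1.0 / fact, power))
    return result

def expm_closed(t):
    """Closed form e^{tA} = I + (e^{4t} - 1)/4 * A, using A^2 = 4A."""
    I = [[1.0, 0.0], [0.0, 1.0]]
    return mat_add(I, mat_scale((math.exp(4 * t) - 1) / 4, A))

t = 0.3
series = expm_series(mat_scale(t, A))
closed = expm_closed(t)
for i in range(2):
    for j in range(2):
        assert abs(series[i][j] - closed[i][j]) < 1e-9
```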
- I know the definition of $e^{tA}$ as a series. I also understand that $$\mathrm e^{tA}=I+\sum_{n=1}^{+\infty}\frac{t^n}{n!}4^{n-1}A$$ But can you explain the last equality: $$I+\sum_{n=1}^{+\infty}\frac{t^n}{n!}4^{n-1}A=I+\frac{\mathrm e^{4t}-1}4A.$$ –  MSKfdaswplwq Jan 12 '13 at 12:54
- Am I mistaken or does your second comment answer the question in your first comment? –  Did Jan 12 '13 at 12:55
- No, I just messed it up. My question is why the last equality is true –  MSKfdaswplwq Jan 12 '13 at 12:57
- Because $\sum\limits_{n=1}^{+\infty}\frac{t^n}{n!}4^{n-1}=\frac{1}{4}\sum\limits_{n=1}^{+\infty}\frac{(4t)^n}{n!}$ and the second series is almost the expansion of $\mathrm e^{4t}$ (almost because the $n=0$ term is lacking). –  Did Jan 12 '13 at 13:02
- Eigenvalues are always in the background, one way or another. For example, here $A^2=4A$ because $\chi_A(x)=x^2-4x$ and $\chi_A(A)=0$ by Cayley-Hamilton. –  Did Jan 12 '13 at 15:29

$\newcommand{\vect}[1]{\mathbb{#1}}$Try finding out about diagonalisation of matrices. (If you do not already know about this.)

The basic idea is that I can find a particular matrix $P$, and a diagonal matrix $\Lambda$; these combine in such a way that $A = P \Lambda P^{-1}$. (These matrices relate to the eigenvectors and eigenvalues of your matrix $A$.)

The way we use diagonalisation is as follows. Let me redefine the question slightly so that it is easier for me to explain: I shall use the differential equation $$\begin{bmatrix}x_1'\\x_2'\end{bmatrix} = A \begin{bmatrix}x_1\\x_2\end{bmatrix}.$$ Let me call the vector of functions $\vect{x}$: then I can write our equation as $$\vect{x}' = A \vect{x}.$$ Replacing $A$ with my expression $A = P\Lambda P^{-1}$, I have $$\vect{x}' = P \Lambda P^{-1} \vect{x},$$ or $$P^{-1} \vect{x}' = \Lambda P^{-1} \vect{x}.$$ But the matrix $P$ is just a constant matrix, so if I were to define $\vect{y} = P^{-1} \vect{x}$, then we would simply have $$\vect{y}' = \Lambda \vect{y}.$$
Big deal, you might think: this is just like the old equation. But remember that $\Lambda$ is a diagonal matrix. So in fact this equation looks like $$\begin{bmatrix}y_1'\\y_2'\end{bmatrix} = \begin{bmatrix}\lambda_1 & 0 \\0 & \lambda_2\end{bmatrix} \begin{bmatrix}y_1\\y_2\end{bmatrix},$$ which is far simpler to solve. I can solve the differential equations for our $y_i$ functions, and then use the equation $$\vect{x} = P \vect{y}$$ to find the solution to our original problem. [EDIT: removed references to $P$ being orthogonal which are incorrect.] - Thank you, that made sense. But actually my matrix is not diagonalizable. Then some Jordan theorem applies I guess? –  MSKfdaswplwq Jan 12 '13 at 12:56 Sorry to interrupt but A is diagonalizable. –  Did Jan 12 '13 at 13:03 Indeed it is. I was just trying to work it out, but it certainly looks diagonalisable. Eigenvalues 0 and 4. I did notice an error in my write-up which said you want an orthogonal matrix $P$; this is not true and I have removed it. –  Wooster Jan 12 '13 at 13:11 I see, but what to do when you cannot diagonalize? –  MSKfdaswplwq Jan 12 '13 at 13:11 If diagonalisation is not possible, then I think you are correct: Jordan normal form would be a good idea. It will give you a number of simple first-order ODEs to solve and then a simple algorithm for solving the rest. –  Wooster Jan 12 '13 at 13:17
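Both ideas in this thread — Did's closed form $e^{tA}=I+\frac{e^{4t}-1}{4}A$ (which rests on $A^2=4A$) and the diagonalisation method — can be checked numerically. A minimal sketch: the matrix below is a hypothetical stand-in, chosen only because it satisfies $A^2=4A$ and has the eigenvalues $0$ and $4$ mentioned in the comments; its eigendecomposition is written out by hand rather than computed.

```python
import math

# Hypothetical A with A^2 = 4A and eigenvalues 0 and 4:
# A = P @ diag(0, 4) @ P^{-1}, with eigenvectors (1, -1) and (1, 1).
A    = [[2.0, 2.0], [2.0, 2.0]]
P    = [[1.0, 1.0], [-1.0, 1.0]]
Pinv = [[0.5, -0.5], [0.5, 0.5]]
lam  = [0.0, 4.0]
x0   = [1.0, 0.0]

def x(t):
    # y = P^{-1} x solves the decoupled system y_i' = lam_i * y_i,
    # so y_i(t) = exp(lam_i * t) * y_i(0), and x(t) = P y(t).
    y0 = [Pinv[i][0] * x0[0] + Pinv[i][1] * x0[1] for i in range(2)]
    y  = [math.exp(lam[i] * t) * y0[i] for i in range(2)]
    return [P[i][0] * y[0] + P[i][1] * y[1] for i in range(2)]

t, h = 0.3, 1e-6

# 1) Check x'(t) = A x(t) with a central difference.
deriv = [(x(t + h)[i] - x(t - h)[i]) / (2 * h) for i in range(2)]
Ax = [A[i][0] * x(t)[0] + A[i][1] * x(t)[1] for i in range(2)]
assert all(abs(deriv[i] - Ax[i]) < 1e-3 for i in range(2))

# 2) Check the closed form e^{tA} = I + (e^{4t} - 1)/4 * A, applied to x0.
c = (math.exp(4 * t) - 1) / 4
closed = [x0[i] + c * (A[i][0] * x0[0] + A[i][1] * x0[1]) for i in range(2)]
assert all(abs(closed[i] - x(t)[i]) < 1e-9 for i in range(2))
```

If the assertions pass, the diagonalised solution, the defining property $x'=Ax$, and the closed form all agree on this one example.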
# Properties of the Euler totient function Why is it that the Euler totient function has the following condition true based on its definition? $$\phi(p^k)=p^{k-1}(p-1) = p^k(1-\frac{1}{p}) = p^k-p^{k-1}$$ A proof would be awesome and an intuition on why this is true would be even better! (to prove it I thought of using multiplicativity of the totient function but that would not work because p is not coprime to itself :( ) A more detailed explanation of the Wikipedia article will get a like and accepted answer. To get accepted, giving an explanation on why the number of multiples of p is $p^{k-1}$ will be an important factor. Also, are the multiples of p we are excluding between 1 to $p^k - 1$ or to $p^k$? Some kind of counting argument is necessary to get accepted. • en.wikipedia.org/wiki/… – PITTALUGA Jan 7 '14 at 8:12 • Look at it as $p^k - p^{k-1}$. – ShreevatsaR Jan 7 '14 at 8:23 • If you want I can accept your answers if you provide a little more detailed explanation of $p^k -p^{k-1}$ or the Wikipedia article. Specifically, I am a little unsure about the $p^{k-1}$. Thanks though, it has been helpful. – Pinocchio Jan 8 '14 at 0:22 • Bounty started to address the details of $p^{k}-p^{k-1}$. – Pinocchio Jan 10 '14 at 17:21 Any positive integer $x$ less than $p^k$ can be written in base $p$ as $$x = a_0 + a_1 p + a_2 p^2 + \cdots + a_{k-1} p^{k-1}$$ where $a_i \in \{0, 1, 2, \ldots, p-1\}$. Then $x$ is not relatively prime to $p^k$ iff $p \mid x$ iff $a_0 = 0$. Thus if we want $x$ to be relatively prime to $p^k$ we have $p-1$ choices for $a_0$ and $p$ choices for each of the other $k-1$ coefficients, hence $\varphi(p^k) = (p-1)p^{k-1}$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9811668745151689, "lm_q1q2_score": 0.85591835036074, "lm_q2_score": 0.8723473779969194, "openwebmath_perplexity": 243.2411033383979, "openwebmath_score": 0.8835917115211487, "tags": null, "url": "https://math.stackexchange.com/questions/629933/properties-of-the-euler-totient-function/636466" }
• This was such an unexpected and cute answer! kudos on that, I would never have thought of it that way. – Pinocchio Jan 13 '14 at 5:58 • For future reference, answer provided by user115654 also provides a great reference if this one did not make sense. – Pinocchio Jan 13 '14 at 6:15 • I was learning about the $p$-adic numbers when I first saw this result, so it was very natural at the time! – André 3000 Jan 13 '14 at 6:33 Here is intuition.... Take $p^2$. How many numbers in the range $1 \ldots p^2$ are not co-prime to it? They are precisely $p$, $2p$, $3p$ ... $p^2$. There are exactly $p$ of them. So the number co-prime to $p^2$ is $$\phi(p^2) = p^2 -p = p (p-1)$$ You can do the same for $p^3$ to get $$\phi(p^3) = p^3 -p^2 = p^2 (p-1)$$ You can turn this into a proof by induction if you so wish. I find it intuitive to think in terms of "what are the chances of not being coprime to $p^k$?" Once you realize that "coprime to $p^k$" is synonymous with "not divisible by $p$", it suggests the proportion of numbers up to $p^k$ which are coprime to $p^k$ is $1 - \frac1p$, motivating the quantity $p^k(1-\frac1p)$. The same reasoning applies to general $n$ (but takes more effort to justify carefully): "coprime to $n$" is synonymous with "not divisible by any of the prime factors of $n$", and if you can convince yourself that the individual probabilities are independent this suggests $\phi(n) = n\prod_{p\mid n} (1-\frac1p)$. • How do you know that the proportion of elements that are coprime to $p^k$ is $1 - \frac{1}{p}$? – Pinocchio Jan 13 '14 at 6:37 • @Pinocchio Because the proportion that are divisible by $p$ is $\frac1p$. – Erick Wong Jan 14 '14 at 4:50 Another explanation (much of which is implicit in the other answers):
A number $n$ is not coprime to $p \iff \text{gcd}(p,n) \neq 1 \iff \text{gcd}(p,n) = p \iff n$ is a multiple of $p$. Now the multiples of $p$ between $1$ and $p^k$ are precisely the numbers $mp$, for $1 \leq m \leq p^{k-1}$, of which there are $p^{k-1}$. Subtracting these from the total gives $p^k - p^{k-1} = \phi(p^k)$. • Euler's totient function counts the number of elements $\pmod p^k$ that have a $gcd(x,p^k) = 1$. Thus, $0 \leq x \leq p^k -1$ are the only candidate elements to be in the set (We don't have to exclude 0 yet, because it will be excluded by your counting argument). Thus, now you apply your counting argument and m's range is $0 \leq m \leq p^{k-1} - 1$. Which yields the correct bound but is "more" correct. Right? Also, your first sentence in your second paragraph is hard to follow, do you mind rewriting that? (Thanks btw, that was super helpful). – Pinocchio Jan 13 '14 at 5:54 • Yes, the range $1 \leq m \leq p^k$ is the same, modulo $p^k$, as the range $0 \leq m \leq p^k - 1$ (and the latter is preferable). I have changed the "iff"s to symbols, in an attempt to improve legibility – zcn Jan 13 '14 at 6:14 • If the second one is preferred then why did you write the other one? – Pinocchio Jan 13 '14 at 6:18 • Starting at 0 is preferable if your definition counts elements of $\mathbb{Z}/p^k\mathbb{Z}$, but starting at 1 is preferable if your definition counts elements of $\mathbb{N}$. I have opted for the latter, but as the two are equivalent, it really is a matter of personal choice – zcn Jan 13 '14 at 6:23 $p^k$ only shares common factors with other multiples of $p$. How many multiples of $p$ are there under $p^k$?: $\frac{p^k}{p}=p^{k-1}$ Just look at 8 for example. There are 4 multiples of 2 under and including 8. To get the totient value by definition exclude from $p^k$ all the values under it that share a common factor, of which there are $p^{k-1}$.
• I am sorry if it's extremely obvious to you, but if you provide further detail on why $p^k/p$ "works", I will be happy. It makes sense that you want to exclude elements that are multiples of p. I'm not sure why just dividing $p^k$ by p works. Is $p^k$ the number of elements between 1 to $p^k$ or the elements 1 from $p^{k-1}$? why did dividing by p "work"? Basically I am having trouble counting the number of multiples of p rigorously or precisely. – Pinocchio Jan 13 '14 at 5:37 • $p^k$ is an integer. An integer $n$ also denotes the size of the set of integers 1 through $n$, inclusive. Now starting from 1, say you jump to every multiple of $p$. You will be jumping a size of p every time, first to p, then to 2p, then to 3p.... So you have an integer $n=p^k$ and you want to see how many times you can make a jump of size p inside n. The division operation is defined to do just that. How many times does 3 fit in 12? 4 times. How many times does p fit in $p^k$? $p^{k-1}$. Does that make sense? – Flowers Jan 13 '14 at 7:06 • The sets $\{1,2,\ldots,p\}$, $\{p+1,p+2,\ldots,2p\}$, $\{2p+1,2p+2,\ldots,3p\}$, ..., $\{p^{k}-p+1,p^{k}-p+2,\ldots,p^{k}\}$ are all of size p and each one has only one multiple of p. – Flowers Jan 13 '14 at 7:15
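The counting arguments in this thread are easy to confirm by brute force. A quick sketch (the primes and exponents tried are arbitrary small values):

```python
from math import gcd

def phi_brute(n):
    # Count 1 <= x <= n with gcd(x, n) == 1 (x = n itself is never
    # coprime to n when n > 1, so including it is harmless).
    return sum(1 for x in range(1, n + 1) if gcd(x, n) == 1)

for p in (2, 3, 5, 7):
    for k in (1, 2, 3):
        n = p**k
        # Multiples of p in 1..p^k: p, 2p, ..., p^(k-1)*p -- exactly p^(k-1).
        multiples = sum(1 for x in range(1, n + 1) if x % p == 0)
        assert multiples == p**(k - 1)
        assert phi_brute(n) == p**k - p**(k - 1) == p**(k - 1) * (p - 1)
```

Both assertions encode the same fact discussed above: exactly $p^{k-1}$ of the integers $1,\dots,p^k$ are multiples of $p$, and removing them leaves $\varphi(p^k)=p^{k-1}(p-1)$.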
# Finding the sum of a series 1. Jun 11, 2005 ### steven187 hello all you know we have all these tests for convergence of a series, but it always made me wonder if there exists any other method of finding the sum of a series, like we have the geometric series in which we are able to find the sum by a simple formula, but are we able to find such a formula for any series?, would such a formula be unique to each particular series?, or to a group of such series? for example $$\sum_{n=1}^{\infty}\frac {(-2)^{n}}{n+1}$$ this isnt a geometric series then how would you find the sum of such series? is it possible to derive a formula for any series that exist, or are there limitations or conditions that need to be satisfied? 2. Jun 11, 2005 ### shmoe There is no sure fire general way to find the sum of an infinite series in a 'nice' form. If a series converges and you have enough time on your hands you can find as many decimal places as you like, but an exact, nice looking solution is not always possible (indeed your series might not even converge to something 'nice'). The series you've provided doesn't converge by the way, the terms don't even go to 0 (a necessary but not sufficient condition). 3. Jun 11, 2005 ### steven187 so your sayin that it is possible in some cases to derive a formula for a series besides a geometric series but in most cases its not possible? , and about that series it was suppose to be $$\sum_{n=1}^{\infty}\frac {(-2)^{-n}}{n+1}$$ like would i be able to derive a formula for this series? or how would i be able to find the sum of such a series? 4. Jun 11, 2005 ### saltydog It's a very interesting question. There are various techniques for solving such. For example, $$\sum_{n=1}^{\infty} \frac{1}{n^2}=\frac{\pi^2}{6}$$ This can be solved by using Fourier coefficients and Parseval's Theorem. $$\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}k=ln(2)$$ is shown by considering: $$f(x)=ln(1+x)$$
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9811668662546613, "lm_q1q2_score": 0.8559183366422911, "lm_q2_score": 0.8723473713594992, "openwebmath_perplexity": 909.2142263740213, "openwebmath_score": 0.848241925239563, "tags": null, "url": "https://www.physicsforums.com/threads/finding-the-sum-of-a-series.78754/" }
and differentiating and considering the Taylor series (thanks Daniel). $$\sum_{k=1}^{\infty}\frac{1}{4k(2k-1)}$$ is solved by considering the sum: $$S=\frac{1}{2}[(1-1/2)+(1/3-1/4)+(1/5-1/6)+...$$ and: $$\sum_{n=0}^{\infty} ne^{-an}$$ is solved by considering: $$z=\sum_{n=0}^{\infty}w^n$$ with: $$w=e^{-a}$$ and differentiating both the sum and the expression for the sum of the corresponding geometric series with respect to w. Tons more I bet. Would be nice to have a compilation of the various methods for calculating infinite sums. 5. Jun 11, 2005 ### shmoe Yes. saltydog's given numerous examples. Telescoping series is another handy one, so is recognizing your sum as a known power series, rearranging terms sometimes helps, and more. Relate it to the power series for log(1+x). 6. Jun 12, 2005 ### HallsofIvy Staff Emeritus Sometimes you can recognize a series as a special case of a Taylor's series for a function. To sum the particular series you give here, I would recall that the Taylor's series for ln(x+1) is $$\sum_{n=1}^{\infty}(-1)^n \frac{x^n}{n}$$ (and converges for -1< x< 1). That differs from your series on having n+1 in the denominator: Okay, change the numbering slightly. If we let i= n+1 then n= i-1 and your series becomes $$\sum_{i=2}^{\infty}\frac{(-2)^{-i+1}}{i}$$. I can just take that $(-2)^1$ out of the entire series: $$(-2)\sum_{i=2}^{\infty}\frac{(-2)^{-i}}{i}$$. We're almost there! Since I need $(-2)^{-i}$ instead of $x^i$, take x= -1/2 which is in the radius of convergence. The fact that the sum now starts at i= 2 instead of 1 is not a problem: just calculate that separately- when i= 0 the term is $(-2)^0/1= -2$: Your sum is $$(-2)\sum_{i=1}^{\infty}\frac{(-1)^i}{i}+ 2= (-2)ln(-1/2+ 1)+ 2= -2ln(1/2)+ 2= 2+ 2 ln(2)= 2(1+ ln(2))$$. Last edited: Jun 12, 2005 7. Jun 12, 2005 ### saltydog Using Hall's method I obtained a different answer: For this sum:
$$\sum_{n=1}^{\infty}\frac{(-2)^{-n}}{n+1}$$ Consider: $$ln(1+x)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}x^n}{n}\quad\text{for}\quad-1<x<1$$ Letting $x=\frac{1}{2}$ $$ln(3/2)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{2^n n}$$ Letting i=n-1 we obtain: \begin{align*} ln(3/2)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{2^n n}&=\sum_{i=0}^{\infty}\frac{(-1)^{i+2}}{2^{i+1} (i+1)}\\ &=\frac{1}{2}+\sum_{i=1}^{\infty}\frac{(-1)^2 (-1)^i}{(2)2^i(i+1)}\\ &=\frac{1}{2}+\frac{1}{2}\sum_{i=1}^{\infty}\frac{(-1)^i}{2^i(i+1)}\\ &=\frac{1}{2}+\frac{1}{2}\sum_{i=1}^{\infty}\frac{(-2)^{-i}}{i+1} \end{align*} Solving for the series, I obtain: $$\sum_{i=1}^{\infty}\frac{(-2)^{-i}}{(i+1)}=-1+2ln(3)-2ln(2)$$ This result agrees with Mathematica. 8. Jun 12, 2005 ### shmoe This is correct, at least it's also what I had. Halls has a couple small mistakes, he has the series for -log(1+x), not log(1+x), and used x=-1/2 when it should have been x=1/2. Also the 'missing term' is then $$(-2)(1/2)^1/1=-1$$. Notice it's an alternating series with decreasing terms and the first term is negative, therefore the sum must also be negative. You could go further if you like, the sum must be larger than $$-\frac{1}{2\times 2}=-\frac{1}{4}$$ and smaller than $$-\frac{1}{2\times 2}+\frac{1}{2^2\times 3}=-\frac{1}{6}$$, and so on if you wanted some more reassurance that your answer was correct (an alternating series with decreasing terms will hop back and forth between being greater and less than the sum). Of course this sort of estimate won't prove your answer is correct, but can catch mistakes. Last edited: Jun 12, 2005 9. Jun 13, 2005 ### steven187 hello all
now I realise that methods of finding the sum of an infinite series are limited, so all we have is telescoping, geometric series, rearranging terms (could someone give me an example of this thanxs), manipulation, Fourier series (which I can barely remember) is there any more? also what is the name of the method saltydog was originally using? I didn't really understand it but I have to admit anything unusual is interesting, saltydog please help? thanxs well after making up a few unusual series and trying to derive the sum, for the majority of them I couldn't derive anything except for one of them, and here it is please update me on what you think? $$\sum_{k=0}^{\infty}\binom{k}{r}x^{k-r}=\frac{1}{(1-x)^{r+1}}$$ took me ages to get this result, I shall call it the steven zeta series lol. does anybody have any other examples of such results? Last edited: Jun 13, 2005 10. Jun 13, 2005 ### saltydog Steven . . . I struggle with these and a lot of other things in the forum. I made sure to give Hall credit above for that method. He suggested the technique and I followed through. I take 1 point credit for that work. Really I suspect there is some compilation which goes into more detail with sums. I would expect that even infinite sums could/should/would be arranged into groupings similar to integrals and differential equations and solved using specific technique applicable to membership in a particular class like we do for integrals. Anyway, I think if someone intensively investigated them this could be done to some measure of success. Interesting problems you bring up here though. 11. Jun 13, 2005 ### steven187 hello all well I didn't really understand what you meant by this, what is the problem being solved here, or what method is it?
well now I've been searching the net for ages to find some kind of summary of all the different types of series, methods of finding the sum of a series, and special cases of series with their sums like the power, Euler, Laurent, alternating, arithmetic, hypergeometric series, Maclaurin, binomial, Taylor, harmonic, Riemann, geometric, Fourier, pathological, Asymptotic Series, Dirichlet L-series, Multiple Series, Hyperasymptotic Series and Superasymptotic Series (I don't think there are any more) oh yes and the steven zeta series, well out of hours of searching I only found one document but that wasn't even good enough, does anybody have any links?, well saltydog I hope that there is some kind of compilation or else it looks like I might have to compile my own, well as to the original problem this is how I did it, enjoy $$\int_0^x\sum_{n=0}^{\infty}z^n dz=\int_0^x \frac{1}{1-z} dz$$ $$\sum_{n=0}^{\infty}\int_0^x z^n dz=\int_0^x \frac{1}{1-z} dz$$ $$\sum_{n=0}^{\infty}\frac{x^{n+1}}{n+1}=-\log{(1-x)}$$ now we substitute $$x=\frac{-1}{2}$$ in which after manipulating it we get $$\sum_{n=1}^{\infty}\frac{(-2)^{-n}}{(n+1)}=-1+2\log[\frac{3}{2}]$$ Last edited: Jun 13, 2005 12. Jun 13, 2005 ### TenaliRaman I believe you realise that your first few statements are to be justified even though they are correct. -- AI 13. Jun 14, 2005 ### saltydog Thanks Steven. That's very nice. Tenali, you mind explaining what needs to be justified? 14. Jun 14, 2005 ### steven187 hello thanxs saltydog, well about what needs to be justified is that to swap the summation sign and the integral sign it needs to satisfy the conditions of the dominated convergence theorem i dont think there is anything else to justify Steven Last edited: Jun 14, 2005
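The closed form derived in this thread is easy to sanity-check numerically, in the spirit of shmoe's alternating-series bounds (note that $-1+2\log\frac{3}{2}$ equals the $-1+2\ln 3-2\ln 2$ obtained earlier):

```python
import math

# Partial sum of sum_{n>=1} (-2)^{-n} / (n+1); terms shrink like 2^{-n},
# so 200 terms are far more than enough at double precision.
partial = sum((-2.0)**(-n) / (n + 1) for n in range(1, 200))

closed = -1 + 2 * math.log(1.5)                 # -1 + 2 log(3/2)
assert abs(partial - closed) < 1e-12
# Same number written as in saltydog's answer:
assert abs(closed - (-1 + 2 * math.log(3) - 2 * math.log(2))) < 1e-14
# shmoe's alternating-series bracket: between -1/4 and -1/6.
assert -1 / 4 < closed < -1 / 6
```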
# Evaluating $\frac{1}{2 \pi i} \int_{|z|=3} \frac{e^{\pi z}}{z^2(z^2+2z+2)}dz$ I need some help with Complex Analysis: To evaluate the integral $$\frac{1}{2 \pi i} \int_{|z|=3} \frac{e^{\pi z}}{z^2(z^2+2z+2)}dz$$ Here is what I tried: So, $f(z)=\frac{e^{\pi z}}{z^2(z^2+2z+2)}$ has 3 singularities, $z_0 = 0$, $z_0= -1+i$ and $z_0=-1-i$. And all of them are interior to the contour. $\Rightarrow$ We need to find the residues at all the 3 points. $$Res_{z_0=0}(f(z)) = Res_{z_0=0} \left (\frac{e^{\pi z}}{z^2(z^2+2z+2)} \right)\\=\frac{1}{2}(\pi-1)$$ $$Res_{z_0=-1+i}(f(z)) = Res_{z_0=-1+i} \left (\frac{e^{\pi z}}{z^2(z^2+2z+2)} \right)\\=-\frac{e^{-\pi}}{4}$$ $$Res_{z_0=-1-i}(f(z)) = Res_{z_0=-1-i} \left (\frac{e^{\pi z}}{z^2(z^2+2z+2)} \right)\\=-\frac{e^{-\pi}}{4}$$ $$\Rightarrow \frac{1}{2\pi i}\int_{|z|=3} \frac{e^{\pi z}}{z^2(z^2+2z+2)}dz = \frac{1}{2 \pi i} \times \left[ 2\pi i \left(\frac{1}{2}(\pi -1) - \frac{e^{-\pi}}{4} - \frac{e^{-\pi}}{4}\right)\right]\\ = \left( \frac{1}{2}(\pi-1)-\frac{e^{-\pi}}{2}\right)\\ = \frac{\pi-1-e^{-\pi}}{2}$$ But I was told that this was wrong. I couldn't find any mistakes for my work. Can any of you check to see if my work is valid? Or, if it is wrong, how else can I evaluate this? Any helps or comments would be appreciated. Can someone provide a valid solution (or a different approach)to evaluate this integral? Because some comments say it is wrong. But, some say it is correct. I am confused.... • the pole at $z=0$ is double – G Cab Aug 23 '17 at 22:36 • Will it make any difference? Since pole is z=o and z=0. for z^2 – SirBanana Aug 23 '17 at 22:41 • It does matter, cuz the formulas differ – Brevan Ellefsen Aug 23 '17 at 22:48 • Yes, but its "influence" in the "surrounding" is "double" – G Cab Aug 23 '17 at 22:49 So checking the residues...
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9811668684574637, "lm_q1q2_score": 0.8559183320514828, "lm_q2_score": 0.8723473647220786, "openwebmath_perplexity": 301.51941221523805, "openwebmath_score": 0.8723546266555786, "tags": null, "url": "https://math.stackexchange.com/questions/2404030/evaluating-frac12-pi-i-int-z-3-frace-pi-zz2z22z2dz" }
The pole at $z=0$ is double, so (in my opinion) it's easier to find the residue via the series expansion than with the explicit formula. Factoring the denominator, the integrand is $\frac{e^{\pi z}}{z^2(z+1-i)(z+1+i)}$. The series for $e^{\pi z}$ is just $1 + \pi z + \frac{\pi^2 z^2}{2!} + ...$ The other factors except for $\frac{1}{z^2}$ are of the form $\frac{1}{a + z}$, which has the series representation $\frac{1}{a} - \frac{z}{a^2}+...$ So the integrand, as a product of series, is $\frac{1}{z^2}(1 + \pi z + ...)(\frac{1}{1-i} - \frac{z}{(1-i)^2} + ...)(\frac{1}{1+i} - \frac{z}{(1+i)^2} + ...)$ Since we're looking for the residue, the only term we care about is $z^{-1}$. Since $\frac{1}{z^2}$ is the only factor with negative exponents, we only need the first two terms of each series. Finding the terms that would have order $-1$, we find the coefficient to be $\frac{\pi}{(1+i)(1-i)} - \frac{1}{(1-i)^2(1+i)} - \frac{1}{(1+i)^2(1-i)}$. The first term is $\frac{\pi}{2}$, and the second and third terms combine to $-\frac{1}{2(1+i)} - \frac{1}{2(1-i)} = -\frac{1-i}{4} - \frac{1+i}{4} = -\frac{1}{2}$. So the first residue is (unless I've also made a horrible mistake) $\frac{\pi - 1}{2}$, as you found. Now for the next two residues: since these are both simple poles, they can be evaluated easily with the explicit formula. The second residue should be $\frac{e^{\pi z}}{z^2(z+1-i)}$ evaluated at $z = -1 - i$, and the third is similar. Plugging in, the second residue is $\frac{e^{\pi (-1 - i)}}{(-1-i)^2(-1-i+1-i)} = \frac{e^{-\pi}e^{-i\pi}}{(2i)(-2i)}$ which is $-\frac{e^{-\pi}}{4}$. The third is similar, and I also found it matches what you have. Maybe we both made the same mistakes, but your evaluation looks right to me. I do not see anything wrong with your calculation.
But Maple gets $$\frac{\pi}{2} - \frac{1}{2} - \frac{1}{4e^{\pi}}$$ so the last term disagrees with yours. But numerically in Maple, the same integral agrees with your answer, not Maple's symbolic answer. Some sort of bug in Maple, I guess. (Maple 2015 build 1097895) • This just confuses me even more.... – SirBanana Aug 23 '17 at 23:41
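The discrepancy can be settled by integrating numerically over the contour: parametrize $z=3e^{i\theta}$ and apply the trapezoid rule, which converges extremely fast for smooth periodic integrands. This sketch supports the hand computation $\frac{\pi-1-e^{-\pi}}{2}$ rather than Maple's symbolic $\frac{\pi}{2}-\frac{1}{2}-\frac{1}{4e^{\pi}}$ (the two differ by $e^{-\pi}/4 \approx 0.0108$, far above the quadrature error here):

```python
import cmath
import math

def f(z):
    return cmath.exp(math.pi * z) / (z**2 * (z**2 + 2 * z + 2))

# Trapezoid rule on z = 3 e^{i theta}, dz = 3i e^{i theta} d(theta).
N = 20000
total = 0j
for j in range(N):
    theta = 2 * math.pi * j / N
    z = 3 * cmath.exp(1j * theta)
    dz = 3j * cmath.exp(1j * theta) * (2 * math.pi / N)
    total += f(z) * dz

value = total / (2j * math.pi)            # (1 / 2 pi i) * contour integral
expected = (math.pi - 1 - math.exp(-math.pi)) / 2

assert abs(value.imag) < 1e-8             # the integral is real
assert abs(value.real - expected) < 1e-6  # matches the residue computation
```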
# What is the difference between "expectation", "variance" for statistics versus probability textbooks? It seems that there are two ideas of expectation, variance, etc. going on in our world. In any probability textbook: I have a random variable $$X$$, which is a function from the sample space to the real line. Ok, now I define the expectation operator, which is a function that maps this random variable to a real number, and this function looks like, $$\mathbb{E}[X] = \sum\limits_{i = 1}^n x_i p(x_i)$$ where $$p$$ is the probability mass function, $$p: x_i \mapsto [0,1], \sum_{i = 1}^n p(x_i) = 1$$ and $$x_i \in \text{range}(X)$$. The variance is, $$\mathbb{E}[(X - \mathbb{E}[X])^2]$$ The definition is similar for a continuous RV. However, in statistics, data science, finance, bioinformatics (and I guess everyday language when talking to your mother) I have a multi-set of data $$D = \{x_i\}_{i = 1}^n$$ (weight of onions, height of school children). The mean of this dataset is $$\dfrac{1}{n}\sum\limits_{i= 1}^n x_i$$ The variance of this dataset (according to "science buddy" and "mathisfun dot com" and government of Canada) is, $$\dfrac{1}{n}\sum\limits_{i= 1}^n(x_i - \sum\limits_{j= 1}^n \dfrac{1}{n} x_j)^2$$ I mean, I can already see what's going on here (one is assuming uniform distribution), however, I want an authoritative explanation on the following: 1. Is the distinction real? Meaning, is there a universe where expectation/mean/variance... are defined for functions/random variables and another universe where expectation/mean/variance... are defined for raw data? Or are they essentially the same thing (with hidden/implicit assumptions)? 2. Why is it that no probabilistic assumption is made when talking about mean or variance when it comes to dealing with data in statistics or data science (or other areas of real life)?
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9664104933824753, "lm_q1q2_score": 0.8558835539435348, "lm_q2_score": 0.8856314783461303, "openwebmath_perplexity": 271.8597849264751, "openwebmath_score": 0.8698630928993225, "tags": null, "url": "https://math.stackexchange.com/questions/3959113/what-is-the-difference-between-expectation-variance-for-statistics-versus-p/3959120" }
3. Is there some consistent language for distinguishing these two seemingly different mean and variance terminologies? For example, if my cashier asks me about the "mean weight" of two items, do I ask him/her for the probabilistic distribution of the random variable whose realizations are the weights of these two items (def 1), or do I just add up the value and divide (def 2)? How do I know which mean the person is talking about? • There's no probabilistic assumption about the sample mean or sample variance because they're just numbers computed from the sample data. If you measure the heights of $100$ people, you have $100$ numbers and you can do whatever computations you like with them. Dec 23, 2020 at 5:07 • Can you explain what you mean that 'uniform distribution' was assumed? Isn't p(x_i)=lim n->infty #occurrences of x_i/n ? So an approximation of p(x_i) is #occurrences of x_i /n. So 1/n sum x_i = 1/n sum_{x_i} x_i #occurrences of x_i = sum x_i p(x_i). Of course p(x_i) is only an approximation of p(x_i) in statistics. I do not see how uniformity was used here Dec 23, 2020 at 16:25 • "Why is it no probabilistic assumption is made when talking about mean or variance when it comes to dealing with data in statistics or data science " Can you be more clear as to what assumption you think is made elsewhere, but isn't made in data science? Dec 24, 2020 at 20:46 • @lalala If you take the average of $n$ values, that is equal to the expected value of a random variable that has a uniform distribution across those values. Dec 24, 2020 at 20:47 You ask a very insightful question that I wish were emphasized more often. EDIT: It appears you are seeking reputable sources to justify the above. Sources and relevant quotes have been provided. Here's how I would explain this:
• In probability, the emphasis is on population models. You have assumptions that are built-in for random variables, and can do things like saying that "in this population following such distribution, the probability of this value is given by the probability mass function." • In statistics, the emphasis is on sampling models. With most real-world data, you do not have access to the data-generating process governed by the population model. Probability provides tools to make guesses on what the data-generating process might be. But there is always some uncertainty behind it. We therefore attempt to estimate characteristics about the population given data. From Wackerly et al.'s Mathematical Statistics with Applications, 7th edition, chapter 1.6: The objective of statistics is to make an inference about a population based on information contained in a sample taken from that population... A necessary prelude to making inferences about a population is the ability to describe a set of numbers... The mechanism for making inferences is provided by the theory of probability. The probabilist reasons from a known population to the outcome of a single experiment, the sample. In contrast, the statistician utilizes the theory of probability to calculate the probability of an observed sample and to infer this from the characteristics of an unknown population. Thus, probability is the foundation of the theory of statistics. From Shao's Mathematical Statistics, 2nd edition, section 2.1.1:
From Shao's Mathematical Statistics, 2nd edition, section 2.1.1: In statistical inference... the data set is viewed as a realization or observation of a random element defined on a probability space $$(\Omega, \mathcal{F}, P)$$ related to the random experiment. The probability measure $$P$$ is called the population. The data set or random element that produces the data is called a sample from $$P$$... In a statistical problem, the population $$P$$ is at least partially unknown and we would like to deduce some properties of $$P$$ based on the available sample. So, the probability formulas of the mean and variance assume you have sufficient information about the population to calculate them. The statistics formulas for the mean and variance are attempts to estimate the population mean and variance, given a sample of data. You could estimate the mean and variance in any number of ways, but the formulas you've provided are some standard ways of estimating the population mean and variance. Now, one logical question is: why do we choose those formulas to estimate the population mean and variance? For the mean formula you have there, one can observe that if you assume that your $$n$$ observations can be represented as observed values of independent and identically distributed random variables $$X_1, \dots, X_n$$ with mean $$\mu$$, $$\mathbb{E}\left[\dfrac{1}{n}\sum_{i=1}^{n}X_i \right] = \mu$$ which is the population mean. We say then that $$\dfrac{1}{n}\sum_{i=1}^{n}X_i$$ is an "unbiased estimator" of the population mean. From Wackerly et al.'s Mathematical Statistics with Applications, 7th edition, chapter 7.1: For example, suppose we want to estimate a population mean $$\mu$$. If we obtain a random sample of $$n$$ observations $$y_1, y_2, \dots, y_n$$, it seems reasonable to estimate $$\mu$$ with the sample mean $$\bar{y} = \dfrac{1}{n}\sum_{i=1}^{n}y_i$$
The goodness of this estimate depends on the behavior of the random variables $$Y_1, Y_2, \dots, Y_n$$ and the effect this has on $$\bar{Y} = (1/n)\sum_{i=1}^{n}Y_i$$. Note. In statistics, it is customary to use lowercase $$x_i$$ to represent observed values of random variables; we then call $$\frac{1}{n}\sum_{i=1}^{n}x_i$$ an "estimate" of the population mean (notice the difference between "estimator" and "estimate"). For the variance estimator, it is customary to use $$n-1$$ in the denominator, because if we assume the random variables have finite variance $$\sigma^2$$, it can be shown that $$\mathbb{E}\left[\dfrac{1}{n-1}\sum_{i=1}^{n}\left(X_i - \dfrac{1}{n}\sum_{j=1}^{n}X_j \right)^2 \right] = \sigma^2\text{.}$$ Thus $$\dfrac{1}{n-1}\sum_{i=1}^{n}\left(X_i - \dfrac{1}{n}\sum_{j=1}^{n}X_j \right)^2$$ is an unbiased estimator of $$\sigma^2$$, the population variance. It is also worth noting that the formula you have there has expected value $$\dfrac{n-1}{n}\sigma^2$$ and $$\dfrac{n-1}{n} < 1$$ so on average, it will tend to underestimate the population variance. From Wackerly et al.'s Mathematical Statistics with Applications, 7th edition, chapter 7.2: For example, suppose that we wish to make an inference about the population variance $$\sigma^2$$ based on a random sample $$Y_1, Y_2, \dots, Y_n$$ from a normal population... a good estimator of $$\sigma^2$$ is the sample variance $$S^2 = \dfrac{1}{n-1}\sum_{i=1}^{n}(Y_i - \bar{Y})^2\text{.}$$ The estimators for the mean and variance above are examples of point estimators. From Casella and Berger's Statistical Inference, Chapter 7.1:
The rationale behind point estimation is quite simple. When sampling is from a population described by a pdf or pmf $$f(x \mid \theta)$$, knowledge of $$\theta$$ yields knowledge of the entire population. Hence, it is natural to seek a method of finding a good estimator of the point $$\theta$$, that is, a good point estimator. It is also the case that the parameter $$\theta$$ has a meaningful physical interpretation (as in the case of a population mean) so there is direct interest in obtaining a good point estimate of $$\theta$$. It may also be the case that some function of $$\theta$$, say $$\tau(\theta)$$, is of interest. There is, of course, a lot more that I'm ignoring for now (and one could write an entire textbook, honestly, on this topic), but I hope this clarifies things. Note. I know that many textbooks use the terms "sample mean" and "sample variance" to describe the estimators above. While "sample mean" tends to be very standard terminology, I disagree with the use of "sample variance" to describe an estimator of the variance; some use $$n - 1$$ in the denominator, and some use $$n$$ in the denominator. Also, as I mentioned above, there are a multitude of ways that one could estimate the mean and variance; I personally think the use of the word "sample" used to describe such estimators makes it seem like other estimators don't exist, and is thus misleading in that way.
## In Common Parlance
This answer is informed primarily by my practical experience in statistics and data analytics, having worked in the fields for about 6 years as a professional. (As an aside, I find one serious deficiency with statistics and data analysis books is that they provide mathematical theory without showing how to approach problems in practice.)
Is there some consistent language for distinguishing these two seemingly different mean and variance terminologies? For example, if my cashier asks me about the "mean weight" of two items, do I ask him/her for the probabilistic distribution of the random variable whose realization are the weights of these two items (def 1), or do I just add up the value and divide (def 2)? How do I know which mean the person is talking about? In most cases, you want to just stick with the statistical definitions. Most people do not think of statistics as attempting to estimate quantities relevant to a population, and thus are not thinking "I am trying to estimate a population quantity using an estimate driven by data." In such situations, people are just looking for summaries of the data they've provided you, known as descriptive statistics. The whole idea of estimating quantities relevant to a population using a sample is known as inferential statistics. While (from my perspective) most of statistics tends to focus on statistical inference, in practice, most people - especially if they've not had substantial statistical training - do not approach statistics with this mindset. Most people whom I've worked with think "statistics" is just descriptive statistics. In descriptive data analysis, a few summary measures may be calculated, for example, the sample mean... and the sample variance... However, what is the relationship between $$\bar{x}$$ and $$\theta$$ [a population quantity]? Are they close (if not equal) in some sense? The sample variance $$s^2$$ is clearly an average of squared deviations of $$x_i$$'s from their mean. But, what kind of information does $$s^2$$ provide?... These questions cannot be answered in descriptive data analysis. ## Other remarks about the sample mean and sample variance formulas Let $$\bar{X}_n$$ and $$S^2_n$$ denote the sample mean and sample variance formulas provided earlier. The following are properties of these estimators:
• They are unbiased for $$\mu$$ and $$\sigma^2$$, as explained earlier. This is a relatively simple probability exercise.
• They are consistent for $$\mu$$ and $$\sigma^2$$. Since you know measure theory, assume all random variables are defined over a probability space $$(\Omega, \mathcal{F}, P)$$. It follows that $$\bar{X}_n \overset{P}{\to} \mu$$ and $$S^2_n \overset{P}{\to} \sigma^2$$, where $$\overset{P}{\to}$$ denotes convergence in probability, also known as convergence with respect to the measure $$P$$. See https://math.stackexchange.com/a/1655827/81560 for the sample variance (observe that the estimator with the $$n$$ in the denominator is used here; simply multiply by $$\dfrac{n-1}{n}$$ and apply a result by Slutsky) and Proving a sample mean converges in probability to the true mean for the sample mean. As a stronger result, convergence is almost sure with respect to $$P$$ in both cases (Sample variance converge almost surely).
• If one assumes $$X_1, \dots, X_n$$ are independent and identically distributed based on a normal distribution with mean $$\mu$$ and variance $$\sigma^2$$, one has that $$\dfrac{\sqrt{n}(\bar{X}_n - \mu)}{\sqrt{S_n^2}}$$ follows a $$t$$-distribution with $$n-1$$ degrees of freedom, which converges in distribution to a normally-distributed random variable with mean $$0$$ and variance $$1$$. This is a modification of the central limit theorem.
• If one assumes $$X_1, \dots, X_n$$ are independent and identically distributed based on a normal distribution with mean $$\mu$$ and variance $$\sigma^2$$, $$\bar{X}_n$$ and $$S^2_n$$ are uniformly minimum-variance unbiased estimators (UMVUEs) for $$\mu$$ and $$\sigma^2$$ respectively. It also follows that $$\bar{X}_n$$ and $$S^2_n$$ are independent, through - as mentioned by Michael Hardy - showing that $$\text{Cov}(\bar{X}_n, X_i - \bar{X}_n) = 0$$ for each $$i = 1, \dots, n$$, or as one can learn from more advanced statistical inference courses, an application of Basu's Theorem (see, e.g., Casella and Berger's Statistical Inference). • I believe the expressions for mean and variance estimators should be accompanied with a word or two on the central limit theorem. Dec 23, 2020 at 5:52 • You wrote: "If one assumes $X_1,\ldots,X_n$ are normally distributed, $\bar X_n$ and $S_n^2$ are" UMVUEs. (This after assuming they are i.i.d.) I would add that under those assumptions $\bar X_n$ and $S_n^2$ are independent, and that is used in deriving the t-distribution. One of the quickest ways to show that involves showing that $\operatorname{cov} \left(\,\overline X, X_i-\overline X\,\right)=0.$ When $U,V$ are both linear combinations of i.i.d. normals, then their covariance is $0$ only if they are independent. Dec 26, 2020 at 21:47 • @MichaelHardy Thanks for correcting me; I've put in the edits. Dec 26, 2020 at 22:08 • @Clarinetist : I see you say it's "through an application of Basu's theorem," but nothing so advanced as Basu's theorem is needed, since the method described in my comment above is enough. It seems misleading to tacitly suggest that you can't learn to prove that proposition until you learn Basu's theorem. Dec 26, 2020 at 22:11 • @MichaelHardy Thanks, I've corrected that as well. Dec 26, 2020 at 22:13
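The unbiasedness properties listed above (and the downward bias of the $$n$$-denominator variance formula) are easy to check empirically. Below is a minimal simulation sketch, assuming a hypothetical normal population with $$\mu = 5$$ and $$\sigma = 2$$; the parameters and sample sizes are made up for illustration:

```python
import random
import statistics

random.seed(0)

mu, sigma = 5.0, 2.0     # hypothetical population parameters
n, trials = 30, 10_000   # sample size and number of repeated samples

means, vars_n_minus_1, vars_n = [], [], []
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    means.append(xbar)
    vars_n_minus_1.append(ss / (n - 1))  # unbiased estimator of sigma^2
    vars_n.append(ss / n)                # biased: expectation is (n-1)/n * sigma^2

print(statistics.mean(means))           # close to mu = 5
print(statistics.mean(vars_n_minus_1))  # close to sigma^2 = 4
print(statistics.mean(vars_n))          # close to (29/30) * 4, i.e. below 4
```

Averaging the estimates over many repeated samples approximates their expected values, which is exactly what the unbiasedness statements are about.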
The first definitions you gave are correct and standard, and statisticians and data scientists will agree with this. (These definitions are given in statistics textbooks.) The second set of quantities you described are called the "sample mean" and the "sample variance", not mean and variance. Given a random sample from a random variable $$X$$, the sample mean and sample variance are natural ways to estimate the expected value and variance of $$X$$.
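Concretely, both estimates are one-liners once data is in hand. A sketch with a hypothetical sample drawn from an exponential distribution with rate $$0.5$$ (so $$E[X] = 2$$ and $$\operatorname{Var}[X] = 4$$):

```python
import random
import statistics

random.seed(3)

# Hypothetical random sample from X ~ Exponential(rate 0.5): E[X] = 2, Var[X] = 4.
sample = [random.expovariate(0.5) for _ in range(10_000)]

print(statistics.mean(sample))      # sample mean: estimates E[X] = 2
print(statistics.variance(sample))  # sample variance (n-1 denominator): estimates Var[X] = 4
```

Note that Python's `statistics.variance` uses the $$n - 1$$ denominator; `statistics.pvariance` is the $$n$$-denominator version.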
• Thanks! So the distinction is one is "sample" mean/variance and the other is mean/variance as it is. And also those formulas are estimates of the value of the mean and variance. But isn't it true that when people are working with them in practice, say using some data science package such as Scikit-Learn, the underlying probabilistic assumption (such as the distribution of the data) is dropped as if nothing ever happened? I have never seen anyone writing documentation mentioning the probabilistic definition. I think this causes confusion for young practitioners, don't you think? Dec 23, 2020 at 5:39
• Imagine being in the shoes of a young data scientist who just took a course on Lebesgue integration or measure theory. You spent 4 months talking about these abstract definitions, and then integrating really hard integrals just to get the mean, using various tricks just to make these integrals tractable. You get your first job and your boss tells you just to add stuff up and divide. Isn't this confusing? Dec 23, 2020 at 5:41
• @Norman My response to your comments here is that probability is not a statistics class, and trying to act as if that is the case would be lying. Additionally, most managers have not taken courses on, nor understand, the difference between probability and statistics, and are focused solely on descriptive statistics to understand empirically the behavior of their data. They calculate a sample average because that's what they habitually have done, and they think it is an adequate summary of the data. Dec 23, 2020 at 13:25
• @Norman I could go on a very long rant about how poorly taught statistics is in many schools and how very little guidance is given on how to approach statistical problems in practice, but I will avoid that for now. Dec 23, 2020 at 13:27
• @Norman I think it can be a source of confusion that people often say mean or variance when they really mean "sample mean" or "sample variance". One of the challenges of coming into the engineering world from a pure math background is that often people speak much less precisely than one is used to. Dec 23, 2020 at 15:23
Other answers — particularly Clarinetist’s — give excellent rundowns of the most important side of the answer. Given a random variable, we can sample it, and use the sample mean (defined in the statistical sense) to estimate the actual mean of the random variable (defined in the sense of probability theory), and similarly for variance, etc. But the connection in the other direction doesn’t seem to have been mentioned yet. This is not as important, but it’s much more straightforward, and worth pointing out. Given a sample, i.e. a finite multiset of values $$\{x_i\}_{i \in I}$$, we can “consider this as a distribution”, i.e. take a random variable $$X$$, with value $$x_i$$ for $$i$$ distributed uniformly over $$I$$. Then the mean, variance, etc. of $$X$$ (in the sense of probability theory) will be precisely the mean, variance, etc. of the original multiset (defined in the statistical sense). The general expression for arithmetic mean is $$\frac{\sum\limits_{i= 1}^n w_i x_i}{\sum\limits_{i= 1}^n w_i}$$, or even more generally, $$\frac{\int w(t) f(t)dt}{\int f(t)dt}$$ (there are then ways to recover the discrete case from that). If you set all $$w_i$$ to $$1$$, or really to anything as long as it's constant, you get $$\dfrac{1}{n}\sum\limits_{i= 1}^n x_i$$. This is referred to as an "unweighted" average, although technically it's still weighted, it's just that you're multiplying everything by $$1$$, so you don't notice it. If you set $$p_i = \frac{w_i}{ \sum\limits_{k= 1}^n w_k}$$, and interpret $$p_i$$ as the probability of the $$i$$th event, then you get the average weighted by probability, which is also known as expected value.
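These reductions are mechanical enough to show in a few lines. A sketch (the values and weights below are made up for illustration):

```python
def weighted_mean(xs, ws):
    """General weighted arithmetic mean: sum(w_i * x_i) / sum(w_i)."""
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

xs = [1.0, 2.0, 4.0]   # made-up values

# Constant weights (any constant) give the ordinary "unweighted" average.
print(weighted_mean(xs, [1, 1, 1]))   # 7/3, same as sum(xs) / len(xs)
print(weighted_mean(xs, [7, 7, 7]))   # identical: the constant cancels

# Weights summing to 1 act as probabilities, giving the expected value.
print(weighted_mean(xs, [0.5, 0.3, 0.2]))   # 0.5*1 + 0.3*2 + 0.2*4 = 1.9
```

The denominator is what makes the constant cancel: multiplying every weight by the same factor multiplies the numerator and denominator alike.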
You have to be careful about "unweighted" averages, as they often actually are weighted, but by weights that you didn't want. For instance, suppose you want the average income over the US, and you have the average income for each state individually. You could just add all of those averages together, and then divide by $$50$$. People will often call this the "unweighted" or "simple" average, but you're actually weighting people by the reciprocal of their state's population; the fewer people there are in a state, the more each person's income affects that state's average, and so the more they affect the total "unweighted average". As a result, to get the actual overall average income from the individual states' averages, you have to multiply each state's average by its population to get its total income, add all of those together, and then divide by the total population. A common weighting you'll see is frequency weighting. This is where you multiply each value by the number of times it appears. For instance, if you measure something once a month for a year, and the only values you get are $$0$$, $$1$$, and $$2$$, taking the simple average of those values gives you $$1$$. But that's probably not the real average. To get a more meaningful average, you should take each of these values, weight them by how many months they appear for, and then take the average. One property of weighted averages is that multiplying all of the weights by a constant number doesn't change the final result (you'll just divide it out again when you divide by the total of the weights). So weighting by the frequencies is equivalent to weighting by the percentage of cases each value is. That is, if $$0$$ is the value for $$5$$ months, $$1$$ is the value for $$4$$ months, and $$2$$ is the value for $$3$$, the weighting of $$5,4,3$$ is equivalent to the weighting of $$\frac 5 {12},\frac 4{12},\frac 3{12}$$.
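The once-a-month example above, in code (the month counts are the hypothetical $$5, 4, 3$$ from the text):

```python
values = [0, 1, 2]
months = [5, 4, 3]   # how many months each value occurred

simple = sum(values) / len(values)   # 1.0 -- ignores the frequencies
freq_weighted = sum(v * m for v, m in zip(values, months)) / sum(months)
print(simple, freq_weighted)   # 1.0 vs 10/12, about 0.833

# Scaling all weights by a constant (here 1/12) leaves the result unchanged.
fractions = [m / 12 for m in months]
pct_weighted = sum(v * f for v, f in zip(values, fractions)) / sum(fractions)
print(pct_weighted)   # same value, about 0.833
```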
So if you have a probability distribution where one thing has a $$60$$% chance of happening, and something else has a $$40$$% chance of happening, the expected value is simply the frequency weighted average. For example, if my cashier asks me about the "mean weight" of two items, do I ask him/her for the probabilistic distribution of the random variable whose realization are the weights of these two items (def 1), or do I just add up the value and divide (def 2)? How do I know which mean the person is talking about? The expected value of a random variable is the expected value of that random variable. It is a property of the distribution. If you're asked for the expected value of a random variable, you find the expected value of that random variable. If you aren't asked for the expected value of a random variable, you don't go looking for a random variable to take the expected value of. The way you know whether to take the expected value of a random variable is whether there is a random variable to take the expected value of. Even if values come from a distribution, the average of those values is the average of those values, not the average of the distribution they came from.
If people are talking rigorously, they will explicitly say they want the expected value. You may, however, see people asking for the "mean" or "average" when they really want the expected value, but you can recognize those cases by there being a random variable. For instance, if someone asks "What's the average payout of this slot machine?", the context suggests that you should take the expected value of the distribution of payouts, and not simply take the set of different possible payouts and take the simple average. There could be some ambiguity as to whether "the payout" refers to the random process that pays money out, in which case you should take the expected value of the distribution (population mean), or the actual money paid out, in which case you should take the average of all the actual payouts made by the machine (sample mean), but in the latter case, you still should take the frequency weighted average. The confusion actually comes from notation, where symbols mean different things in two formulas. First, let's take a look at the "probabilistic" definition: $$\mathbb{E}[X] = \sum_{i=1}^n x_i p(x_i)$$ Here the random variable $$X$$ takes $$n$$ distinct values $$x_1,\ldots, x_n$$, each with probability $$p(x_i)$$ given by the probability mass function. In the "statistical" definition we have an estimate of the expected value, based on the observed values of the random variable $$z_1, \ldots, z_N$$: $$\hat{\mathbb{E}}[X] = \frac{1}{N}\sum_{j=1}^N z_j$$ Notice that I renamed the variables compared to your original formula, so as to avoid the confusion. Here $$N$$ is the number of observations, and $$z_j$$s are the actual observations. For example, if $$X$$ is a random variable representing rolls of a loaded die, then $$n = 6$$ and $$\{x_i\} = \{1, 2, 3, 4, 5, 6\}$$; whereas $$N$$ can be arbitrarily large, and $$z_j$$s will just be a long sequence of random draws from the set of $$\{x_i\}$$s.
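For a concrete loaded die, the two computations can be run side by side; the probabilities below are made up for illustration:

```python
import random

random.seed(1)

faces = [1, 2, 3, 4, 5, 6]
pmf = [0.25, 0.15, 0.15, 0.15, 0.15, 0.15]   # made-up loaded-die probabilities

# "Probabilistic" mean: a sum over the n = 6 distinct values x_i.
expected = sum(x * p for x, p in zip(faces, pmf))
print(expected)   # 3.25 (up to float rounding)

# "Statistical" mean: an average over N observed rolls z_1, ..., z_N.
N = 100_000
rolls = random.choices(faces, weights=pmf, k=N)
print(sum(rolls) / N)   # close to 3.25
```

The first sum ranges over the six distinct values; the second averages over many draws, which is exactly the distinction between $$n$$ and $$N$$ in the formulas above.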
Now, you can rewrite the "statistical" formula by collecting the different values of $$z_j$$s into groups according to which $$x_i$$ they correspond to (for example, first collect all 1s, then all 2s, etc.): $$\hat{\mathbb{E}}[X] = \frac{1}{N}\big(x_1\cdot|\{z_j=x_1\}| + \cdots+x_n\cdot|\{z_j=x_n\}|\big) \\ =\frac{1}{N}\sum_{i=1}^n x_i \sum_{j=1}^N\mathbb{1}[z_j=x_i] \\ =\sum_{i=1}^n x_i \hat{p}(x_i)$$ where $$\hat{p}(x_i) = \frac{1}{N}\sum_{j=1}^N \mathbb{1}[z_j=x_i]$$ is the estimate of the probability mass function: the number of times a value $$x_i$$ was encountered in the sample divided by the sample size, i.e. the observed frequency of the value $$x_i$$. Now you can see that the "probabilistic" and "statistical" definitions are actually the same, with the only difference that we replace the theoretical mass distribution function $$p(x)$$ (which may not be known) with the empirical (observed) mass distribution function $$\hat{p}(x)$$. Statement: the "statistical" universe from your question is a special case of the "probabilistic" universe. We need some notation. Suppose that we have a random variable $$\xi$$ in the sense of probability theory and $$P(\xi = x_k) = p_k$$, $$0 \le k \le n$$. For example, consider $$\xi$$ which is equal to the number of children in an "abstract" random family. Put $$x_k = k$$. Then $$E \xi = \sum_{k=0}^n x_k p_k$$ where $$p_k = P(\xi = x_k)$$, and $$D\xi = E (\xi - E\xi)^2 = \sum_{k=0}^n p_k (x_k - E\xi )^2$$. Moreover, $$P(\xi = 69) > 0$$ (according to Guinness World Records; I am not sure, but let us suppose it's the record nowadays), and since there have been people with $$69$$ children, we may think that, for example, $$P(\xi = 80) > 0$$: it's natural, because $$80$$ children is possible, even though Guinness World Records says that nobody has had $$80$$ children yet.
In reality we don't have an abstract random family, random mothers, random fathers, and random children. We have a finite number $$N$$ ($$10^6 < N < 10^{100}$$) of families; let us number them, so that there are $$y_1$$ children in the first family, $$y_2$$ children in family number $$2$$, ..., and $$y_N$$ children in family $$N$$. Analogy: $$\xi$$ corresponds to a fair die itself, and the numbers $$y_1$$, $$y_2$$, ... correspond to die rolls and have the form $$5, 1, 6, 6, 3, \ldots$$; these are fixed numbers. Now consider a random variable $$\tau$$ which has the uniform distribution on $$\{ 1, 2, \ldots, N\}$$. It means that $$P(\tau = k) = \frac{1}{N}$$ for all $$1 \le k \le N$$. The numbers $$y_1, \ldots, y_N$$ are fixed - suppose they are written in some sociological table. Let us consider the random variable $$y_{\tau}$$. It is not the number of children in an abstract random family; it is the number of children in some real family whose number $$\tau$$ we chose randomly. Let us find $$E y_{\tau}$$ and $$D y_{\tau}$$. With probability $$\frac{1}N$$ we have $$\tau = k$$ and hence $$y_{\tau} = y_k$$. Thus $$E y_{\tau}=\frac{1}N \sum_{k=1}^N y_k$$ and $$D y_{\tau} = E (y_{\tau} - Ey_{\tau})^2 = E \Big(y_{\tau} - \frac{1}N \sum_{k=1}^N y_k \Big)^2 = \frac{1}N \sum_{k=1}^N \Big(y_k -\frac{1}N \sum_{j=1}^N y_j \Big)^2.$$ Now the correspondence between the "statistical universe" and the "probabilistic universe" is obvious. Notice that all $$y_i \le 69$$ and $$y_i = 69$$ is the maximum value, but the maximum value of $$\xi$$ is bigger (and not less than $$80$$). So, we have shown that the "statistical" universe is a special case of the "probabilistic" universe. Moreover, there are two points at which there is randomness: 1. when we go from the abstract random family, corresponding to $$\xi$$, to the numbers $$y_1, \ldots, y_N$$ from the sociological handbook.
As was already mentioned, there is an analogy: $$\xi$$ corresponds to a fair die itself, and the numbers $$y_1$$, $$y_2$$, ... correspond to die rolls and have the form $$5, 1, 6, 6, 3, \ldots$$; these are fixed numbers. 2. [here we suppose that $$y_1, \ldots, y_N$$ are fixed numbers] when we go from the whole sociological handbook to a family with random number $$\tau$$ and see how many children there are in this family. The number is $$y_{\tau}$$. When we go from $$y_{\tau}$$ to $$E y_{\tau} = \frac{1}N \sum_{k=1}^N y_k$$, we get rid of the second randomness, but we still do not get rid of the first randomness. It is connected with the fact that the numbers $$y_k$$ could have been different: it could happen that some families would have more children and some families fewer, if the circumstances were different. In this sense the numbers $$y_i$$ are not fixed: they are random variables sampled from the distribution corresponding to $$\xi$$. And in this sense $$\frac{1}N \sum_{k=1}^N y_k$$ is a random variable, and we may write limit theorems such as the L.L.N.: $$\frac{1}N \sum_{k=1}^N y_k \to E\xi, \text{ }N \to \infty$$ or the CLT or the LIL. Addendum: it was shown that if $$y_1, \ldots, y_N$$ are fixed and $$\tau$$ is random, then the "probabilistic" mean (expectation) of $$y_{\tau}$$ and the "probabilistic" variance of $$y_{\tau}$$ are the well-known sample mean and sample variance. I hope this is useful. If you have any questions, you are welcome to ask.
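The addendum can be checked numerically: fix a small "handbook" of values $$y_1, \ldots, y_N$$, draw $$\tau$$ uniformly, and compare the exact mean and variance of $$y_\tau$$ with simulated draws. A sketch with hypothetical numbers of children:

```python
import random
import statistics

random.seed(2)

# A tiny hypothetical "sociological table": y_1, ..., y_N children per family.
ys = [0, 1, 2, 2, 3, 0, 1, 4, 2, 1]
N = len(ys)

# Exact expectation and variance of y_tau, with tau uniform on {1, ..., N}:
mean_exact = sum(ys) / N                                # the sample mean
var_exact = sum((y - mean_exact) ** 2 for y in ys) / N  # the (1/N) sample variance

# Simulation: draw tau uniformly and read off y_tau.
draws = [ys[random.randrange(N)] for _ in range(100_000)]
print(mean_exact, statistics.mean(draws))      # both near 1.6
print(var_exact, statistics.pvariance(draws))  # both near 1.44
```

The exact values equal the sample mean and the $$\frac{1}{N}$$-denominator sample variance of the table, which is precisely the correspondence claimed above.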
# Riemann-Darboux Integrability of Subinterval I'm studying Riemann-Darboux integration. I'm trying to prove the following rather intuitive notion for integrals. Please let me know if you find any errors in this proof, as I'm self-studying this topic. Theorem: Suppose $$f$$ is Riemann-Darboux integrable on $$[a,b]$$. Let $$c\in(a,b)$$. Then $$f$$ is Riemann-Darboux integrable on the intervals $$[a,c]$$ and $$[c,b]$$. Attempted Proof: Since $$f$$ is Riemann-Darboux integrable on $$[a,b]$$, for arbitrary $$\epsilon>0$$ there exists a partition $$P$$ of $$[a,b]$$ such that $$U(f,P)-L(f,P)<\epsilon$$. Let $$n_p$$, $$n_{p*}$$, and $$n_{p'}$$ be the number of subintervals in $$P$$, $$P^*$$, and $$P'$$ (defined below), respectively. Also, let $$m_i=\inf_{x\in[x_{i-1},x_i]}f(x)$$ and $$M_i=\sup_{x\in[x_{i-1},x_i]}f(x)$$. Consider the partition of $$[a,c]$$ given by $$P^*=P\cap[a,c]$$. Then $$U(f,P^*)-L(f,P^*)=\sum_{i=1}^{n_{p*}}(M_i-m_i)\Delta x_i\le\sum_{i=1}^{n_{p*}}(M_i-m_i)\Delta x_i+ \sum_{i=n_{p*}+1}^{n_p}(M_i-m_i)\Delta x_i=\sum_{i=1}^{n_p}(M_i-m_i)\Delta x_i=U(f,P)-L(f,P)<\epsilon.$$ Therefore, $$f$$ is integrable on $$[a,c]$$. Next, consider the partition of $$[c,b]$$ given by $$P'=P\cap[c,b]$$. Then $$U(f,P')-L(f,P')=\sum_{i=n_{p*}+1}^{n_p}(M_i-m_i)\Delta x_i\le\sum_{i=1}^{n_{p*}}(M_i-m_i)\Delta x_i+ \sum_{i=n_{p*}+1}^{n_p}(M_i-m_i)\Delta x_i=\sum_{i=1}^{n_p}(M_i-m_i)\Delta x_i=U(f,P)-L(f,P)<\epsilon.$$ Therefore, $$f$$ is integrable on $$[c,b]$$. $$\square$$ Any and all feedback, or alternative proofs, are appreciated. I love to see different arguments to expand my skill set.
This is essentially fine, although you are tacitly assuming that $$c$$ itself is a member of the partition $$P$$. This isn't a big deal though: a $$P$$ such that $$U(P,f)-L(P,f)<\epsilon$$ is guaranteed by integrability, and you can always just add $$c$$ to this partition if it is not already there. Namely, if $$P_{c}$$ is this new partition, called a refinement of $$P$$, one must have $$U(P_{c},f)-L(P_{c},f)\leq U(P,f)-L(P,f)<\epsilon$$ and you can proceed as you have done in your proof. An alternative approach is given by Lebesgue's Criterion for Riemann Integrability, which states that Riemann/Darboux integrability is equivalent to boundedness plus continuity up to a null set (see Wikipedia for a good explanation of null sets). Since your function is integrable on $$[a,b]$$, it is bounded and continuous up to a null set on $$[a,b]$$, and hence on $$[a,c]$$ and $$[c,b]$$ as well.
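As an aside, the key inequality in the proof can be illustrated numerically. The sketch below (sup and inf approximated by dense sampling, with a made-up smooth example and partition, so it is an illustration rather than a proof) checks that restricting a partition containing $$c$$ to a subinterval never increases $$U-L$$:

```python
# Illustration of the key inequality: if a partition P of [a, b] contains c,
# then U(f, P ∩ [a, c]) - L(f, P ∩ [a, c]) <= U(f, P) - L(f, P), because
# dropping subintervals only removes nonnegative terms (M_i - m_i) Δx_i.

def darboux_gap(f, points, samples=200):
    """Approximate U(f, P) - L(f, P) for the partition given by `points`,
    estimating sup/inf on each subinterval by dense sampling."""
    gap = 0.0
    for lo, hi in zip(points, points[1:]):
        vals = [f(lo + (hi - lo) * k / samples) for k in range(samples + 1)]
        gap += (max(vals) - min(vals)) * (hi - lo)
    return gap

f = lambda x: x * x                 # an integrable example on [0, 1]
P = [0.0, 0.2, 0.5, 0.7, 1.0]       # partition of [0, 1]; contains c = 0.5
c = 0.5
P_star = [x for x in P if x <= c]   # P ∩ [a, c] = [0.0, 0.2, 0.5]
P_prime = [x for x in P if x >= c]  # P ∩ [c, b] = [0.5, 0.7, 1.0]

assert darboux_gap(f, P_star) <= darboux_gap(f, P)
assert darboux_gap(f, P_prime) <= darboux_gap(f, P)
print(darboux_gap(f, P), darboux_gap(f, P_star), darboux_gap(f, P_prime))
```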
# Math Help - Pricing Dice Game 1. ## Pricing Dice Game Stage 1 In this game I roll a die, and whichever number the die lands on, I pay you that amount of money. For example, if the die lands on 4 I pay you £4. A fair price to play this game should be £3.50, as then neither the player nor I should win any money in the long run. Stage 2 How much should the fair price be to play the game if there is an option, after the first roll, to roll again? For example, if you roll a 1 on the first go you will certainly want to roll again, as you will always roll the same or better. However, if you were to roll a 6 you would not want the option to roll again, as you have already won the maximum amount of money. I priced this option at £3.57, however I have been told that is wrong. Can anyone explain why? Calypso 2. Originally Posted by calypso Stage 1 In this game I roll a die, and whichever number the die lands on, I pay you that amount of money. For example, if the die lands on 4 I pay you £4. A fair price to play this game should be £3.50, as then neither the player nor I should win any money in the long run. Stage 2 How much should the fair price be to play the game if there is an option, after the first roll, to roll again? For example, if you roll a 1 on the first go you will certainly want to roll again, as you will always roll the same or better. However, if you were to roll a 6 you would not want the option to roll again, as you have already won the maximum amount of money. I priced this option at £3.57, however I have been told that is wrong. Can anyone explain why? Calypso Choose a $t$ such that if the first roll $a\le t$ you roll again, and otherwise accept the prize of $£ a$. Now for any given $t$ you can work out the value of this game and hence a fair price. The fair price for the game is the maximum of these values over $t$. (I make $t=3$ the value of $t$ that maximises the return, and a fair price is then $£4.25$.) CB
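CB's threshold argument can be checked by brute force. Here is a minimal sketch (assuming, as in CB's setup, that the player re-rolls exactly when the first roll is at most $t$; exact rational arithmetic avoids rounding):

```python
from fractions import Fraction

def game_value(t):
    """Expected payout when the player re-rolls iff the first roll is <= t."""
    value = Fraction(0)
    for first in range(1, 7):
        if first <= t:
            # Re-roll: the second roll's expectation is the fair-die mean 7/2.
            value += Fraction(1, 6) * Fraction(7, 2)
        else:
            # Stick with the first roll.
            value += Fraction(1, 6) * first
    return value

values = {t: game_value(t) for t in range(7)}  # t = 0 means never re-roll
best_t = max(values, key=values.get)
print(best_t, values[best_t])  # 3 17/4
```

So sticking on 4, 5 or 6 (i.e. $t=3$) is optimal, giving the fair price $17/4 = £4.25$; $t=0$ recovers the one-roll price $7/2 = £3.50$.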
3. Sorry for the late reply; I have just got back from holiday. The textbook answer says something similar, i.e. you should stick if you roll anything greater than 3.50. Therefore the fair price of the option of two rolls is ( 3.5 + 3.5 + 3.5 + 4 + 5 + 6 ) / 6 = 4.25. I can understand why this is the correct answer; however, I am still confused as to why I can't get this answer with my method. I think the fair price of the game is defined by: sum of all possible payouts / number of possible payouts. The possible outcomes of the game are therefore 1 -> 1, 1 -> 2, 1 -> 3, 1 -> 4, 1 -> 5, 1 -> 6, 2 -> 1, 2 -> 2, 2 -> 3, 2 -> 4, 2 -> 5, 2 -> 6, 3 -> 1, 3 -> 2, 3 -> 3, 3 -> 4, 3 -> 5, 3 -> 6, 4, 5, 6. Therefore the fair price should equal 78/21 = £3.71? Thanks again Calypso 4. Originally Posted by calypso I am still confused as to why I can't get this answer with my method. I think the fair price of the game is defined by: sum of all possible payouts / number of possible payouts. The possible outcomes of the game are therefore 1 -> 1, 1 -> 2, 1 -> 3, 1 -> 4, 1 -> 5, 1 -> 6, 2 -> 1, 2 -> 2, 2 -> 3, 2 -> 4, 2 -> 5, 2 -> 6, 3 -> 1, 3 -> 2, 3 -> 3, 3 -> 4, 3 -> 5, 3 -> 6, 4, 5, 6. The reason this goes wrong is that those outcomes are not all equally probable. Each of the final three outcomes (4, 5, 6) occurs with a probability of 1/6. But the initial outcomes 1, 2, 3 are each subdivided into six subcases, each of which occurs with a probability of only 1/36. Multiplying each outcome by its probability, you get $\tfrac3{36}(1+2+3+4+5+6) + \tfrac16(4+5+6) = \tfrac{17}4 = 4.25.$ 5. Originally Posted by calypso Sorry for the late reply; I have just got back from holiday. The textbook answer says something similar, i.e. you should stick if you roll anything greater than 3.50. Therefore the fair price of the option of two rolls is ( 3.5 + 3.5 + 3.5 + 4 + 5 + 6 ) / 6 = 4.25
I can understand why this is the correct answer; however, I am still confused as to why I can't get this answer with my method. I don't like their decision rule, not that it is wrong, but because it refers to something not in the sample space (that is, "anything greater than 3.50", when it could have said anything greater than 3, or even anything greater than or equal to 4). CB 6. Great, thanks everyone for your help. Calypso
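For completeness, the "stick on anything greater than 3" rule can also be checked by a quick Monte Carlo sketch (seed and sample count chosen arbitrarily for illustration):

```python
import random

random.seed(1)

def play():
    """One play of the two-roll game: re-roll iff the first roll is 3 or less."""
    first = random.randint(1, 6)
    return random.randint(1, 6) if first <= 3 else first

n = 1_000_000
avg = sum(play() for _ in range(n)) / n
print(avg)  # close to the fair price 17/4 = 4.25
```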