and easy to use. This tutorial demonstrates how to evaluate integrals using the TI-89, TI-92+, or Voyage 200 graphing calculators. How to use the Double Integral Calculator. Classical integration theorems of vector calculus Math 6B. In your integral, use theta, rho, and phi for θ, ρ and φ, as needed. The int function can be used for definite integration by passing the limits over which you want to calculate the integral. Newton's Method Calculator. If expr is a constant, then the default integration variable is x. Free Summation Calculator. That gives the upper limit z = (3 - y)/3. Changing the order of integration of a triple integral blackpenredpen. Stack Exchange network consists of 175 Q&A communities including Stack Overflow. How to calculate limits in a triple integral? Ask Question Asked 3 years, 8 months ago. April 25, 2007 Teaching Assistant: Time Limit: 1 hour Signature: This exam contains 7 pages (including this cover page) and 6 problems. I am only interested in the correct limits of integration. $$V = \iiint\limits_U \rho \, d\rho \, d\varphi \, dz.$$ Homework Equations: $$x^{2}+y^{2}+z^{2}=a^{2},$$ the equation for a sphere of radius $a$ centered on the origin. Integration over surfaces, properties, and applications of integrals. Definite integrals can also be used in other situations, where the quantity required can be expressed as the limit of a sum. The integral calculator gives the chance to compute integrals of functions online for free. and integral tables (D) Applications of the integral to finding area and volume (E) Graphs and integrals with polar coordinates and parametric curves (F) Vector geometry and vector arithmetic in two and three dimensions. How are triple integrals in rectangular coordinates evaluated? How are the limits of integration determined? Give an example. The region D consists of the points (x,y,z) with x^2+y^2+z^2<=4 and x^2+y^2<=1 and z>=0. Here is a simple. 
| {
"domain": "lotoblu.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9901401446964613,
"lm_q1q2_score": 0.8548786884085159,
"lm_q2_score": 0.8633916047011594,
"openwebmath_perplexity": 849.5232301045739,
"openwebmath_score": 0.9671174883842468,
"tags": null,
"url": "http://lotoblu.it/ixah/triple-integral-limit-calculator.html"
} |
Summary : The integral function calculates online the integral of a function between two values. I Examples: Changing the order of integration. My problem isn't the integration process but just to determine what the limits are. This integral is improper at infinity only, and for large $t$ we know that $t^3$ is the dominant part. Triple integrals in Cartesian coordinates (Sect. Elliott Jacobs On Wednesday, March 4, you saw how to set up and calculate triple integrals. integration of trigonometric integrals Recall the definitions of the trigonometric functions. Inner integral: $r^3 z \big|_0^h = h r^3$. Changes of variable can be made using Jacobians in much the same way as for double integrals. I Triple integrals in arbitrary domains. What is a cross product? A cross product, also known as a vector product, is a mathematical operation in which the result of the cross product between 2 vectors is a new vector that is perpendicular to both vectors. The sum on the right is called a Riemann sum, and $f$ is said to be integrable if the limit of Riemann sums exists. In general, integrals in spherical coordinates will have limits that depend on 1 or 2 of the variables. To approximate a volume in three dimensions, we can divide the three-dimensional region into small rectangular boxes, each $\Delta x\times\Delta y\times\Delta z$. Because if your integration order takes care of Z first, i.e. When there are limits, and we need to use U-substitution, there are a few things we need to keep in mind. First, we must convert the bounds from Cartesian to cylindrical. A double integral is defined as the limit of sums. Second, we find a fast way to compute it. Use double (or triple) integrals to calculate the average value of a function in some
region. Now here the solid is enclosed by the planes and the surface. Define the triple integral of a function $f(x, y, z)$ over a bounded region in space. Integration by parts formula: $\int u \, dv = uv - \int v \, du$. Changing the order of integration in triple integrals. There will be six different orders of evaluating the triple iterated integrals. • Evaluate double integrals over general regions. How are triple integrals in rectangular coordinates evaluated? How are the limits of integration determined? Give an example. Equation Solver solves a system of equations with respect to a given set of variables. The common way that this is done is by $df/dx$ and $f'(x)$. This gives us a ray going out from the origin. Free indefinite integral calculator - solve indefinite integrals with all the steps. Triple Integral Calculator. BYJU'S online triple integral calculator tool makes the calculation faster, and it displays the integrated value in a fraction of seconds. Direct application of the fundamental theorem of calculus to find an antiderivative can be quite difficult, and integration by substitution can help simplify that task. Now this last limit is clearly one (divide top and bottom by $t^3$, or use continuity of the square root to move the limit inside the radical). Multiple Integrals -- First Example: Degenerate Double Integral | # Following Examples are Variations of Examples from Math 210 Calculus III | # Textbook Multivariable Calculus, Third Edition, by James Stewart | # from Section 13. There are examples of valid and invalid expressions at the bottom of the page. 
Although we define triple integrals using a Riemann sum, we usually evaluate triple integrals by turning them into iterated integrals involving three single integrals. Volume integrals are especially important in physics for many applications, for example, to calculate flux densities. Integral Calculator: if you were looking for a way to calculate the integral value of a set of numbers, then the integral calculator is exactly what you need. Note that the integral expression may seem a little different in inline and display math mode - in inline mode the integral symbol and the limits are compressed. Triple integrals represent a concept similar to double integrals, but the region of integration in this case is not an area but rather a three-dimensional solid.
# Expressing $\Bbb N$ as an infinite union of disjoint infinite subsets.
The title says it. I thought of the following: we want $$\Bbb N = \dot {\bigcup_{n \geq 1} }A_n$$
We pick multiples of primes. I’ll add $1$ to the first subset. For each set, we take the multiples of a new prime that haven’t already appeared in an earlier set. Then \begin{align} A_1 &= \{1, 2, 4, 6, 8, \cdots \} \\ A_2 &= \{3, 9, 15, 21, 27, \cdots \} \\ A_3 &= \{5, 25, 35, 55, \cdots \} \\ A_4 &= \{7, 49, 77, \cdots \} \\ &\vdots \end{align}
I’m heavily using the fact that there are infinitely many primes. I think these sets will do the job. Can someone check if this is really ok? Also, it would be nice to know how I could express my idea better, instead of that hand-waving. Alternate solutions are also welcome. Thank you!
Edit: the subsets must be also infinite.
#### Solutions Collecting From Web of "Expressing $\Bbb N$ as an infinite union of disjoint infinite subsets."
Here is another way to do it.
Let $A_{i}$ consist of all the numbers of the form $2^im$ where $2\nmid m$. That is, $A_i$ consists of all the numbers that are divisible by $2^i$ but not by $2^{i+1}$. So
\begin{align} A_0 &= \{1,3,5,7,9,11, \dots\}\\ A_1 &= \{2, 6 =2^1\cdot 3, 10 = 2^1\cdot 5, 14 = 2^1\cdot 7, \dots\}\\ A_2 &= \{4 = 2^2, 12=2^2\cdot 3, 20=2^2\cdot 5, \dots\}\\ A_3 &= \{8=2^3, 24=2^3\cdot 3, 40=2^3\cdot 5, \dots \}\\ &\vdots \end{align}
In general $A_i = \{2^i m: 2\nmid m\}$. You can of course pick any other prime instead of $2$.
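A quick Python check (not part of the original answer) makes the construction concrete: the index of the set containing $n$ is the 2-adic valuation of $n$, so every positive integer lands in exactly one $A_i$.

```python
# Sanity check: A_i = { 2^i * m : m odd } are pairwise disjoint and cover 1..N.
def index_of(n):
    """Return i such that n is in A_i, i.e. the 2-adic valuation of n."""
    i = 0
    while n % 2 == 0:
        n //= 2
        i += 1
    return i

N = 10_000
buckets = {}
for n in range(1, N + 1):
    buckets.setdefault(index_of(n), set()).add(n)

# Every n lands in exactly one bucket, so the A_i partition 1..N.
assert sum(len(b) for b in buckets.values()) == N
assert buckets[0] == set(range(1, N + 1, 2))   # A_0 = the odd numbers
```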
I like @Thomas’s answer best, but I would have enumerated $\mathbb N\times\mathbb N$ and then taken the inverse images of the separate columns for the subsets.
This is an alternate solution. | {
"domain": "bootmath.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9859363708905831,
"lm_q1q2_score": 0.8548421023077305,
"lm_q2_score": 0.8670357718273068,
"openwebmath_perplexity": 398.47478087695526,
"openwebmath_score": 0.9999896287918091,
"tags": null,
"url": "http://bootmath.com/expressing-bbb-n-as-an-infinite-union-of-disjoint-infinite-subsets.html"
} |
Let $\alpha$ and $\beta$ be any pair of irrational numbers such that
$$1 < \alpha < 2 < \beta\quad\text{ and }\quad \frac{1}{\alpha} + \frac{1}{\beta} = 1.$$
The two sequences
$\displaystyle\;\left\lfloor\alpha k\right\rfloor\;$ and
$\displaystyle\;\left\lfloor\beta k\right\rfloor\;$, $k \in \mathbb{Z}_{+}$
are called Beatty sequences, and it is known that they form a partition of $\mathbb{Z}_{+}$.
Define two functions $\tilde{\alpha}, \tilde{\beta} : \mathbb{Z}_{+} \to \mathbb{Z}_{+}$ by
$$\tilde{\alpha}(k) = \left\lfloor\alpha k\right\rfloor \quad\text{ and }\quad \tilde{\beta}(k) = \left\lfloor\beta k\right\rfloor$$
We have
$$\mathbb{Z}_{+} = \tilde{\alpha}(\mathbb{Z}_{+}) \uplus \tilde{\beta}(\mathbb{Z}_{+})$$ where $\uplus$ stands for disjoint union.
Replacing the rightmost $\mathbb{Z}_{+}$ recursively by this relation, we get
\begin{align} \mathbb{Z}_{+} &= \tilde{\alpha}(\mathbb{Z}_{+}) \uplus \tilde{\beta}(\mathbb{Z}_{+})\\ &= \tilde{\alpha}(\mathbb{Z}_{+}) \uplus \tilde{\beta}(\tilde{\alpha}(\mathbb{Z}_{+})) \uplus \tilde{\beta}(\tilde{\beta}(\mathbb{Z}_{+}))\\ &= \tilde{\alpha}(\mathbb{Z}_{+}) \uplus \tilde{\beta}(\tilde{\alpha}(\mathbb{Z}_{+})) \uplus \tilde{\beta}(\tilde{\beta}(\tilde{\alpha}(\mathbb{Z}_{+}))) \uplus \tilde{\beta}(\tilde{\beta}(\tilde{\beta}(\mathbb{Z}_{+})))\\ &\;\vdots \end{align}
As a consequence, if one define a sequence of subsets $A_1, A_2, \ldots \subset \mathbb{Z}_{+}$ recursively by
$$A_1 = \tilde{\alpha}(\mathbb{Z}_{+}) \quad\text{ and }\quad A_n = \tilde{\beta}(A_{n-1}), \quad\text{ for } n > 1,$$
these subsets will be pairwise disjoint. It is clear all these $A_n$ are infinite sets.
Since $\beta > 2$, we have
$$\tilde{\beta}(k) = \left\lfloor \beta k \right\rfloor > \beta k - 1 \ge (\beta - 1) k\quad\text{ for all } k \in \mathbb{Z}_{+}$$
This implies
$$\tilde{\beta}^{\circ\,\ell}(k) = \underbrace{\tilde{\beta}(\tilde{\beta}( \cdots \tilde{\beta}(k)))}_{\ell \text{ times}} > (\beta-1)^\ell k \ge (\beta-1)^\ell \quad\text{ for all } k, \ell \in \mathbb{Z}_{+}$$
As a result,
$$\bigcap_{\ell=1}^\infty \tilde{\beta}^{\circ\,\ell}(\mathbb{Z}_{+}) = \emptyset \quad\implies\quad \mathbb{Z_{+}} = \biguplus_{k = 1}^\infty A_k$$
i.e. $\mathbb{Z}_{+}$ is an infinite disjoint union of infinite sets $A_k$. Since there are uncountably many choices for $\alpha$, there are uncountably many such partitions.
For a concrete example, let $\alpha = \phi, \beta = \phi^2$ where $\phi$ is the golden mean, we get something like
$$\begin{array}{rll} \mathbb{Z}_{+} = & \{\; 1,3,4,6,8,9,11,12,14,16,17,19,\ldots\;\}\\ \uplus & \{\; 2,7,10,15,20,23,28,31,36,41,\ldots\;\}\\ \uplus & \{\; 5,18,26,39,52,60,73,81,94,\ldots\;\}\\ \uplus &\{\; 13,47,68,\ldots\;\}\\ \vdots\; & \end{array}$$
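The Beatty partition itself is easy to verify numerically. The sketch below (not part of the original answer) uses exact integer arithmetic: for $\alpha = \phi$ one has $\lfloor \phi k \rfloor = \lfloor (k + \sqrt{5k^2})/2 \rfloor$, and since $\sqrt{5k^2}$ is irrational this equals $(k + \lfloor\sqrt{5k^2}\rfloor) \,\mathrm{div}\, 2$; also $\phi^2 = \phi + 1$, so $\lfloor \phi^2 k \rfloor = \lfloor \phi k \rfloor + k$.

```python
# Check that floor(phi*k) and floor(phi^2*k) partition the positive integers,
# using exact integer square roots (no floating point).
from math import isqrt

def fa(k):
    """floor(phi * k), computed exactly."""
    return (k + isqrt(5 * k * k)) // 2

def fb(k):
    """floor(phi^2 * k) = floor(phi * k) + k, since phi^2 = phi + 1."""
    return fa(k) + k

M = 10_000
A = {fa(k) for k in range(1, M + 1) if fa(k) <= M}
B = {fb(k) for k in range(1, M + 1) if fb(k) <= M}
assert A & B == set()                  # the two sequences are disjoint
assert A | B == set(range(1, M + 1))   # together they cover 1..M
```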
If you follow the rule that you only take the multiples that didn’t show up already, then you’re fine, since by construction you’ll be making all the subsets disjoint and by the Fundamental Theorem of Arithmetic every element will be in some $A_i$.
An example of simple infinite disjoint union would be $A_i=\{i\}$ and then $\mathbb{N}=\bigcup_{i=0}^\infty{A_i}$.
With edit: A simple way to split up $\mathbb{N}$ into a disjoint union of infinite subsets is to start with $A_0=\{0,2,4,\ldots\}$, and then let $A_1$ be every other element of $\mathbb{N}\setminus A_0$ (i.e. $\{1,5,9,13,\ldots\}$). In general, let $A_n$ be the set containing “every other element” of the set $\mathbb{N}\setminus \bigcup_{i=0}^{n-1}A_i$.
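The “every other element” construction above can be made concrete with a short sketch (an illustration, not from the original answer), working with a finite prefix of $\mathbb{N}$:

```python
# A_n takes every other element of what remains after removing A_0, ..., A_{n-1}.
def first_sets(num_sets, limit):
    remaining = list(range(limit))    # finite stand-in for N = {0, 1, 2, ...}
    sets = []
    for _ in range(num_sets):
        sets.append(remaining[0::2])  # every other element, keeping the first
        remaining = remaining[1::2]   # the leftovers feed the next set
    return sets

A = first_sets(3, 40)
assert A[0][:5] == [0, 2, 4, 6, 8]    # evens
assert A[1][:4] == [1, 5, 9, 13]      # every other odd
assert A[2][:3] == [3, 11, 19]
```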
# A natural proof of the Cauchy-Schwarz inequality
Most of the proofs of the Cauchy-Schwarz inequality on a pre-Hilbert space use the fact that if a quadratic polynomial with real coefficients takes positive values everywhere on the real line, then its discriminant is negative (e.g. Conway: A Course in Functional Analysis). I think this is somewhat tricky. Moreover, I often forget its proof when the pre-Hilbert space is defined over the field of complex numbers. Is there a more natural proof (hence easy to remember) which is based on a completely different idea?
• To someone who voted to close, would you please explain the reason for the vote? – Makoto Kato Jul 5 '13 at 5:59
• You might want to browse the first chapter of The Cauchy-Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities by J. Michael Steele. – Did Jul 5 '13 at 6:26
• I'm not sure about what could qualify as "natural" or "more natural" proof for you, but in this case I can't think, off the top of my head, of anything *simpler", basic and straightforward as working with a quadratic's discriminant: this is Junior High School stuff! – DonAntonio Jul 5 '13 at 8:45
• @DonAntonio As I wrote, I don't think the complex pre-Hilbert space case is so straightforward. – Makoto Kato Jul 5 '13 at 9:01
• A question which has 3 upvotes, 2 favorites and a 6 upvoted answer should not be closed. Please reset the close votes. – Makoto Kato Jul 5 '13 at 19:24
Recall the Pythagorean theorem: If $u_1, \ldots, u_n$ are pairwise orthogonal, then $$\| u_1 + \cdots + u_n \|^2 = \|u_1\|^2 + \cdots + \| u_n \|^2.$$
I want to use this to tell us something about two non-zero vectors $u$ and $v,$ but they aren't necessarily orthogonal. So consider the projection of $u$ onto the plane of vectors orthogonal to $v:$ $$w = u - \frac{ \langle u,v \rangle}{\|v\|^2} v.$$ This is certainly orthogonal to $v,$ and the Pythagorean theorem applied to $w$ and $v$ gives the Cauchy-Schwarz inequality. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9859363725435203,
"lm_q1q2_score": 0.8548420952723014,
"lm_q2_score": 0.8670357632379241,
"openwebmath_perplexity": 223.72300411722878,
"openwebmath_score": 0.8864595293998718,
"tags": null,
"url": "https://math.stackexchange.com/questions/436559/a-natural-proof-of-the-cauchy-schwarz-inequality/436642"
} |
• This is excellent. Thanks. By the way, I think $<v, u>$ is a typo($<v, v>$). – Makoto Kato Jul 5 '13 at 9:19
• @MakotoKato Thank you for spotting that. Also, I find using \langle , \rangle for inner products is more pleasing to the eye than $<$ and $>.$ – Ragib Zaman Jul 5 '13 at 14:14
• @RagibZaman You're welcome. I just didn't know how to write $\langle, \rangle$ instead of < and >. By the way again, I think $\|v||$ should be squared or replaced by $\langle v, v \rangle$. Regards. – Makoto Kato Jul 5 '13 at 19:19
• I fail to see how you obtain the CS inequality in your last paragraph. Would you explain that? – Martin Argerami Jul 5 '13 at 20:42
• @MartinArgerami Let $u_1 = w$, $u_2 = \frac{ \langle u,v \rangle}{\langle v, v \rangle} v$. Then apply the Pythagorean formula $\| u_1 + u_2\|^2 = \|u_1\|^2 + \| u_2 \|^2 \ge \|u_2\|^2$. Regards. – Makoto Kato Jul 6 '13 at 0:41
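The projection argument can also be illustrated numerically; here is a minimal sketch (not part of the original thread), for real vectors:

```python
# Check: w = u - (<u,v>/<v,v>) v is orthogonal to v, and Pythagoras gives
# ||u||^2 = ||w||^2 + c^2 ||v||^2 >= c^2 ||v||^2, i.e. Cauchy-Schwarz.
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

rng = random.Random(1)
for _ in range(100):
    u = [rng.uniform(-1, 1) for _ in range(5)]
    v = [rng.uniform(-1, 1) for _ in range(5)]
    c = dot(u, v) / dot(v, v)
    w = [a - c * b for a, b in zip(u, v)]      # component of u orthogonal to v
    assert abs(dot(w, v)) < 1e-9               # w really is orthogonal to v
    assert dot(u, v) ** 2 <= dot(u, u) * dot(v, v) + 1e-9   # Cauchy-Schwarz
```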
There is also an approach by "amplification" which is really cool. Also the exact same trick works to prove Hölder's inequality and is generally a very important principle for improving inequalities.
It goes like this: We start out with $$\langle a-b,a-b\rangle\ge 0$$ for $a,b$ in your inner product space, and $a\not=0$, $b\not=0$. This implies $$2\langle a,b\rangle\le \langle a, a\rangle + \langle b, b\rangle$$ Now notice that the left hand side is invariant under the scaling $a\mapsto \lambda a$, $b\mapsto \lambda^{-1}b$ for $\lambda>0$. This gives $$2\langle a,b\rangle \le \lambda^2 \langle a,a\rangle + \lambda^{-2}\langle b, b\rangle$$ Now look at the right hand side as a function of the real variable $\lambda$ and find the optimal value for $\lambda$ using calculus (set the derivative to $0$):
$$\lambda^2=\sqrt{\frac{\langle b,b\rangle}{\langle a,a\rangle}}$$
Plugging this value in, we obtain
$$2\langle a,b\rangle\le \sqrt{\langle a,a\rangle}\sqrt{\langle b,b\rangle}+\sqrt{\langle a,a\rangle}\sqrt{\langle b,b\rangle}$$
i.e.
$$\langle a,b\rangle\le\sqrt{\langle a,a\rangle}\sqrt{\langle b,b\rangle}$$
Notice how we took a trivial observation and "optimized" the expression by exploiting scaling invariance.
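The optimization step can be checked numerically; a small sketch (not part of the original thread) with one concrete pair of vectors:

```python
# At the optimal lambda, the amplified bound collapses to 2*sqrt(<a,a><b,b>),
# which is exactly the Cauchy-Schwarz bound on 2<a,b>.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

a = [1.0, 2.0, -1.0]
b = [0.5, -1.0, 3.0]
aa, bb, ab = dot(a, a), dot(b, b), dot(a, b)

lam2 = math.sqrt(bb / aa)        # optimal lambda^2 from setting the derivative to 0
bound = lam2 * aa + bb / lam2    # lambda^2 <a,a> + lambda^-2 <b,b>
assert abs(bound - 2 * math.sqrt(aa) * math.sqrt(bb)) < 1e-9
assert 2 * ab <= bound + 1e-9    # the trivial bound, amplified
```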
• As explained (and expanded) by Terence Tao on his blog. – Did Jul 5 '13 at 13:08
• Could your proof be modified to show that the equality holds iff $a$ is proportional to $b$? I've been thinking a while on it, but I haven't figured anything out. – jinawee Oct 1 '14 at 19:54
• @jinawee: Yes, the only inequality in this proof is $\langle \mu a-b,\mu a-b\rangle\ge 0$ for an optimal constant $\mu$ depending on $a,b$. Equality implies $b=\mu a$. – J.R. Oct 2 '14 at 7:17
The inequality $| \langle a, b \rangle | \leq \| a \| \| b \|$ can be rewritten as $$| \langle a, \frac{b}{\|b\|} \rangle | \leq \| a \|$$ (assuming $b \neq 0$).
On the left we see the component of $a$ on the unit vector $u = \frac{b}{\| b \|}$. On the right we have the norm of $a$. Of course the norm of $a$ is at least as large as the component of $a$ on $u$; this is very intuitive. To make this into a rigorous proof, we merely have to write $a = \langle a, u \rangle u + v$, note that $v \perp u$, and use the Pythagorean theorem.
Another approach is to start with the inequality $$0 \leq \| a - \text{proj}_b(a) \|^2.$$
Just expand the right hand side and Cauchy-Schwarz pops out.
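Carrying out that expansion (in the real case, writing $\operatorname{proj}_b(a) = \frac{\langle a,b\rangle}{\langle b,b\rangle}\,b$; a sketch of the step the answer leaves implicit):

```latex
0 \le \left\| a - \frac{\langle a,b\rangle}{\langle b,b\rangle}\,b \right\|^2
  = \langle a,a\rangle
  - 2\,\frac{\langle a,b\rangle^2}{\langle b,b\rangle}
  + \frac{\langle a,b\rangle^2}{\langle b,b\rangle^2}\,\langle b,b\rangle
  = \|a\|^2 - \frac{\langle a,b\rangle^2}{\|b\|^2},
```

which rearranges to $\langle a,b\rangle^2 \le \|a\|^2\,\|b\|^2$.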
This is not exactly an answer but an explanation of the idea behind Ragib Zaman's answer.
Let $K$ be the field of real numbers or the field of complex numbers. Let $E$ be a pre-Hilbert space over $K$.
Let $x, y$ be elements of $E$ such that $\langle x, y \rangle = 0$. Then $\|x + y\|^2 = \langle x+y, x+ y\rangle = \langle x, x\rangle + \langle x, y \rangle + \langle y, x\rangle + \langle y,y\rangle = \|x\|^2 + \|y\|^2$.
Let $x, y$ be elements of $E$ such that $\langle x - y, y \rangle = 0$. Then, by the above formula, $\|x\|^2 = \|x - y\|^2 + \|y\|^2$. Hence $\|x\| \ge \|y\|$.
Finally let $u, v$ be non-zero elements of $E$. Let $t$ be an element of $K$ such that $\langle u - tv, v \rangle = 0$. $t$ must be $\frac{ \langle u,v \rangle}{\langle v, v\rangle}$ since $v \neq 0$. Then $\|u\| \ge \|tv\|$ by the previous inequality. This leads to the Cauchy-Schwarz inequality immediately.
Here's my favorite proof, mainly because it's nicely symmetric, easy to remember and not impossible to come up with (the main trick is that $2=1+1$):
We want to prove $$|\langle x,y\rangle|\le\|x\|\|y\|$$ This is linear in $x$ or $y$ and obviously holds for $x=0$ or $y=0$. Therefore without loss of generality we can suppose that $\|x\|=\|y\|=1$: $$|\langle x,y\rangle|\le\|x\|\|y\|\Longleftrightarrow|\langle x,y\rangle|\le1\Longleftrightarrow2|\langle x,y\rangle|\le2\Longleftrightarrow2|\langle x,y\rangle|\le\|x\|^2+\|y\|^2$$
Let $u\in\mathbb C$ be a complex unit (i.e. $|u|=1$), then \begin{align}0\le\|x-uy\|^2&=\langle x-uy,x-uy\rangle=\langle x,x\rangle-u\langle x,y\rangle-\overline{u\langle x,y\rangle}+\langle y,y\rangle\\&=\|x\|^2+\|y\|^2-2\,\text{Re}(u\langle x,y\rangle)\end{align} Now just set $u$ so that $\text{Re}(u\langle x,y\rangle)=|\langle x,y\rangle|$ and you are done.
Of course in the real case you can just expand $0\le\|x\pm y\|^2$.
# Is this a valid approach to solving the inequality $\frac{1}{x} < x < 1$?
I have been given the inequality $\frac{1}{x} < x < 1$ and have been told to find the values of $x$ which satisfy this inequality, and I have also been told to find these values using a case-by-case approach. I'd like to know whether my reasoning is valid.
Case 1: $x=0$
In this case we find that the value of $\frac1x$ is undefined, so we know that $x \neq 0$.
Case 2: $0<x<1$
Given the inequality $\frac{1}{x} < x < 1$ and that $x$ lies in the range $0 < x < 1$, we find that $1<x^2$. Therefore, $x>1$ or $x<-1$. However, $x\not>1$ and in this case $x\not<-1$ since we are looking at the case $0<x<1$. Thus we conclude that no values of $x$ in the range $0<x<1$ satisfy the original inequality.
Case 3: $x<0$
From the original inequality and the fact that we know that $x<1$ we find:
$$\frac1x < x$$
$$\implies 1 > x^2$$ (since $x<0$) $$\implies -1<x<1$$ but we know that $x$ cannot lie in the range $0<x<1$ so we have that $-1<x$ and that $x<0$ and combining these inequalities we have that $-1<x<0$.
This argument seems to be supported by looking at the graph, but I am unsure whether all the steps I have made are valid and whether this is what is meant by a 'case analysis'.
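(Not part of the original question:) the case analysis can also be confirmed with a quick numeric sweep, a minimal sketch comparing the inequality against the claimed solution set $-1 < x < 0$:

```python
# Sweep x over [-3, 3] (excluding 0) and compare against -1 < x < 0.
def satisfies(x):
    return x != 0 and 1 / x < x < 1

xs = [k / 100 for k in range(-300, 301) if k != 0]
claimed = [x for x in xs if -1 < x < 0]
found = [x for x in xs if satisfies(x)]
assert found == claimed   # the inequality holds exactly on -1 < x < 0
```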
• This is absolutely fine and there is no part of the solution that is wrong. – Prakhar Nagpal Jul 12 '18 at 16:00
• I think the argument is good! ${}{}{}{}{}{}{}$ – Andres Mejia Jul 12 '18 at 16:01
• Thank you for the feedback. – Benjamin Jul 12 '18 at 16:02 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.98593637543616,
"lm_q1q2_score": 0.8548420893117379,
"lm_q2_score": 0.8670357546485407,
"openwebmath_perplexity": 104.81233275808343,
"openwebmath_score": 0.821780264377594,
"tags": null,
"url": "https://math.stackexchange.com/questions/2848788/is-this-a-valid-approach-to-solving-the-inequality-frac1x-x-1"
} |
Overall, I would say that the argument is spot on! Just a couple comments you might want to consider. First of all, to be super rigorous in your answer, you might mention that clearly we know $x \ngeq 1$ so we need not consider that case. Another thing, I felt that your argument via contradiction for Case 2 was a bit lengthy there at the end. Once you reached the point $1 < x^2$ you could have just said that since we have $0 < x < 1$ (via assumption) and no such value squared is greater than 1, we have a contradiction and are done. The whole considering $x > 1$ or $x < -1$ is just overkill.
Also, that is exactly what it means by a case analysis, or case-by-case approach. Use cases to consider every possible value of $x$, and see what happens. Using cases is actually a very common proof technique!
# A set of elements in a reduced unity ring
Let $$(A,+,\cdot)$$ be a unity ring with the property that if $$x \in A$$ and $$x^2=0$$ then $$x=0$$. Consider the set $$M=\{a\in A | a^3=a\}$$. Prove that:
a) $$2a\in Z(A)$$, $$\forall a\in M$$, where $$Z(A)$$ denotes the centre of the ring $$A$$;
b) $$ab=ba$$, $$\forall a,b\in M$$.
My attempts revolved around the fact that an idempotent element in a reduced ring is central.
So, since for $$a\in M$$ we have that $$(a^2)^2=a^2$$, it follows that $$a^2\in Z(A)$$, $$\forall a\in M$$.
The next thing I wanted to use in order to solve a) was that $$Z(A)$$ is a subring of $$A$$, so if I had proved that $$(a+1)^2 \in Z(A)$$, $$\forall a\in M$$, then we would have reached the desired conclusion. However, I couldn't prove this and I honestly doubt that it is true.
Another idea that I had was to prove that $$M$$ is a subring of $$A$$. Of course, this didn't work out because I cannot even prove that $$M$$ is closed under addition. Again, I don't know if this is true and it most likely isn't.
As for b), I think that a) should be of use, but I don't know how. It is a well-known problem that a ring with $$x^3=x$$ for any $$x$$ in that ring is commutative, but since $$(M,+,\cdot)$$ is almost definitely not a ring, this doesn't help.
EDIT: Is there any chance that this question is simply wrong? I tended to believe this before asking it here too, but since nobody has made any progress on it until now I am even more inclined to think so. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9859363750229256,
"lm_q1q2_score": 0.8548420872597318,
"lm_q2_score": 0.8670357529306639,
"openwebmath_perplexity": 174.67822198460573,
"openwebmath_score": 0.9266897439956665,
"tags": null,
"url": "https://math.stackexchange.com/questions/3426080/a-set-of-elements-in-a-reduced-unity-ring"
} |
• If you honestly doubt that $(a+1)^2 \in Z(a)$ for all $a \in M$, then you honestly doubt that part (a) is true, since if part (a) is true (and since you've already noted $a^2 \in Z(a)$ for all $a \in M$), you get $(a+1)^2 = a^2+2a+1 \in Z(a)$ for all $a \in M$. – mathworker21 Nov 9 at 18:50
• @mathworker21 I agree with you. But since I have spent hours and hours trying to prove that $(a+1)^2 \in Z(A)$ for all $a\in M$ and nothing worked out, then I am inclined to believe that this statement may actually be wrong. Yet, since I cannot provide a counterexample, here I am, still hoping that someone better than me at rings will solve it.... – Math Guy Nov 9 at 21:34
• I don't know. I don't think the proof that any ring with $x^3 = x$ for all $x$ implies the ring is commutative is that easy... I could imagine spending hours failing to find that proof. By the way, where did you find this problem? – mathworker21 Nov 9 at 21:35
• @mathworker21 I know the "classical" one where $x^3=x$ for all $x$ holds in a ring, but since here we do not have a ring it doesn't really help I think. The problem is from a magazine from my country. – Math Guy Nov 9 at 21:37
• you missed my point. my point was that working on these kinds of problems for hours and failing doesn't mean they are false. (Of course that is always true, but I find the statement more meaningful here). The evidence/example I gave was the $x^3=x$ problem. For that problem, I could imagine working on it for several hours without finding the solution. And of course that problem is true. – mathworker21 Nov 9 at 21:54
## 1 Answer
We show $$M \subseteq Z(A)$$. This immediately gives (a) and (b).
Lemma 1: $$yx = 0 \implies xzy = 0$$ for any $$z$$.
Proof: $$(xzy)(xzy) = xz(yx)zy = 0$$.
Lemma 2: $$x^2 = x$$ implies $$x \in Z(A)$$.
Proof: For any $$y \in A$$, a short computation shows $$(xy-xyx)(xy-xyx) = 0 = (yx-xyx)(yx-xyx)$$.
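The “short computation” can be expanded mechanically. Here is a sketch (not from the original post) that expands the product in the free algebra on $x, y$ and reduces with the relation $x^2 = x$; both squares come out to $0$:

```python
# Expand (xy - xyx)^2 and (yx - xyx)^2 as noncommutative polynomials
# (dicts mapping words to coefficients) and reduce with xx -> x.
from collections import defaultdict

def multiply(p, q):
    """Multiply two polynomials given as {word: coefficient} dicts."""
    out = defaultdict(int)
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            out[w1 + w2] += c1 * c2
    return dict(out)

def reduce_word(w):
    """Apply the relation x*x = x until the word stabilizes."""
    while "xx" in w:
        w = w.replace("xx", "x")
    return w

def normal_form(p):
    out = defaultdict(int)
    for w, c in p.items():
        out[reduce_word(w)] += c
    return {w: c for w, c in out.items() if c != 0}

t = {"xy": 1, "xyx": -1}   # the element xy - xyx
s = {"yx": 1, "xyx": -1}   # the element yx - xyx
assert normal_form(multiply(t, t)) == {}   # (xy - xyx)^2 = 0
assert normal_form(multiply(s, s)) == {}   # (yx - xyx)^2 = 0
```

Since the ring is reduced, both elements are themselves $0$, giving $xy = xyx = yx$.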
Lemma 3: $$a \in M \implies a^2 \in Z(A)$$.
Proof: $$(a^2)^2 = a^4 = a^2$$, so use Lemma 2.
Claim: For any $$a \in M$$, $$a \in Z(A)$$.
Proof: Since $$(a-1)[a(a+1)]=0$$, Lemma 1 implies that for any $$b \in A$$, $$0 = a(a+1)b(a-1) = (a^2b+ab)(a-1).$$ Also, $$a(a+1)(a-1)=0$$ implies $$0 = ba(a+1)(a-1) = (ba^2+ba)(a-1) = (a^2b+ba)(a-1),$$ where the last equality used Lemma 3. Subtracting gives: (1) $$0 = (ba-ab)(a-1)$$. The exact same argument shows: (2) $$0 = (a-1)(ba-ab)$$. (1) immediately implies $$0 = (ba-ab)(a-1)b = (ba-ab)(ab-b)$$, and (2) with Lemma 1 implies $$0 = (ba-ab)b(a-1) = (ba-ab)(ba-b)$$. Subtracting the two results gives $$(ba-ab)^2 = 0$$.
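Explicitly, the final subtraction (a step the answer leaves implicit) is

```latex
(ba-ab)(ab-b) - (ba-ab)(ba-b) = (ba-ab)(ab-ba) = -(ba-ab)^2,
```

so $(ba-ab)^2 = 0$, and since the ring is reduced, $ba = ab$.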
• Thank you ! Could you tell me how you came up with this solution? – Math Guy Nov 10 at 11:23
• @MathGuy well, I played around with it for a while, so I exhausted many different approaches. Then I eventually stumbled upon something like Lemma $1$, first in the form of $0 = a(a+1)b(a-1)$ and realized it started giving equations I hadn't seen/derived before. So I knew I had something good. Then it was just a matter of finishing up, which was pretty easy (the proof of the claim is rather short). – mathworker21 Nov 10 at 13:02
# Numerical approximation of $\pi$
Points are randomly scattered inside the unit square; each falls within the unit circle with probability $P=\pi/4$.
So $P$ is approximated by the fraction $$P\approx \frac{\text{Number of red points}}{\text{Number of all points}},$$ which leads to $$\pi \approx 4\frac{\text{Number of red points}}{\text{Number of all points}}$$ (see following image)
There is a code for this:
tinyColor[color_, point_] := {PointSize[Small], color, Point[point]}
colorChoose[point_] :=
If[Norm[point] <= 1, tinyColor[Red, point], tinyColor[Blue, point]]
darts = RandomReal[{0, 1}, {40000, 2}];
coloredDarts = ParallelMap[colorChoose, darts];
insides = Map[Boole[Norm[#] <= 1] &, darts];
piapprox = Accumulate[insides]/Range[Length[darts]]
inner = Select[darts, Norm[#] <= 1 &];
outer = Select[darts, Norm[#] > 1 &];
Show[Plot[Sqrt[1 - x^2], {x, 0, 1}, Filling -> Axis, AspectRatio -> 1,
PlotLabel -> n == Length[darts] TildeTilde[π, 4.0*piapprox[[-1]]]],
ListPlot[{inner, outer},
PlotStyle -> {{PointSize[Tiny], Red}, {PointSize[Tiny], Blue}},
ImageSize -> {500, 500}]]
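For comparison, the same estimator can be sketched outside Mathematica; a minimal Python version (the function name is mine):

```python
import random

def estimate_pi(n=40000, seed=0):
    # sample points uniformly in the unit square [0,1]^2 and count
    # those landing inside the quarter disk x^2 + y^2 <= 1
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random()**2 + rng.random()**2 <= 1.0)
    return 4.0 * inside / n

print(estimate_pi())   # roughly 3.14, varying with the seed
```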
I tried to simplify this problem:
pts = RandomPoint[Rectangle[], 40000];
ListPlot[pts, AspectRatio -> 1, PlotStyle -> Blue]
The problem is the following:
How can I split set of points pts into two parts, "inside the circle " and "outside the circle"?
• You should also investigate RegionMember, e.g., rf = RegionMember[Disk[]]; rf[darts] – Carl Woll Feb 10 '17 at 22:26
You can also use Select:
ptsin = Select[pts, Norm[#] < 1 &];
N[Length[ptsin]/Length[pts]]*4
(* 3.1496 *)
How can I split set of points pts into two parts, "inside the circle " and "outside the circle"?
{in, out} = SortBy[GatherBy[pts, Norm[#] < 1 &], Norm[#[[1, 1]]] &];
ListPlot[{in, out}, AspectRatio -> 1, PlotStyle -> {Red, Blue}]
• What's the purpose of SortBy? – anderstood Feb 4 '17 at 18:27
• The approximation of $\pi$ can be recovered with {in, out} = GatherBy[pts, Norm[#] < 1 &]; 4*Length[in]/(Length[out] + Length[in]) // N. – anderstood Feb 4 '17 at 18:30
• @anderstood, depending on the norm of the first element in pts, the first list in the output produced by GatherBy may be the "inside sublist" or the "outside sublist". Sorting the output of GatherBy makes the first list the "inside" one. – kglr Feb 4 '17 at 18:32
• Comment to my own comment: it should be {in, out} = SortBy[GatherBy[pts, Norm[#] < 1 &], Norm[#[[1, 1]]] &], cf kglr's comment above. – anderstood Feb 4 '17 at 18:55
Playing with Norm as shown in other answers:
pts = RandomReal[1, {40000, 2}];
4 True/(True + False) /. CountsBy[pts, Norm[#] < 1 &]
3.1474
That doesn't help with drawing the graphic however.
You can compute the norm of the point and verify if it is inside the circle.
At = 4;
d = 2;
totalpoints = 40000;
pts = RandomReal[{-1, 1}, {totalpoints, 2}];
pointsinsidecircle = Select[pts, Norm[#] < 1 &];
counter = Length[pointsinsidecircle];
approxpi = (4. At counter/totalpoints)/d^2;
Print["approx \[Pi] = ", approxpi]
ListPlot[{pts, pointsinsidecircle}, AspectRatio -> 1,
PlotStyle -> {Blue, Red}]
(*approx \[Pi] = 3.1328*)
• You could use Norm directly, and avoid AppendTo, which is very slow. Using Select is probably a much faster option. – anderstood Feb 4 '17 at 18:24
• @anderstood i have changed the answer. Thank you. – Diogo Feb 4 '17 at 18:30
• Tip: Don't save graphics as JPEGs, they become fuzzy and lose color fidelity. Use PNGs. – Rahul Feb 10 '17 at 21:19
A slightly different approach:
inside = Pick[pts, Map[# ∈ Disk[] &, pts], True];
outside = Complement[pts, inside];
Also as pointed in a comment above:
inside = Pick[#, RegionMember[Disk[]][#], True] &@pts
outside = Complement[pts, inside];
# Knowing that for any set of real numbers $x,y,z$ such that $x+y+z = 1$, the inequality $x^2+y^2+z^2 \ge \frac{1}{3}$ holds
Show that for any real numbers $x,y,z$ such that $x+y+z = 1$, the inequality $x^2+y^2+z^2 \ge \frac{1}{3}$ holds.
I spent a lot of time trying to solve this and, having consulted some books, I came to this:
$$2x^2+2y^2+2z^2 \ge 2xy + 2xz + 2yz$$ $$2xy+2yz+2xz = 1-(x^2+y^2+z^2)$$ $$2x^2+2y^2+2z^2 \ge 1 - x^2 -y^2 - z^2$$ $$x^2+y^2+z^2 \ge \frac{1}{3}$$ But this method is very unintuitive to me and I don't think this is the best way to solve this. Any remarks and hints will be most appreciated.
• This does seem like one of the best ways to tackle this problem. Why do you find this method counterintuitive? – vrugtehagel Apr 23 '17 at 12:56
• @vrugtehagel It entails using a property which is not directly derived from the problem. – ILoveChess Apr 23 '17 at 12:58
• Are you comfortable with calculus? Here's a completely different approach. $z = 1 - x - y$ so $x^2 + y^2 + z^2 = x^2 + y^2 +(1 - x - y)^2$. Now use some calculus to find when that has a maximum. It will be when $x = y = \frac{1}{3}$ hence also $z = \frac{1}{3}$. – badjohn Apr 23 '17 at 13:03
• – Martin Sleziak Apr 23 '17 at 21:29
Cauchy–Schwarz works: $$x^2+y^2+z^2=\frac{1}{3}(1^2+1^2+1^2)(x^2+y^2+z^2)\geq\frac{1}{3}(x+y+z)^2=\frac{1}{3}$$
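The bound can also be spot-checked numerically; a minimal Python sketch sampling random triples that satisfy the constraint:

```python
import random

rng = random.Random(1)
vals = []
for _ in range(100_000):
    x = rng.uniform(-10, 10)
    y = rng.uniform(-10, 10)
    z = 1 - x - y                  # enforce the constraint x + y + z = 1
    vals.append(x*x + y*y + z*z)
print(min(vals) >= 1/3)            # True
```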
$x^2+y^2+z^2$ only depends on the squared distance of $(x,y,z)$ from the origin, and the constraint $x+y+z=1$ tells us that $(x,y,z)$ lies in an affine plane. The problem is solved by finding the distance between that plane and the origin: since the plane is orthogonal to the line $x=y=z$,
$$\min_{x+y+z=1}x^2+y^2+z^2 = \left(\frac{1}{3}\right)^2+\left(\frac{1}{3}\right)^2+\left(\frac{1}{3}\right)^2 = \frac{1}{3}$$ and we are done.
• clear: the sphere from the origin is tangent to the plane in x=y=z=1/3 – G Cab Apr 23 '17 at 14:49
This is not a proof in itself, but if you've studied statistics, then you've seen a proof that
$$0\le V(X)=E(X^2)-E(X)^2$$
If we now consider a random variable $X$ with three equally likely values, $X=x,y$, and $z$, then we have
$$E(X)={x+y+z\over3}\qquad\text{and}\qquad E(X^2)={x^2+y^2+z^2\over3}$$
If, in addition, we assume $x+y+z=1$, then we have $E(X)={1\over3}$, which implies $E(X^2)\ge\left(1\over3\right)^2={1\over9}$, or $x^2+y^2+z^2\ge{1\over3}$.
Excellent algebraic answers have been given (I voted for them). Intuition can be obtained here through visual proofs.
The equation $x+y+z=1$ defines a plane. $x^2+y^2+z^2=1/3$ defines a sphere. The following visualization depicts the plane in blue, the sphere in red.
From that, you can imagine that the question could be rephrased (in a mundane way) as: is the distance of every point in the plane from the origin $(0,0,0)$ at least $1/\sqrt{3}$? So the sphere should remain "below" the plane, except where they meet. The symmetry of the problem tells you that the tangency point has equal coordinates $(1/3,1/3,1/3)$, which is one of the motivations behind the expression $(3x-1)^2+(3y-1)^2+(3z-1)^2$.
Were the sphere bigger (a radius greater than $1/\sqrt{3}$), it would intersect the plane in more than a single tangency point.
All in all, this reduces to finding the distance from the plane to the origin, which is attained exactly where the sphere and the plane meet.
If the plane is given by $ax+by+cz+d=0$, the signed distance of a point $(x_0,y_0,z_0)$ to it is (point-plane distance):
$$D = \frac{ax_0+by_0+cz_0+d}{\sqrt{a^2+b^2+c^2}}$$
which in your case gives $|D| = 1/\sqrt{3}$. No point in the plane is closer to $(0,0,0)$ than $|D|$.
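Plugging the plane $x+y+z-1=0$ and the origin into this formula (a small Python check; the variable names are mine):

```python
import math

a, b, c, d = 1.0, 1.0, 1.0, -1.0   # plane x + y + z - 1 = 0
x0, y0, z0 = 0.0, 0.0, 0.0         # the origin
D = (a*x0 + b*y0 + c*z0 + d) / math.sqrt(a*a + b*b + c*c)
print(abs(D))   # about 0.57735, i.e. 1/sqrt(3)
```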
Hint:
Expand $(3x-1)^2+(3y-1)^2+(3z-1)^2\geqslant 0$ And simplify.
If you rewrite your proof as:
$3 ( x^2+y^2+z^2 ) \ge ( x^2+y^2+z^2 ) + ( 2xy+2yz+2zx ) = (x+y+z)^2 = 1$.
You would find that it is not so unintuitive after all.
Since you know that $x+y+z = 1$, a natural thing to do to the original equation is to homogenize it; namely make all terms have the same degree. This gives us "$3 ( x^2+y^2+z^2 ) \ge (x+y+z)^2$, and expanding out immediately tells us the solution.
In general, a technique that works for many cyclic polynomial inequalities is to try to 'smooth' the terms out. Terms that are just a power of one variable are the 'biggest', and using inequalities like "$x^2+y^2 \ge 2xy$" will 'mix' powers and thereby 'reduce' them.
It is enough to prove the result for $x,y,z$ non-negative, since $$\frac{|x|+|y|+|z|}{3} \geq \frac{x+y+z}{3}$$
One can use the generalized AM-GM inequality: If $$M_p = \left(\frac{x^p+y^p+z^p}{3}\right)^{\frac{1}{p}}$$ then for $p < q$, $M_p \leq M_q$ with equality holding if and only if $x=y=z$. Here, using $M_2 \geq M_1$, we get $$\left(\frac{x^2+y^2+z^2}{3}\right)^{\frac{1}{2}} \geq \frac{|x|+|y|+|z|}{3} \geq \frac{x+y+z}{3} = \frac{1}{3}$$
We can minimize $x^2 + y^2 + z^2$ subject to the constraint $x+y+z = 1$ using Lagrange multipliers; we then find that $x = y = z = \frac{1}{3}$, and therefore $x^2 + y^2 + z^2\geq\frac{1}{3}$.
• This only shows that this is a local minimum, right? – Carsten S Apr 24 '17 at 14:05
One more way to prove it is by substituting out for $z$:
$$x^2+y^2+z^2 = x^2+y^2+(1-x-y)^2=2x^2+2y^2+2xy-2x-2y+1$$ Now, substitute $x=\hat x+a$ and $y=\hat y+b$. We get $$2\hat x^2+2\hat y^2+2\hat x\hat y+(4a+2b-2)\hat x+(2a+4b-2)\hat y+(2(a^2+ab+b^2-a-b)+1)$$ We can eliminate the order 1 terms by letting $4a+2b-2=2a+4b-2=0$, which gives $a=b=\frac13$. With this substitution, we have $$2(\hat x^2+\hat x\hat y+\hat y^2)+\frac13$$ We can express the bracketed term as a sum of squares, giving us $$\frac32(\hat x+\hat y)^2+\frac12(\hat x-\hat y)^2 + \frac13$$ and we can see that the smallest value this can take is $\frac13$. Indeed, it takes this value when $\hat x=\hat y=0$ - that is, when $x=y=\frac13$.
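The reduction to two variables can be cross-checked symbolically; a sketch using sympy (assumed available), minimizing $x^2+y^2+(1-x-y)^2$:

```python
import sympy as sp

x, y = sp.symbols('x y')
# x^2 + y^2 + z^2 with z = 1 - x - y substituted in
f = 2*x**2 + 2*y**2 + 2*x*y - 2*x - 2*y + 1
crit = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y])
print(crit, sp.simplify(f.subs(crit)))   # {x: 1/3, y: 1/3} 1/3
```

The unique critical point is $x=y=\frac13$ with value $\frac13$, matching the answer above.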
Ok, you want an intuitive proof, not a better proof. That's fine.
First note that for $x=y=z=\frac13$, we have $x^2+y^2+z^2 = 3\left(\frac13\right)^2= \frac13$. Next let us show that $x^2+y^2+z^2$ is not minimal if $x\ne y$ or $y\ne z$. Indeed if $a\ne b$ then $$a^2 + b^2 - \left(\left(\frac{a+b}2\right)^2+\left(\frac{a+b}2\right)^2\right) = \frac12\left( a^2 + b^2 -2ab \right)=\frac12(a-b)^2>0,$$ so we can replace two of the numbers by their average and lower the sum of squares.
Unfortunately we are not yet quite done. We are done if we can show that there is a global minimum of $x^2+y^2+z^2$, given that $x+y+z=1$: that global minimum must also be a local minimum, but at a local minimum we cannot have $x\ne y$ or $y\ne z$, so $x=y=z$ at the global minimum, and its value is $\frac13$.
For that we can first argue that we can restrict ourselves to $x,y,z\ge0$ (e.g. if $x<0$ replace $(x,y,z)$ by $\frac1{-x+y+z}(-x,y,z)$, which yields a smaller value since $-x+y+z = 1 - 2x > 1$) and then appeal to the compactness of $\{(x,y,z)\colon \text{$x+y+z=1$,$x,y,z\ge0$}\}$.
I did not say that this would be pretty, but it is pretty intuitive to me :)
Here's another way to approach this. It's easy to see that the value of $\frac13$ is obtained when each of $x, y, z$ is $\frac13$. We want to show that as the variables deviate from this point (with their sum still being 1) the value cannot decrease.
So we look at the deviations from $\frac13$: $x=\frac13+\epsilon_1$, $y=\frac13+\epsilon_2$, $z=\frac13+\epsilon_3$ with $\epsilon_1+\epsilon_2+\epsilon_3=0$. You have
$x^2+y^2+z^2=\\ (\frac13+\epsilon_1)^2+(\frac13+\epsilon_2)^2+(\frac13+\epsilon_3)^2=\\\left(\frac19+\frac23\epsilon_1+\epsilon_1^2\right)+\left(\frac19+\frac23\epsilon_2+\epsilon_2^2\right)+\left(\frac19+\frac23\epsilon_3+\epsilon_3^2\right)=\\ \left(\frac19+\frac19+\frac19\right)+\frac23(\epsilon_1+\epsilon_2+\epsilon_3)+(\epsilon_1^2+\epsilon_2^2+\epsilon_3^2)=\\ \frac13+(\epsilon_1^2+\epsilon_2^2+\epsilon_3^2) \ge \frac13$
The desired inequality follows by noting that, from the convexity of $f(t) = t^{2}$, we have $f\left(\dfrac{x+y+z}{3}\right) \leq \dfrac{f(x)+f(y)+f(z)}{3}$, i.e. $\frac19 \leq \dfrac{x^2+y^2+z^2}{3}$.
# Combinatorics and Probability Problem
The problem I am working on is:
An ATM personal identification number (PIN) consists of four digits, each a 0, 1, 2, ..., 8, or 9, in succession.
a. How many different possible PINs are there if there are no restrictions on the choice of digits?
b. According to a representative at the author's local branch of Chase Bank, there are in fact restrictions on the choice of digits. The following choices are prohibited: (i) all four digits identical, (ii) sequences of consecutive ascending or descending digits, such as 6543, (iii) any sequence starting with 19 (birth years are too easy to guess). So if one of the PINs in (a) is randomly selected, what is the probability that it will be a legitimate PIN (that is, not one of the prohibited sequences)?
c. Someone has stolen an ATM card and knows that the first and last digits of the PIN are 8 and 1, respectively. He has three tries before the card is retained by the ATM (but does not realize that). So he randomly selects the 2nd and 3rd digits for the first try, then randomly selects a different pair of digits for the second try, and yet another randomly selected pair of digits for the third try (the individual knows about the restrictions described in (b), so selects only from the legitimate possibilities). What is the probability that the individual gains access to the account?
d. Recalculate the probability in (c) if the first and last digits are 1 and 1, respectively.
---------------------------------------------
For part a): The total number of pins without restrictions is $10,000$
For part b): The number of pins in either ascending or descending order is $10 \cdot 1 \cdot 1 \cdot 1$, because once the first digit is known, the three other spots containing digits are already spoken for. The number of pins where each slot contains the same digit is $10 \cdot 1 \cdot 1 \cdot 1$, because once the first digit is known there is only one option left for the rest of the slots. The number of pins that have their first and second slots occupied by 1 and 9, respectively, is $1 \cdot 1 \cdot 10 \cdot 10$. So, if R is the set that contains these restricted pins, then $|R| = 130$; and if N is the set that contains the non-restricted ones, meaning R and N are complementary sets, then $|N| = 10,000 - 130$. Hence, the probability is then $P(N) = 9870/10000 = 0.9870.$ However, the answer is $0.9876$. What did I do wrong?
For part c): The sample space, containing all of the outcomes of the experiment that will take place, is $|N|=9870$. When it says that the thief won't use the same pair of digits in each try, does that not allow him trying the pin 8 5 2 1 in one try and the pin 8 2 5 1 in another try?
Ascending or descending is $14$, not $20$. That takes care of the disagreement. – André Nicolas Feb 3 '13 at 20:44
@AndréNicolas I didn't get 14 or 20, I got 10. You're saying the answer is 14; how did you get that? – Mack Feb 3 '13 at 20:56
@Eli: You got $2\cdot10=20$ for ascending and descending together, and André is saying that it should be $2\cdot7=14$ for both together. – joriki Feb 3 '13 at 21:04
Oh, I see. The 2 corresponds to the two choices (ascending or descending); and the 7 corresponds to the fact that you can't start the 4-digit pin with 7,8, or 9 when ascending, and you can't start a 4-digit pin with 0,1, or 2 when descending. – Mack Feb 3 '13 at 21:14
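With $14$ monotone sequences, the intended count for part (b) can be confirmed by brute force; a Python sketch (the predicate is my own encoding of restrictions (i)-(iii)):

```python
def is_restricted(pin):
    d = [int(c) for c in pin]
    if len(set(d)) == 1:                       # (i) all four digits identical
        return True
    steps = {b - a for a, b in zip(d, d[1:])}
    if steps == {1} or steps == {-1}:          # (ii) consecutive ascending/descending
        return True
    return pin.startswith('19')                # (iii) starts with 19

legit = [f'{n:04d}' for n in range(10000) if not is_restricted(f'{n:04d}')]
print(len(legit), len(legit) / 10000)   # 9876 0.9876
```

The three restricted sets are pairwise disjoint ($10 + 14 + 100 = 124$ prohibited PINs), which reproduces the book's $0.9876$.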
For b): Which is the descending sequence starting with $1$ that you counted?
For c): Good question; the problem is badly worded in that regard. Taking it literally, I'd tend to interpret it as referring to unordered pairs, but since it makes little sense to couple two different PINs in this manner, I suspect that they actually mean ordered pairs. However, note that the answer doesn't depend on this.
I understand neither why the question says that the thief knows the restrictions, nor why you say that the sample space has size $9870$. The thief knows that the first and last digits are $8$ and $1$, respectively; that's not compatible with any of the sequences excluded by the restrictions, and it doesn't allow for $9870$ possibilities.
I’d interpret it as referring to ordered pairs, partly from the language, and partly from the overall level of difficulty of the exercise. – Brian M. Scott Feb 3 '13 at 20:41
@joriki I don't really understand what you are asking when you say, "Which is the descending sequence starting with 1 that you counted?" – Mack Feb 3 '13 at 21:00
@Eli: Exactly. And $10-3=7$. – joriki Feb 3 '13 at 21:08
@Eli: I found another version of the book online with the answers included. It has $.0337$ for d., which is also wrong, but can be interpreted as a rounded version of the correct result $3/89$ (see Metin's answer). So it seems they just round results without indicating it (which is rather bad style). So it seems likely that $0.0333$ is a rounded version of $3/90$. That leaves the question how they arrive at $90$ options. Perhaps they meant to say $1$ and $8$ instead of $8$ and $1$; then the birth year rule would lead to a count of $90$. – joriki Feb 4 '13 at 8:35
@Eli: That would also explain why part c already mentions that the thief knows the restrictions, even though that's irrelevant for the question as posed. – joriki Feb 4 '13 at 8:37
For d): We have the case: $1$ * * $1$.
But the thief knows, by prohibition (i), it cannot be $1111$. Thus he eliminates $1$ possibility.
Also he knows, by (iii), the second digit cannot be $9$. There are exactly $10$ numbers of the form $19$ * $1$, namely $1901$, $1911$, $1921$, ... So at this stage he eliminates $10$ possibilities.
All in all, if he had no restrictions, there would be $100$ choices of the form $1$ * * $1$. But he excludes $10 + 1 = 11$ of them, leaving $89$ possible choices. Since he has $3$ tries, the resulting probability is $3/89$.
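The count of $89$ can be confirmed by enumerating all PINs of the form $1$ * * $1$; a Python sketch (the predicate encodes restrictions (i)-(iii)):

```python
def is_restricted(pin):
    d = [int(c) for c in pin]
    if len(set(d)) == 1:                       # (i) all identical
        return True
    steps = {b - a for a, b in zip(d, d[1:])}
    if steps == {1} or steps == {-1}:          # (ii) monotone run
        return True
    return pin.startswith('19')                # (iii) starts with 19

candidates = [f'1{a}{b}1' for a in range(10) for b in range(10)]
allowed = [p for p in candidates if not is_restricted(p)]
print(len(allowed))   # 89
```

Only $1111$ and the ten PINs $19$*$1$ are excluded, so $100 - 11 = 89$.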
Just my two cents,
10^4 possibilities.
There are 14 ascending and descending groups of 4.
Keyspace 9,876
If the badguy knows two of the four spaces, he only has to guess through entropy 10^2. (None of the restrictions meet up with the range 8xx1)
3 Tries in 100
:)
(0123)(1234)(2345)(3456)(4567)(5678)(6789) And then the reverse of those... – Ben Sep 9 '13 at 5:33
Regarding part [c], I do not think that there is something wrong with the book. As @Mack said in one of his comments, the thief has to guess only the two middle digits. Since the given restrictions do not actually rule out any number of this form, he has to guess among $100$ ($10\cdot 10$) possible combinations of the digits.
Since he has 3 tries, the probability that he will gain desired access is 3/100.
According to Wolfram Alpha, 3/100 is exactly 0.03, which is the same as written solution from the book.
Correct me if I am wrong.
# Condition for equilibrium of a rigid body on a horizontal surface
The following question is from a past paper in Further Mechanics, and it has been bothering me immensely; I have spent hours cracking at a way to make sense of it. The question is in two parts.
Part 1:
An object consists of a uniform solid circular cone, of vertical height 4r and radius 3r, and a uniform solid cylinder, of height 4r and radius 3r. The circular base of the cone and one of the circular faces of the cylinder are joined together so that they coincide. The cone and the cylinder are made of the same material.
Find the distance of the centre of mass of the object from the end of the cylinder that is not attached to the cone.
I first roughly sketched the object as such
Then finding the $$\bar{x}$$ of each part seperately:
For the cone I used the standard result of the centre of mass being $$\frac{1}{4}r$$ away from the base:
$$\bar{x}_{Cone}=\frac{1}{4}\cdot 4r + 4r=5r$$
For the cylinder I derived $$\bar{x}$$ via integration:
$$y=3r$$
$$\bar{x}_{cylinder}=\frac{\int_0^{4r} xdV}{\int_0^{4r}dV}$$
$$\because dV=y^2 \pi dx$$
$$\implies \bar{x}_{cylinder}=\frac{\int_0^{4r} 9r^2 \pi x dx}{\int_0^{4r}9r^2 \pi dx}$$
$$\implies \bar{x}_{cylinder}=\frac{9r^2 \pi \int_0^{4r} x dx}{9r^2 \pi\int_0^{4r} 1 dx}$$
$$\implies \bar{x}_{cylinder}=\frac{\frac{1}{2}\left[x^2 \right]^{4r}_0}{\left[x \right]^{4r}_0}$$
$$\therefore \bar{x}_{cylinder}=2r$$
Then by taking the weighted average of both objects I arrived at the correct value for the distance of the centre of gravity from the base of the cylinder which was
$$\bar{x}=\frac{11}{4}r$$
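The weighted average can be checked numerically; a quick Python sketch (taking $r = 1$, variable names are mine):

```python
import math

r = 1.0
v_cyl = math.pi * (3*r)**2 * (4*r)   # cylinder volume, 36*pi*r^3
v_cone = v_cyl / 3                   # cone volume, 12*pi*r^3
x_cyl = 2*r                          # cylinder centroid above the free end
x_cone = 4*r + r                     # cone centroid: base at 4r plus r
xbar = (v_cyl*x_cyl + v_cone*x_cone) / (v_cyl + v_cone)
print(xbar)   # 2.75, i.e. 11r/4
```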
However, the next part has had me at a complete loss for the better part of the day. It states that:
Show that the object can rest in equilibrium with the curved surface of the cone in contact with a horizontal surface.
I tried coming up with a rough sketch for this also
However, I do not understand how to tackle this question at all; all I know is that for a body to be in equilibrium on such a surface, the vertical line through the centre of mass must pass through the region of contact, but this alone does not give an insight on how to answer this.
The condition for this question given in the marking scheme is that
Can someone explain what this means and why this is the case?
Given the geometry of the object, the center of gravity is very close to the surface between the cylinder and the cone. More specifically, the distance between the center of gravity and the base of the cone is $$4r - \frac{11}{4}r = \frac{5}{4}r$$.
Basically what you need to prove is that the angle of the cone $$\phi$$ is such that, when the object is tilted, the weight crosses over that last $$\frac{5}{4}r$$.
So you need to prove that the angle $$\theta$$ is smaller that angle $$\phi$$.
Angle $$\theta$$ is calculated as: $$\tan\theta = \frac{\text{distance to be covered}}{\text{cylinder radius}}= \frac{\frac{11}{4}r}{3r}=\frac{11}{12} \Rightarrow \theta =42.5 [deg]$$
while Angle $$\phi$$ is calculated as: $$\tan\phi= \frac{\text{height of cone}}{\text{cone radius}}= \frac{4r}{3r}=\frac{4}{3} \Rightarrow \phi =53.1[deg]$$
Therefore, since $$\phi> \theta$$, the object becomes vertical enough so that the weight passes through the coned surface.
• Reminds me of a crafty problem my undergrad QM prof posed. I'll adapt it to this particular object: using Heisenberg's Uncertainty Principle, how long will this object remain stable resting on the cone? It's not intended as a "real-world" problem but rather to see how you might use paired-parameters to guesstimate the stability. Mar 24, 2021 at 12:43
The CG is found by taking the mass of the cone as $$\frac{1}{3}m$$ and the cylinder as $$m$$; then
$$\bar{X}= \frac{2r\cdot m+5r\cdot \frac{m}{3}}{\frac{4}{3}m} =\frac{11}{4}r$$
The cone side length (like the sharp tip of a pencil) is $$5r$$, the side of a 3,4,5 triangle, and the interior half tip angle is 36.87 degrees.
The CG is $$5.25r$$ from the tip of the cone, and on rotation to rest on the side of the cone its projection along the slant will be at $$5.25r\cos 36.87^\circ \approx 4.2r<5r$$
I'll let you prove the condition.
It is well within the footprint of the pencil shape, so it is stable.
# Transport equation $u_t + xu_x + u = 0$ with $u(x_0, 0) = \cos(x_0)$
I have been studying PDEs using Peter Olver's textbook. I have learnt how to solve equations such as $u_t + 2u_x = \sin(x)$ subject to an initial condition such as $u(0,x) = \sin x$. Letting $\epsilon = x - 2t$ and $u(t,x) = v(t,\epsilon)$, I then plug this into the transport equation.
However, I am not sure how to define a characteristic to solve the following equation
$$u_t + xu_x + u = 0, \qquad u(x_0, 0) = \cos(x_0)$$
because it has a variable 'speed' term $x$, and it is also not homogeneous because of the term $u$.
A solution would be very helpful so I can see how to approach these problems.
• If you change coordinates to $x = e^y$ then $u_y = xu_x$, so your PDE in these new coordinates simplifies to $u_t + u_y + u = 0$. You can simplify it even further by taking $v = e^{t} u$: then $v_t + v_y = 0$, which is a constant-coefficient transport equation of the type you say you already know how to solve. – Winther May 9 '18 at 17:48
$$u_t + xu_x = -u$$ The characteristic system is $$\frac{dt}{1}=\frac{dx}{x}=\frac{du}{-u}$$
First characteristics, from $\quad \frac{dt}{1}=\frac{dx}{x}$ :
$$x\,e^{-t}=c_1$$
Second characteristics, from $\quad\frac{dx}{x}=\frac{du}{-u}$ : $$x\,u=c_2$$ General solution of the PDE : $\quad x\,u=F(x\,e^{-t})$
$$u(x,t)=\frac{1}{x}F(x\,e^{-t})$$ $F$ is an arbitrary function, to be determined according to the boundary condition.
Condition : $\quad u(x_0, 0) = \cos(x_0)=\frac{1}{x_0}F(x_0\,e^{0})$
$F(x_0)=x_0\cos(x_0)$. Now the function $F$ is determined, i.e.: $F(X)=X\cos(X)$.
We put it into the above general solution , where $X=x\,e^{-t}$ , thus $F(x\,e^{-t})=(x\,e^{-t})\cos(x\,e^{-t})$ :
$$u(x,t)=\frac{1}{x}(x\,e^{-t})\cos(x\,e^{-t})=e^{-t}\cos(x\,e^{-t})$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9838471684931717,
"lm_q1q2_score": 0.8547927520173425,
"lm_q2_score": 0.8688267643505193,
"openwebmath_perplexity": 168.1841414177281,
"openwebmath_score": 0.9334983229637146,
"tags": null,
"url": "https://math.stackexchange.com/questions/2773993/transport-equation-u-t-xu-x-u-0-with-ux-0-0-cosx-0"
} |
• Hi, thank you. Can you explain how you obtain your general solution of the PDE from the two characteristics? – PhysicsMathsLove May 11 '18 at 10:37
• The general solution can be expressed on various forms. For example on the form of implicit equation : $\Phi(c_1,c_2)=0$, or $c_1=f(c_2)$ or $c_2=F(c_1)$ or other forms. All functions introduced are arbitrary, but they are related one to the other. Doesn't matter the form chosen, the final result is the same after applying the boundary condition. – JJacquelin May 11 '18 at 10:51
• Ok I am still not sure how you obtain $xu = F(xe^{-t})$ from your characteristics, I.e. how it depends on $xe^{-t}$? – PhysicsMathsLove May 11 '18 at 10:57
• $c_2=F(c_1)$ with $c_2=xu$ and $c_1=xe^{-t}$ gives $xu=F(xe^{-t})$. – JJacquelin May 11 '18 at 11:02
• Why do you know one constant is a function of the other? – PhysicsMathsLove May 11 '18 at 11:12
Let us apply the method of characteristics. We get the characteristic equations $$\frac{\text{d} t}{\text{d} s} = 1 \, , \qquad \frac{\text{d} x}{\text{d} s} = x \, , \qquad \frac{\text{d} u}{\text{d} s} = -u \, .$$ Letting $t(0) = 0$, we know $t=s$. Letting $x(0) = x_0$, we get $x(t) = x_0\, e^t$. Since $u(0) = \cos(x_0)$, we have $u(t) = \cos(x_0)\, e^{-t}$. Finally, $$u(x,t) = \cos(x\, e^{-t})\, e^{-t} \, .$$
Note that \begin{align} \frac{\rm d}{{\rm d}t}u(e^t,t+t_0)&=\frac{\partial u}{\partial t}(e^t,t+t_0)+e^t\frac{\partial u}{\partial x}(e^t,t+t_0)\\ &=\left(\frac{\partial u}{\partial t}+x\frac{\partial u}{\partial x}\right)(e^t,t+t_0)\\ &=-u(e^t,t+t_0) \end{align} holds for all $t$ and $t_0$. Thus $$\frac{\rm d}{{\rm d}t}u(e^t,t+t_0)+u(e^t,t+t_0)=0,$$ or equivalently, $$\frac{\rm d}{{\rm d}t}\left(e^tu(e^t,t+t_0)\right)=0.$$ Therefore, $$e^tu(e^t,t+t_0)=e^{-t_0}u(e^{-t_0},-t_0+t_0)=e^{-t_0}u(e^{-t_0},0)=e^{-t_0}\cos e^{-t_0},$$ or equivalently, $$u(e^t,t+t_0)=e^{-t-t_0}\cos e^{-t_0}.$$ Finally, let $x=e^t>0$ and $\tau=t+t_0$. This gives $t=\log x$ and $t_0=\tau-\log x$. Hence $$u(x,\tau)=e^{-\log x-\tau+\log x}\cos e^{-\tau+\log x}=e^{-\tau}\cos\left(e^{-\tau}x\right).$$
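The answers agree on the closed form $u(x,t)=e^{-t}\cos(x\,e^{-t})$. As a sanity check (my own sketch, not part of the original thread), the candidate solution can be verified numerically against the PDE and the initial condition using central finite differences:

```python
import math

def u(x, t):
    # Candidate solution u(x, t) = e^{-t} * cos(x * e^{-t})
    return math.exp(-t) * math.cos(x * math.exp(-t))

def residual(x, t, h=1e-5):
    # Central-difference approximation of u_t + x*u_x + u,
    # which should vanish (up to discretisation error) if u solves the PDE
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    return u_t + x * u_x + u(x, t)

worst = max(abs(residual(x, t)) for x in (0.3, 1.0, 2.5) for t in (0.1, 0.7, 1.5))
print(worst < 1e-8)                       # PDE satisfied at the sample points
print(abs(u(1.2, 0.0) - math.cos(1.2)))   # initial condition u(x, 0) = cos(x)
```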
# count the pairs in a set of Data
I have a list of Data, e.g.,
list1={0, 1, 1, 1, 1, 2, 2, 3, 3}
list2={0, 1, 1, 1, 1, 2, 2, 3, 3, 0}
and I want code that returns True if every distinct value occurs an even number of times, and False otherwise. In the example above, it should return False for list1 and True for list2, since list1 contains only one 0, while list2 contains an even number of each value.
Thanks!
## 4 Answers
Using Counts
f = And @@ EvenQ[Values[Counts[#]]] &
{f[list1], f[list2]}
{False, True}
• +1. Or f = Counts /* AllTrue[EvenQ] – WReach Nov 1 '17 at 14:27
• Yours is much more concise. I think you should post it as a separate answer. – Anjan Kumar Nov 1 '17 at 14:30
• I did, but when I saw yours, I deleted it :) I shall undelete. – WReach Nov 1 '17 at 14:36
Given:
f = Counts /* AllTrue[EvenQ];
Then:
f[list1]
(* False *)
f[list2]
(* True *)
You can use the Tally function:
Tally[list]
tallies the elements in list, listing all distinct elements together with their multiplicities.
So you can use either
And@@EvenQ[Tally[list1][[;; , 2]]]
or
And@@EvenQ[Tally[list2][[;; , 2]]]
check[list_] := If[
Cases[EvenQ[Count[list, #] & /@ Union[list]], False] == {},
True, False]
check[list2]
True | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.966914018751051,
"lm_q1q2_score": 0.8547899790779508,
"lm_q2_score": 0.8840392863287584,
"openwebmath_perplexity": 1909.0496921774816,
"openwebmath_score": 0.2827434539794922,
"tags": null,
"url": "https://mathematica.stackexchange.com/questions/159043/count-the-pairs-in-a-set-of-data"
} |
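For comparison outside Mathematica, the same parity-of-counts check can be sketched in Python with `collections.Counter` (the function name here is my own, not from the thread):

```python
from collections import Counter

def all_counts_even(data):
    # True iff every distinct value occurs an even number of times
    return all(count % 2 == 0 for count in Counter(data).values())

list1 = [0, 1, 1, 1, 1, 2, 2, 3, 3]
list2 = [0, 1, 1, 1, 1, 2, 2, 3, 3, 0]
print(all_counts_even(list1))  # → False (only one 0)
print(all_counts_even(list2))  # → True
```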
# Sum standard deviation vs standard error
I'm having difficulty determining what exactly the difference is between the two, especially when given an exercise where I have to choose which of the two to use. This is how my textbook describes them:
Sum standard deviation
Given a population with a normally distributed random variable $X$, when you take a sample of size $n$ from this population, the sum is:
$X_{sum} = X_1 + X_2 + \cdots + X_n$ with
$\mu_{Xsum} = n \times \mu_x$ and $\sigma_{Xsum} = \sqrt{n} \times \sigma_x$.
Standard error
When you have a normally distributed random variable $X$ with mean $\mu_X$ and standard deviation $\sigma_X$ and sample length $n$, the sample mean $\bar{X}$ is normally distributed with $\mu_{\bar{x}} = \mu_X$ and $\sigma_{\bar{x}} = \dfrac{\sigma_X}{\sqrt{n}}$
These two are awfully similar to me, to the point that I can't decide at all which to use where. Here are the problems where I discovered I couldn't:
Problem 1
A filling machine fills bottles of lemonade. The amount is normally distributed with $\mu = 102 \space cl$.
$\sigma$ = $1.93\space cl$.
• Calculate the chance that out of 12 bottles the average volume is $100 \space cl$.
The problem itself is easy; however, the troublesome part is what to choose for the standard deviation of the sample. Here they use $\dfrac{1.93}{\sqrt{12}}$, which I could live with until I encountered the second problem.
Problem 2
A tea company puts 20 teabags in one package. The weight of a teabag is normally distributed with $\mu = 5.3 \space g$ and $\sigma = 0.5 \space g.$
• Calculate the chance that a package weighs less than 100 grams.
Here I thought they'd also use $\dfrac{0.5}{\sqrt{20}}$, but instead they use $\sqrt{20} \times 0.5$.
Can someone clear up the confusion? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.966914018751051,
"lm_q1q2_score": 0.854789971692782,
"lm_q2_score": 0.8840392786908831,
"openwebmath_perplexity": 447.38077730219237,
"openwebmath_score": 0.8502633571624756,
"tags": null,
"url": "https://stats.stackexchange.com/questions/48133/sum-standard-deviation-vs-standard-error"
} |
• You should tag this as "homework" as well, since it seems to be a homework question. – Placidia Jan 20 '13 at 17:33
• @Placidia Are you kidding me?! This isn't homework, this is about understanding and differentiating 2 general concepts in statistics, which then could be implemented in homework questions.. like every other mathematical concept.. – JohnPhteven Jan 20 '13 at 17:36
• My textbook confused me; (Freedman, Pisani, Purves, Statistics, Fourth Edition). The chapter is titled "The Standard Error", uses $\sqrt{n} \times \sigma_x$ but the acronym is "SE", which must be a hint that my textbook is referring to "sum standard deviation"; indeed the textbook describes the experiment: "When drawing at random with replacement from a box of numbered tickets; the standard error for the sum of the draws is..." – The Red Pea Nov 7 '19 at 7:26
• ... more from the confusing part of Statistics textbook Freedman, Pisani, Purves, Fourth Edition: "In this book, we use SD for data and SE for chance quantities (random variables). This distinction is not standard and the term SD is often used in both situations" Indeed, as in this SE question, we refer to the "SD of X" ($\sigma_{X}$) and the "SD of sum of X" ($\sigma_{X_{sum}}$) – The Red Pea Nov 7 '19 at 7:37
The sum standard deviation is, as the name suggests, the standard deviation of the sum of $n$ random variables. The standard error you're talking about is just another name for the standard deviation of the mean of $n$ random variables. As you noted, the two formulas are closely related; since the sum of $n$ random variables is $n$ times the mean of $n$ random variables, the standard deviation of the sum is also $n$ times the standard deviation of the mean:
$\sigma_{X_{sum}} = \sqrt n\sigma_X = n \times \frac{\sigma_X}{\sqrt n} = n\times \sigma_\bar{X}$.
In the first problem you are dealing with a mean, the average of twelve bottles, so you use the standard deviation of the mean, which is called standard error. In the second problem you are dealing with a sum, the total weight of 20 packages, so you use the standard deviation of the sum.
Summary: use standard error when dealing with the mean (averages); use sum standard deviation when dealing with the sum (totals).
• But what I think is that they ask about the sum of 12 bottles, and the mean of that sum? In other words, they're too similar to me.. – JohnPhteven Jan 20 '13 at 18:37
• There's no sum in question one. Each bottle is filled with an amount given by a normal distribution with mean 102, the question asks about the mean of twelve bottles. Where do you see a sum? – Jonathan Christensen Jan 20 '13 at 18:45
• Oh wait never mind, I was being a little bit blind! In the first one they ask about the MEAN (i.e. average) of a sample; in the second they transform a sample of 6 into '1' object (namely, the box of teabags), with its own SD and M! – JohnPhteven Jan 20 '13 at 18:45
• I think I was writing my response the same time you were doing yours. Nice answer. – Placidia Jan 20 '13 at 18:46
• I meant 20, my brain is random, no idea how I got to 6 – JohnPhteven Jan 20 '13 at 18:48
The first standard deviation formula you gave is the SD for a sum. The standard error is the SD of the sample mean. Remember that: $\text{Var}(aX)=a^2 \text{Var}(X)$ and the variance of the sum is the sum of the variances (First formula). So
$\text{Var}(\bar{X})=\frac{n\sigma^2}{n^2}=\sigma^2/n$. Taking the square root gives the result.
Recall:
$\text{Var}(\sum X_i)=\sum \text{Var}(X_i)=n \sigma^2,$ the variance of the sum.
Problem 1 is looking for a statement about the sample mean; Problem 2 is about the sum, since the weight of the package is the sum of the weights of individual tea bags.
• Nice answer, +1, but I gave the other one a best answer since I read it first and it answered my question first. – JohnPhteven Jan 20 '13 at 18:48
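A quick simulation (my own sketch, reusing the tea-bag numbers from Problem 2) makes the distinction concrete: the SD of the sum grows like $\sqrt{n}\,\sigma$, while the SD of the sample mean (the standard error) shrinks like $\sigma/\sqrt{n}$.

```python
import random
import statistics

random.seed(1)
mu, sigma, n, reps = 5.3, 0.5, 20, 20000

sums, means = [], []
for _ in range(reps):
    # One simulated package: n teabag weights, each N(mu, sigma^2)
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    sums.append(sum(sample))
    means.append(sum(sample) / n)

print(statistics.stdev(sums))   # close to sqrt(20)*0.5 ≈ 2.236
print(statistics.stdev(means))  # close to 0.5/sqrt(20) ≈ 0.112
```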
# Minimum number of iterations in Newton's method to find a square root
I am writing an algorithm that evaluates the square root of a positive real number $y$. To do this I am using the Newton–Raphson method to approximate the roots of $f(x)=x^2-y$. The $n^{th}$ iteration gives $$x_n=\frac{x_{n-1}^2+y}{2x_{n-1}}$$ as an approximation to $\sqrt{y}$. I found that starting with an initial guess $x_0=1$ generally works pretty well, so an answer to the question below that assumes $x_0=1$ is fine.
My question: is there an exact expression for the minimum $N$ of iterations needed to attain a given precision $p$ in the approximate solution $x_N$? In other words I'm looking for the smallest integer $N$ such that $$\left|\frac{x_N-\sqrt y}{\sqrt y}\right|<p.$$
I've thought about this for a while and played around with the expression for the errors $\epsilon_n = x_n - \sqrt y$ which can be shown to satisfy $\epsilon_{n+1}=\epsilon_n^2\,/\,2x_n$, but I can't find an answer. I've looked around on Google but I couldn't find an answer either.
Any pointers to a solution online or help would be much appreciated. A follow-up would of course be: can $x_0$ be optimised (while being a simple enough expression in terms of $y$) in order to minimise $N$? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.966914017797636,
"lm_q1q2_score": 0.8547899619877225,
"lm_q2_score": 0.8840392695254319,
"openwebmath_perplexity": 264.4876199650986,
"openwebmath_score": 0.866824746131897,
"tags": null,
"url": "https://math.stackexchange.com/questions/558145/minimum-number-of-iterations-in-newtons-method-to-find-a-square-root"
} |
• I don't have an answer to your overall question, but here's a little bit of intuition: $x_n$ is the average of $x_{n-1}$ and $y/x_{n-1}$. One of these numbers must be less than $\sqrt{y}$, and one must be greater, so the average is a reasonable update. And of course, the closer $x_0$ is to $\sqrt{y}$, the better. Something like $x_0 = \lfloor \sqrt{y}\rfloor$ is a good guess (take the biggest perfect square less than $y$, and start with its square root). – BaronVT Nov 9 '13 at 16:58
• Try analyzing $x_n^2 - y$ rather than $x_n - \sqrt{y}$. I imagine you probably want to analyze the roundoff error too, rather than assuming addition, multiplication and division are exact, which makes the problem more complicated. – Hurkyl May 2 '14 at 8:21
There is such a formula: consider
$$\frac{x_n+\sqrt y}{x_n-\sqrt y}=\frac{\frac{x_{n-1}^2+y}{2x_{n-1}}+\sqrt y}{\frac{x_{n-1}^2+y}{2x_{n-1}}-\sqrt y}=\frac{(x_{n-1}+\sqrt y)^2}{(x_{n-1}-\sqrt y)^2}=\left(\frac{x_{n-1}+\sqrt y}{x_{n-1}-\sqrt y}\right)^2.$$
By recurrence,
$$\frac{x_n+\sqrt y}{x_n-\sqrt y}=\left(\frac{x_{0}+\sqrt y}{x_{0}-\sqrt y}\right)^{2^n}.$$
If you want to achieve $2^{-b}$ relative accuracy, $x_n=(1+2^{-b})\sqrt y$,
$$2^n=\frac{\log_2\frac{(1+2^{-b})\sqrt y+\sqrt y}{(1+2^{-b})\sqrt y-\sqrt y}}{\log_2\left|\frac{x_{0}+\sqrt y}{x_{0}-\sqrt y}\right|},$$
$$n=\log_2\left(\log_2\frac{2+2^{-b}}{2^{-b}}\right)-\log_2\left(\log_2\left|\frac{x_{0}+\sqrt y}{x_{0}-\sqrt y}\right|\right).$$
The first term relates to the desired accuracy. The second is a penalty you pay for providing an inaccurate initial estimate.
If the floating-point representation of $y$ is available, a very good starting approximation is obtained by setting the mantissa to $1$ and halving the exponent (with rounding). This results in an estimate which is at worst a factor $\sqrt 2$ away from the true square root.
$$n=\log_2\left(\log_2\left(2^{b+1}+1\right)\right)-\log_2\left(\log_2\frac{\sqrt 2+1}{\sqrt 2-1}\right) \approx\log_2(b+1)-1.35.$$ In the case of single precision (23 bits mantissa), 4 iterations are always enough. For double precision (52 bits), 5 iterations.
By contrast, if $1$ is used as the starting value and $y$ is much larger, $\log_2\left|\frac{1+\sqrt y}{1-\sqrt y}\right|$ is close to $\frac{2}{\ln(2)\sqrt y}$ and the formula degenerates to $$n\approx\log_2(b+1)+\log_2(\sqrt y)-1.53.$$ Quadratic convergence is lost, as the second term is linear in the exponent of $y$.
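The iteration-count penalty for a poor start is easy to observe empirically. Here is a small sketch (my own code; the helper name is made up) that counts Newton/Heron steps until a requested relative precision is reached:

```python
import math

def newton_sqrt_iterations(y, p, x0=1.0):
    # Count Newton/Heron iterations until |x_n - sqrt(y)| / sqrt(y) < p
    x, n = x0, 0
    while abs(x - math.sqrt(y)) / math.sqrt(y) >= p:
        x = (x * x + y) / (2 * x)
        n += 1
    return n

# With x0 = 1, small y converges in a handful of steps, while large y
# pays roughly an extra log2(sqrt(y)) iterations, as derived above.
print(newton_sqrt_iterations(2.0, 1e-10))
print(newton_sqrt_iterations(1e6, 1e-10))
```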
If your starting value of $y$ is correctly "scaled" between $1$ and $100$, then a good initial guess is
$X_0 = 1.1545 + 0.11545\,y$ (linear approximation), or if you prefer a second-order approximation you can use:
$X_0 = 1.0 + y\,(0.18 - 0.0009\,y)$
Using $X_0$, you need only 3 iterations (Newton/Heron) to get a relative error less than 0.0001%!
Now, here is a fast method to get an excellent $X_0$ (with less than 1% error):
Store a pre-calculated constants table containing all the square roots, rounded to 4 decimals, in the range $[1, 100]$, so that $\mathrm{table}(i)=\sqrt{i}$.
Use simple linear interpolation to get a good $X_0$. The maximum relative error is still greater than 1% (about 1.5% near $y=1.4142$).
You can halve this maximum relative error (MRE) with this trick:
Simply replace the first entry ($\sqrt{1}=1.0000$ by $\sqrt{1}=1.0075$) AND the second entry ($\sqrt{2}=1.4142$ by $\sqrt{2}=1.4248$). That's all: you now have an MRE of 0.75%! So the WORST case is an $X_0$ with only 0.75% error over the WHOLE RANGE $[1, 100]$.
Unscale your number, and with only ONE Newton/Heron iteration you get a 0.0028% MRE in any case; with two iterations you get a 0.00000004% MRE!!
That's Fast, Simple and very Accurate. Have fun.
# Expected value of the distance between nodes in a binary tree
If there are 16 leaves in a full binary tree $T$ and two nodes $$a$$ and $$b$$ are chosen at random, then what is the expected value of the distance between $$a$$ and $$b$$ in $T$?
My question here is, how do I correctly approach this question?
(Also, any advice on how to develop numerical skill for algorithms would be welcome.)
Let's name the nodes $$\mathit{node0}$$ through $$\mathit{node15}$$. (with the implication that that's the order of the nodes)
I would approach this problem first by saying "if $$a$$ is $$\mathit{node0}$$, then what's the expected value of the distance between $$a$$ and $$b$$?" Then I'd ask the question "what if $$a$$ is $$\mathit{node1}$$?", etc. Presumably along the way there would be patterns and symmetries I could make use of so I wouldn't need to do a long calculation 16 times.
So let's take that first question: "if $$a$$ is $$\mathit{node0}$$, then what's the expected value of the distance between $$a$$ and $$b$$?"
First off, the distance between $$\mathit{node0}$$ and $$\mathit{node0}$$ is $$0$$. To $$\mathit{node1}$$, the distance is $$2$$. To nodes $$\mathit{node2}$$ or $$\mathit{node3}$$, the distance is $$4$$. To the four nodes $$\mathit{node4}$$ through $$\mathit{node7}$$, the distance is $$6$$. To the remaining eight nodes the distance is $$8$$.
So the expected distance when $$a$$ is $$\mathit{node0}$$ is $$(0 + 2 + 2*4 + 4*6 + 8*8)/16 = 6.125$$.
Now figure out the expected distance when $$a$$ is $$\mathit{node1}$$, and then other nodes. (there is an obvious pattern - prove it)
Then average over those results to find the answer. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9899864296722661,
"lm_q1q2_score": 0.8547459721471093,
"lm_q2_score": 0.8633916047011594,
"openwebmath_perplexity": 276.40181810387855,
"openwebmath_score": 0.8741745352745056,
"tags": null,
"url": "https://cs.stackexchange.com/questions/105563/expected-value-of-the-distance-between-nodes-in-a-binary-tree"
} |
Often, you will find a shorter or more elegant solution to a problem as you are working on it. This is normal, and should not be taken as a sign that your initial approach was wrong or that you should have seen the more elegant approach without trying your initial approach first, any more than a writer should expect to write an essay without first writing some notes or a rough draft.
Unfortunately, though writers are often trained in the idea of rough drafts and revisions, solutions to math or computer science problems are usually only presented in their final form without much indication of the bumbling around and blind corners that really went into discovering the elegant explanation.
• "To $\mathit{node1}$, the distance is $2$. To nodes $\mathit{node2}$ or $\mathit{node3}$, the distance is $4$." how?? Can u draw how node1 getting distance 2 and node2 and node3 both getting distance 4?? – Srestha Mar 14 at 16:23 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9899864296722661,
"lm_q1q2_score": 0.8547459721471093,
"lm_q2_score": 0.8633916047011594,
"openwebmath_perplexity": 276.40181810387855,
"openwebmath_score": 0.8741745352745056,
"tags": null,
"url": "https://cs.stackexchange.com/questions/105563/expected-value-of-the-distance-between-nodes-in-a-binary-tree"
} |
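Carrying the suggested averaging to completion, here is a sketch (my own, assuming $a$ and $b$ are drawn independently and uniformly from the 16 leaves, with distance $0$ when $a=b$, as in the answer's calculation):

```python
def leaf_distance(i, j):
    # Leaves of the full binary tree are labelled 0..15; integer-halving a
    # label moves to its ancestor one level up, so climb until the paths meet.
    dist = 0
    while i != j:
        i //= 2
        j //= 2
        dist += 2
    return dist

n = 16
expected = sum(leaf_distance(a, b) for a in range(n) for b in range(n)) / n ** 2
print(expected)  # → 6.125, matching the per-node value (by symmetry)
```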
# How do you calculate the expectation of $\left(\sum_{i=1}^n {X_i} \right)^2$?
If $X_i$ is exponentially distributed $(i=1,...,n)$ with parameter $\lambda$ and $X_i$'s are mutually independent, what is the expectation of
$$\left(\sum_{i=1}^n {X_i} \right)^2$$
in terms of $n$ and $\lambda$ and possibly other constants?
Note: This question has received a mathematical answer at http://math.stackexchange.com/q/12068/4051. Readers may want to take a look at that too.
-
The two copies of this question reference each other and, appropriately, the stats site (here) has a statistical answer and the math site has a mathematical answer. It seems like a good division: let it stand! – whuber Mar 4 '11 at 21:59
If $x_i \sim Exp(\lambda)$, then (under independence), $y = \sum x_i \sim Gamma(n, 1/\lambda)$, so $y$ is gamma distributed (see wikipedia). So, we just need $E[y^2]$. Since $Var[y] = E[y^2] - E[y]^2$, we know that $E[y^2] = Var[y] + E[y]^2$. Therefore, $E[y^2] = n/\lambda^2 + n^2/\lambda^2 = n(1+n)/\lambda^2$ (see wikipedia for the expectation and variance of the gamma distribution).
-
+1 Nice answer! – whuber Nov 27 '10 at 17:10
Thanks. A very neat way of answering the question (leading to the same answer) was also provided on math.stackexchange (link above in the question) a few minutes ago. – Wolfgang Nov 27 '10 at 17:24
The math answer computes the integrals using linearity of expectation. In some ways it's simpler. But I like your solution because it exploits statistical knowledge: because you know a sum of independent Exponential variables has a Gamma distribution, you're done. – whuber Nov 27 '10 at 21:19
I enjoyed it quite a bit and I am by no means a statistician or a mathematician. – Kortuk Nov 29 '10 at 18:04
very elegant answer. – Cyrus S Nov 30 '10 at 16:44
The answer above is very nice and completely answers the question but I will, instead, provide a general formula for the expected square of a sum and apply it to the specific example mentioned here. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9899864292291072,
"lm_q1q2_score": 0.8547459717644897,
"lm_q2_score": 0.8633916047011595,
"openwebmath_perplexity": 220.5786263429254,
"openwebmath_score": 0.9295454025268555,
"tags": null,
"url": "http://stats.stackexchange.com/questions/4959/how-do-you-calculate-the-expectation-of-left-sum-i-1n-x-i-right2"
} |
For any set of constants $a_1, ..., a_n$ it is a fact that
$$\left( \sum_{i=1}^{n} a_i \right)^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{i} a_{j}$$
this is true by the Distributive property and becomes clear when you consider what you're doing when you calculate $(a_1 + ... + a_n) \cdot (a_1 + ... + a_n)$ by hand.
Therefore, for a sample of random variables $X_1, ..., X_n$, regardless of the distributions,
$$E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) = E \left( \sum_{i=1}^{n} \sum_{j=1}^{n} X_i X_j \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} E(X_i X_j)$$
provided that these expectations exist.
In the example from the problem, $X_1, ..., X_n$ are iid ${\rm exponential}(\lambda)$ random variables, which tells us that $E(X_{i}) = 1/\lambda$ and ${\rm var}(X_i) = 1/\lambda^2$ for each $i$. By independence, for $i \neq j$, we have
$$E(X_i X_j) = E(X_i) \cdot E(X_j) = \frac{1}{\lambda^2}$$
There are $n^2 - n$ of these terms in the sum. When $i = j$, we have
$$E(X_i X_j) = E(X_{i}^{2}) = {\rm var}(X_{i}) + E(X_{i})^2 = \frac{2}{\lambda^2}$$
and there are $n$ of these terms in the sum. Therefore, using the formula above,
$$E \left( \sum_{i=1}^{n} X_i \right)^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} E(X_i X_j) = (n^2 - n)\cdot\frac{1}{\lambda^2} + n \cdot \frac{2}{\lambda^2} = \frac{n^2 + n}{\lambda^2}$$
-
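The closed form $n(n+1)/\lambda^2$ (rate parameterisation) is easy to spot-check with a Monte Carlo sketch of my own; note that Python's `random.expovariate` takes the rate $\lambda$, so each draw has mean $1/\lambda$:

```python
import random

random.seed(0)
n, lam, reps = 5, 2.0, 200_000

# Estimate E[(X_1 + ... + X_n)^2] for iid Exponential(rate lam);
# the closed form n(n + 1)/lam^2 gives 5*6/4 = 7.5 here.
est = sum(sum(random.expovariate(lam) for _ in range(n)) ** 2
          for _ in range(reps)) / reps
print(est)  # close to 7.5
```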
This problem is just a special case of the much more general problem of 'moments of moments' which are usually defined in terms of power sum notation. In particular, in power sum notation:
$$s_1 = \sum_{i=1}^{n} X_i$$
Then, irrespective of the distribution, the original poster seeks $E[s_1^2]$ (provided the moments exist). Since the expectation operator is just the first raw moment, the solution is given in the mathStatica software by:
[ The '___ToRaw' means that we want the solution presented in terms of raw moments of the population (rather than say central moments or cumulants). ]
Finally, if $X$ ~ Exponential($\lambda$) with pdf $f(x)$:
f = Exp[-x/λ]/λ; domain[f] = {x, 0, ∞} && {λ > 0};
then we can replace the moments $\mu_i$ in the general solution sol with the actual values for an Exponential random variable, like so:
All done.
P.S. The reason the other solutions posted here yield an answer with $\lambda^2$ in the denominator rather than the numerator is, of course, because they are using a different parameterisation of the Exponential distribution. Since the OP didn't state which version he was using, I decided to use the standard distribution theory textbook definition Johnson Kotz et al … just to balance things out :)
-
# Thread: Help with Probability flipping a quarter!!!!
1. ## Help with Probability flipping a quarter!!!!
So here is the problem. Maybe someone can help me. Thanks ahead of time anyone.
John and Judy play a game of flipping a quarter. If it comes up heads, John gets a point and if it comes up tails, Judy gets a point. They each bet $5 to play and the first person to get ten points wins the whole $10. At a point where John has 7 points and Judy has 5 points, the game is interrupted and they can't continue. John thinks he should win since he's ahead while Judy thinks she should win since she'd make a comeback. You are their friend, they agree to let you decide how to split up the money between them. Come up with a split that is most fair to both John and Judy.
2. Originally Posted by 2y4life
So here is the problem. Maybe someone can help me. Thanks ahead of time anyone.
John and Judy play a game of flipping a quarter. If it comes up heads, John gets a point and if it comes up tails, Judy gets a point. They each bet $5 to play and the first person to get ten points wins the whole $10. At a point where John has 7 points and Judy has 3 points, the game is interrupted and they can't continue. John thinks he should win since he's ahead while Judy thinks she should win since she'd make a comeback. You are their friend, they agree to let you decide how to split up the money between them. Come up with a split that is most fair to both John and Judy.
Calculate the probability $\displaystyle p_{John}$ that John would have won from this position.
Calculate the probability $\displaystyle p_{Judy}$ that Judy would have won from this position.
Then give John $\displaystyle \text{\$}10 \, p_{John}$ and give Judy $\displaystyle \text{\$}10 \, p_{Judy}$.
3. Originally Posted by mr fantastic
Calculate the probability $\displaystyle p_{John}$ that John would have won from this position.
Calculate the probability $\displaystyle p_{Judy}$ that Judy would have won from this position. | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464471055739,
"lm_q1q2_score": 0.8547212139942243,
"lm_q2_score": 0.8757870029950159,
"openwebmath_perplexity": 1301.3342931621264,
"openwebmath_score": 0.5551230311393738,
"tags": null,
"url": "http://mathhelpforum.com/statistics/29269-help-probability-flipping-quarter.html"
} |
Then give John $\displaystyle \text{\$}10\, p_{John}$ and give Judy $\displaystyle \text{\$}10\, p_{Judy}$.
Yea...but is there a more simple way of finding that out? I tried and there are too many possibilities. John could flip another head and then flip 5 tails in a row and Judy would win or vice versa.
4. Originally Posted by 2y4life
So here is the problem. Maybe someone can help me. Thanks ahead of time anyone.
John and Judy play a game of flipping a quarter. If it comes up heads, John gets a point and if it comes up tails, Judy gets a point. They each bet $5 to play and the first person to get ten points wins the whole $10. At a point where John has 7 points and Judy has 5 points, the game is interrupted and they can't continue. John thinks he should win since he's ahead while Judy thinks she should win since she'd make a comeback. You are their friend, they agree to let you decide how to split up the money between them. Come up with a split that is most fair to both John and Judy.
This question is historically interesting ..... read this to see why.
5. Originally Posted by 2y4life
Yea...but is there a more simple way of finding that out? I tried and there are too many possibilities. John could flip another head and then flip 5 tails in a row and Judy would win or vice versa.
Let X be the random variable number of points John wins.
Let Y be the random variable number of points Judy wins.
Then:
Pr(John wins from this position) = Pr(X = 3) times Pr(Y < 5).
Pr(Judy wins from this position) = Pr(X < 3) times Pr(Y = 5).
X and Y both follow binomial distributions.
Edit: Actually it's slightly more complicated than this I just realised but I have to rush off now. I'll have more to say later unless someone else gets in first.
6. Originally Posted by mr fantastic
This question is historically interesting ..... read this to see why.
Yea, this was an extra credit problem for us to take home and figure out and my TA said that this problem came from two mathematicians named Blaise and Pierre.
I want to put this down as the answer but I'm not sure if this would constitute a fair answer:
John: 7/12 x $10 = $5.83
Judy: 5/12 x $10 = $4.17
7. Originally Posted by mr fantastic
Let X be the random variable number of points John wins.
Let Y be the random variable number of points Judy wins.
Then:
Pr(John wins from this position) = Pr(X = 3) times Pr(Y < 5).
Pr(Judy wins from this position) = Pr(X < 3) times Pr(Y = 5).
X and Y both follow binomial distributions.
Edit: Actually it's slightly more complicated than this I just realised but I have to rush off now. I'll have more to say later unless someone else gets in first.
You only need to work out one of John winning or Judy winning (you know why, right).
Probability of John winning: You need to calculate Pr(X = 3) times Pr(Y < 5) for n = 3, 4, 5, 6 and 7.
Probability of Judy winning: You need to calculate Pr(X < 3) times Pr(Y = 5) for n = 5, 6 and 7.
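Following mr fantastic's hint, the fair split can also be checked by brute force: play out the at-most 7 remaining flips and count the outcomes in which Judy collects 5 tails before John collects 3 heads. A sketch in Python (the function name and `stake` parameter are mine, not from the thread):

```python
from itertools import product
from fractions import Fraction

def fair_split(john_needs=3, judy_needs=5, stake=10):
    # At most john_needs + judy_needs - 1 = 7 further flips decide the game.
    # Judy wins exactly when at least judy_needs of those flips are tails.
    n = john_needs + judy_needs - 1
    judy_wins = sum(1 for flips in product("HT", repeat=n)
                    if flips.count("T") >= judy_needs)
    p_judy = Fraction(judy_wins, 2 ** n)
    return (1 - p_judy) * stake, p_judy * stake

john_share, judy_share = fair_split()
print(john_share, judy_share)  # 495/64 145/64
```

This gives $p_{Judy} = 29/128$, so John should get $10 \cdot 99/128 \approx \$7.73$ and Judy $\approx \$2.27$ — noticeably different from a naive 7:5 split of the points scored so far.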
# Dimension of vector space, countable, uncountable?
In set theory, when we talk about cardinality of a set we have notions like finite set, countably infinite and uncountably infinite sets.
Main Question
Let's talk about dimension of a vector space. In Linear Algebra I have always heard about two terms either a vector space is finite dimensional for example $\mathbb{R}^n$ or infinite dimensional for example $C[0,1]$.
Why don't we have notions like a countably infinite dimensional vector space and an uncountably infinite dimensional vector space?
Maybe I am not able to see the bigger picture.
Extras
P.S. Long time ago, I was in a talk on Enumerative Algebraic geometry and the professor said, I always think of a positive natural number as a dimension of some vector space.
Don't we then have vector spaces of uncountable dimension? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464506036182,
"lm_q1q2_score": 0.8547212123117096,
"lm_q2_score": 0.8757869981319863,
"openwebmath_perplexity": 164.142783292367,
"openwebmath_score": 0.9293712973594666,
"tags": null,
"url": "https://math.stackexchange.com/questions/2915495/dimension-of-vector-space-countable-uncountable"
} |
• Some infinite-dimensional vector spaces have countable dimension, some have uncountable dimension. The dimension of a vector space is a well-defined cardinality. So, what's the question? – Lord Shark the Unknown Sep 13 '18 at 11:59
• I asked this as I have never heard of terms like countable dimension or uncountable dimension in books. @LordSharktheUnknown – StammeringMathematician Sep 13 '18 at 12:01
• An infinite-dimensional space of countable dimension cannot be complete, but such spaces exist. Consider the vector space of real sequences having finite support. – nicomezi Sep 13 '18 at 12:14
• "I have never heard of terms like countable dimension or uncountable dimension in books" --- This is because beginning and intermediate level linear algebra books rarely distinguish more precisely than "finite dimension" and "infinite dimension". It's usually only in graduate level algebra courses (e.g. probably all of the "third level" books in my answer to High-level linear algebra book) where you'll find the various notions of "algebraic dimension" defined as a cardinal number. – Dave L. Renfro Sep 13 '18 at 12:31
• Just because you don't see it in your text doesn't mean it doesn't exist. You're quite right that it makes sense! – Noah Schweber Sep 13 '18 at 12:31
We do have the notions of countable/uncountable dimensions. Just as a set can be finite or infinite (without specifying which infinite cardinality the set has), a vector space can be finite dimensional or infinite dimensional. We can then go one step further and ask, if the dimension is infinite, which infinite cardinal is it?
The definition of dimension of a vector space is the cardinality of a basis for that vector space (it does not matter which basis you take, because they all have the same cardinality). Then for any cardinal number $\gamma$, you can have a vector space with that dimension. For example, if $\Gamma$ is a set with cardinality $\gamma$, let $c_{00}(\Gamma)$ be the space of all $\mathbb{F}$-valued functions $f$ such that $$\text{supp}(f)=\{x\in \Gamma: f(x)\neq 0\}$$ is finite. Then let $\delta_x\in c_{00}(\Gamma)$ be the function such that $\delta_x(y)=0$ if $y\neq x$ and $\delta_x(y)=1$ if $y=x$. Then $(\delta_x)_{x\in \Gamma}$ is a basis for $c_{00}(\Gamma)$ with cardinality $\gamma$. If $\Gamma=\mathbb{N}$, we have a vector space with countably infinite dimension. If $\Gamma=\mathbb{R}$, we have a vector space with dimension equal to the cardinality of the continuum.
However, for infinite dimensional topological vector spaces (and for infinite dimensional Hilbert and Banach spaces in particular) the usual notion of a basis is of limited use. This is because the coordinate functionals for an infinite basis do not interact well with the topology (one can show that if $(e_i, e^*_i)_{i\in I}$ is a basis together with its coordinate functionals for an infinite dimensional Banach space, then only finitely many of the functionals $e^*_i$ can be continuous). Since the notion of a basis is not as useful in the infinite dimensional topological case as it is in the finite dimensional case, there is less emphasis on what the exact dimension is in this case.
However, in this situation you get into discussions of other types of coordinate systems (such as Schauder bases, FDDs, unconditional bases, etc.), which are different from the notion of an (algebraic) basis. You can also ask about density character instead of dimension, which is the smallest cardinality of a dense subset. This encodes topological information, while the purely algebraic notion of a basis does not. For example, the infinite dimensional Hilbert space $\ell_2$ has no countable basis, but it does have a countable dense subset. So the dimension is that of the continuum, but the density character is $\aleph_0$.
The dimension of a vector space is the cardinality of a basis for that vector space. To say that a vector space has finite dimension therefore means that the cardinality of a basis for that vector space is finite. Since finite cardinalities are the same thing as natural numbers, we are safe in saying, for finite dimensional vector spaces, that the dimension is a natural number.
In general, some sets are countably infinite and some sets are uncountably infinite. So, applying this to those sets which happen to be bases of vector spaces, some vector spaces have countably infinite bases and therefore countably infinite dimension, and other vector spaces have uncountably infinite bases and therefore uncountably infinite dimension.
An example of a vector space over $\mathbb R$ of countably infinite dimension is $\mathbb R^{\infty}$, which is the space of infinite sequences of real numbers such that all but finitely many terms in the sequence are equal to $0$. A countably infinite basis consists of $(1,0,0,0,...)$, $(0,1,0,0,...)$, $(0,0,1,0,...)$ and so on.
An example of a vector space over $\mathbb R$ of uncountably infinite dimension is the one you mention in your question, $C[0,1]$.
• That's okay. I do often write things wrong the first time. – Lee Mosher Sep 13 '18 at 12:18
• ... all (but finitely many) terms are zero. – Lee Mosher Sep 13 '18 at 12:19
• Had a hard time understanding your sentence, sorry for incovenience. – nicomezi Sep 13 '18 at 12:22
For the sake of a real world example: The electron in a hydrogen atom can take on countably many states. Each state is a basis vector for the span of possible electron states of hydrogen. These are bound states, and bound states are typically discrete. The relevant Schrödinger equation also permits scattering states, which have a continuous spectrum of possible energy states, implying an uncountable-dimensional vector space spanning the possible scattering states.
The following theorem is an example where we need to distinguish between countably/uncountably infinite dimensional vector spaces. Some important theorems, such as the Hilbert Nullstellensatz, can be deduced from it.
Let $$A$$ be an associative, not necessarily commutative, $$\mathbb{C}-$$algebra with unit. For $$a \in A$$ define $$\text{Spec } a = \{\lambda \in \mathbb{C} | a-\lambda \text{ is not invertible}\}$$ Assume that $$A$$ has no more than countable dimension over $$\mathbb{C}$$. Then
(a) If $$A$$ is a division algebra, then $$A=\mathbb{C}$$
(b) For all $$a \in A$$ we have $$\text{Spec } a \neq \emptyset$$; furthermore, $$a \in A$$ is nilpotent if and only if $$\text{Spec } a = \{0\}$$
(Adapted from Representation Theory and Complex Geometry by Chriss and Ginzburg, theorem 2.1.1.)
The proof uses the fact that for any $$a \in A$$, $$\{(a - \lambda)^{-1} | \lambda \in \mathbb{C}\}$$ is an uncountable family of elements of $$A$$. But $$A$$ has only countable dimension over $$\mathbb{C}$$, so they are not linearly independent over $$\mathbb{C}$$.
A weak version of Hilbert Nullstellensatz:
Let $$A$$ be a finitely generated commutative algebra over $$\mathbb{C}$$. Then any maximal ideal of $$A$$ is the kernel of an algebra homomorphism $$A \to \mathbb{C}$$
This follows directly from the above theorem: $$A$$ finitely generated $$\Longrightarrow$$ $$A$$ has countable dimension over $$\mathbb{C}$$ $$\Longrightarrow$$ $$A/\mathfrak{m}$$ has countable dimension over $$\mathbb{C}$$ $$\Longrightarrow$$ $$A/\mathfrak{m}=\mathbb{C}$$
(We can also deduce the strong version of Nullstellensatz from it but need more argument.)
• Welcome to the Mathematics Stack Exchange! A quick tour of the site (math.stackexchange.com/tour) will help you get the most of your time here. – dantopa Dec 28 '18 at 4:57
## anonymous one year ago HELP ME PLEASE Which of the following exponential functions goes through the points (1, 6) and (2, 12)? f(x) = 3(2)^x, f(x) = 2(3)^x, f(x) = 3(2)^(-x), f(x) = 2(3)^(-x)
1. ybarrap
Plug in x=1 and see if it equals 6 Plug in x=2 and see if it equals 12 Make sense?
2. anonymous
not really? can you guide me through it? (xD I dont want to be an answer-hogger/wanter.. I dont want the answer, just explanations :) ) @freckles
3. anonymous
Do you understand that coordinates are in the form (x,y)?
4. anonymous
yes. @BMF96
5. anonymous
f(x) = y
6. anonymous
What don't you understand?
7. jim_thompson5910
If $\Large f(x) = 7(3)^x$ (for example), then what is the value of f(x) when x = 2? In other words, what is f(2) equal to?
8. anonymous
f(2)=441?? #CONFUSED
9. jim_thompson5910
Replace each x with 2 $\Large f(x) = 7(3)^x$ $\Large f(2) = 7(3)^2$ $\Large f(2) = 7(9)$ $\Large f(2) = 63$ Do you see how I got f(2) to be equal to 63?
10. jim_thompson5910
btw you square first and then you multiply
11. anonymous
ok, I forgot that rule :) (like DUHR, Bella, get a grip)
12. jim_thompson5910
Since $$\Large f({\color{red}{2}}) = {\color{blue}{63}}$$ from my example, this means the point $$\Large ({\color{red}{x}},{\color{blue}{y}})=({\color{red}{2}},{\color{blue}{63}})$$ lies on the function curve of f(x)
13. anonymous
okayyy....
14. anonymous
$$f(x)=y= \begin{cases} 3(2)^x\\ 2(3)^x\\ 3(2)^{-x}\\ 2(3)^{-x} \end{cases}\qquad \qquad \begin{array}{llll} x&y \\\hline\\ {\color{brown}{ 1}}&3(2)^{\color{brown}{ 1}}\\ &2(3)^{\color{brown}{ 1}}\\ &3(2)^{-{\color{brown}{ 1}}}\\ &2(3)^{-{\color{brown}{ 1}}}\\ {\color{brown}{ 2}}&3(2)^{\color{brown}{ 2}}\\ &2(3)^{\color{brown}{ 2}}\\ &3(2)^{-{\color{brown}{ 2}}}\\ &2(3)^{-{\color{brown}{ 2}}} \end{array}$$
15. jim_thompson5910
so what ybarrap said at the top, you plug in each x coordinate into each function and see if you get the correct corresponding y coordinates Let's say we pick choice B at random $\Large f(x) = 2(3)^x$ The first point is (1,6). To test if this point lies on the function f(x) curve, we plug in x = 1 and see if y = 6 pops out $\Large f(x) = 2(3)^x$ $\Large f(1) = 2(3)^1$ $\Large f(1) = 2(3)$ $\Large f(1) = 6$ It does, so (1,6) is definitely on this curve. How about (2,12)? Let's check $\Large f(x) = 2(3)^x$ $\Large f(2) = 2(3)^2$ $\Large f(2) = 2(9)$ $\Large f(2) = 18$ Nope. The point (2,18) actually lies on this function curve and NOT (2,12). So we can rule out choice B.
16. jim_thompson5910
So what just happened was that I've proven that the function $$\Large f(x) = 2(3)^x$$ does NOT go through both points (1,6) and (2,12).
17. anonymous
okay, so its A, C, or D lol.... so lets try to rule out A...
18. anonymous
@jim_thompson5910
19. jim_thompson5910
what did you get so far in checking choice A?
20. anonymous
that A is, in fact NOT the answer!?
21. jim_thompson5910
if x = 1, then what is f(1) ?
22. anonymous
f
23. anonymous
Wait, so it IS A!!!!!
24. jim_thompson5910
$\Large f(x) = 3(2)^x$ $\Large f(1) = 3(2)^1$ $\Large f(1) = \underline{ \ \ \ \ \ \ \ } \text{ (fill in the blank)}$
25. anonymous
6
26. jim_thompson5910
27. anonymous
12
28. jim_thompson5910
Good. Choice A is definitely the answer. As practice, why not go through C and D and eliminate them. With choice C, if x = 1, then what is f(x) equal to? | {
"domain": "openstudy.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464474553783,
"lm_q1q2_score": 0.8547212079725032,
"lm_q2_score": 0.8757869965109764,
"openwebmath_perplexity": 1359.5582736869017,
"openwebmath_score": 0.6945479512214661,
"tags": null,
"url": "http://openstudy.com/updates/55a2f823e4b05670bbb5452a"
} |
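jim_thompson5910's plug-in procedure is easy to automate; a quick sketch that tests all four candidate functions against both points:

```python
points = [(1, 6), (2, 12)]
candidates = {
    "A": lambda x: 3 * 2**x,
    "B": lambda x: 2 * 3**x,
    "C": lambda x: 3 * 2**(-x),
    "D": lambda x: 2 * 3**(-x),
}
# A function qualifies only if it reproduces y for every given (x, y) pair.
matches = [name for name, f in candidates.items()
           if all(f(x) == y for x, y in points)]
print(matches)  # ['A']
```

Choice B fails at the second point (f(2) = 18, not 12), and the negative-exponent choices already fail at x = 1, leaving only choice A.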
# Simulating random variables from a discrete distribution
I have the following discrete distribution where $p$ is a known constant:
$p(x,p)= \frac{(1-p)^3}{p(1+p)}x^2p^x, \quad 0<p<1, \quad x=0, 1, 2, \ldots$
How can I sample from this distribution?
– Tim
May 23, 2016 at 15:03
• Does this even sum to 1 for some value of $p$? (I doubt so) May 23, 2016 at 15:19
• @Elvis It does sum to 1. May 23, 2016 at 15:41
• @MatthewGunn My bad. May 24, 2016 at 6:43
This answer develops a simple procedure to generate values from this distribution. It illustrates the procedure, analyzes its scope of application (that is, for which $p$ it might be considered a practical method), and provides executable code.
### The Idea
Because
$$x^2 = 2\binom{x}{2} + \binom{x}{1},$$
consider the distributions $f_{p;m}$ given by
$$f_{p;m}(x) \propto \binom{x}{m-1}p^x$$
for $m=3$ and $m=2$.
A recent thread on inverse sampling demonstrates that these distributions count the number of observations of independent Bernoulli$(1-p)$ variables needed before first seeing $m$ successes, with $x+1$ equal to that number. It also shows that the normalizing constant is
$$C(p;m)=\sum_{x=m-1}^\infty \binom{x}{m-1}p^x = \frac{p^{m-1}}{(1-p)^m}.$$
Consider the probabilities in the question,
$$x^2 p^x = \left( 2\binom{x}{2} + \binom{x}{1} \right)p^x = 2 \binom{x}{2}p^x + \binom{x}{1} p^x =2 C(p;3) f_{p;3}(x) + C(p;2) f_{p;2}(x).$$
Consequently, the given distribution is a mixture of $f_{p;3}$ and $f_{p;2}$. The proportions are as $$2C(p;3):C(p;2) = 2p:(1-p).$$ It is simple to sample from a mixture: generate an independent uniform variate $u$ and draw $x$ from $f_{p;2}$ when $u \lt (1-p)/(2p+1-p)$; that is, when $u(1+p) \lt 1-p$, and otherwise draw $x$ from $f_{p;3}$.
(It is evident that this method generalizes: many probability distributions where the chance of $x$ is of the form $P(x)p^x$ for a polynomial $P$, such as $P(x)=x^2$ here, can be expressed as a mixture of these inverse-sampling distributions.) | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464443071381,
"lm_q1q2_score": 0.8547212052153154,
"lm_q2_score": 0.8757869965109765,
"openwebmath_perplexity": 671.7193666075293,
"openwebmath_score": 0.7729891538619995,
"tags": null,
"url": "https://stats.stackexchange.com/questions/214127/simulating-random-variables-from-a-discrete-distribution"
} |
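As a quick numeric answer to the comment thread's question of whether the pmf sums to 1: the constant $(1-p)^3/(p(1+p))$ is exactly the reciprocal of $\sum_{x\ge 0} x^2 p^x$, which a truncated sum confirms. A small sketch:

```python
p = 0.4  # any value in (0, 1) works
c = (1 - p)**3 / (p * (1 + p))
# Truncate the series deep into the geometric tail; terms decay like p^x.
total = sum(c * x**2 * p**x for x in range(1000))
print(round(total, 9))  # 1.0
```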
### The Algorithm
These considerations lead to the following simple algorithm to generate one realization of the desired distribution:
Let U ~ Uniform(0,1+p)
If (U < 1-p) then m = 2 else m = 3
x = 0
While (m > 0) {
x = x + 1
Let Z ~ Bernoulli(1-p)
m = m - Z
}
Return x-1
These histograms show simulations (based on 100,000 iterations) and the true distribution for a range of values of $p$.
### Analysis
How efficient is this? The expectation of $x+1$ under the distribution $f_{p;m}$ is readily computed; it equals $m/(1-p)$. Therefore the expected number of trials (that is, values of Z to generate in the algorithm) is
$$\left((1-p) \frac{2}{1-p} + (2p) \frac{3}{1-p}\right) / (1-p+2p) = 2 \frac{1+2p}{1-p^2}.$$
Add one more for generating U. The total is close to $3$ for small values of $p$. As $p$ approaches $1$, this count asymptotically is
$$1 + 2\frac{1 + 2p}{(1-p)(1+p)} \approx \frac{3}{1-p}.$$
This shows us that the algorithm will, on the average, be reasonably quick for $p \lt 2/3$ (taking up to ten easy steps) and not too bad for $p \lt 0.97$ (taking under a hundred steps).
### Code
Here is the R code used to implement the algorithm and produce the figures. A $\chi^2$ test will show that the simulated results do not differ significantly from the expected frequencies.
sample <- function(p) {
m <- ifelse(runif(1, max=1+p) < 1-p, 2, 3)
x <- 0
while (m > 0) {
x <- x + 1
m <- m - (runif(1) > p)
}
return(x-1)
}
n <- 1e5
set.seed(17)
par(mfcol=c(2,3))
for (p in c(1/5, 1/2, 9/10)) {
# Simulate and summarize.
x <- replicate(n, sample(p))
y <- table(x)
# Compute the true distribution for comparison.
k <- as.numeric(names(y))
theta <- sapply(k, function(i) i^2 * p^i) * (1-p)^3 / (p^2 + p)
names(theta) <- names(y)
# Plot both.
barplot(y/n, main=paste("Simulation for", format(p, digits=2)),
border="#00000010")
barplot(theta, main=paste("Distribution for", format(p, digits=2)),
border="#00000010")
}
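For readers not using R, the algorithm above ports directly; the following is a hypothetical Python translation (function name mine, not from the original answer), checked against a mean that can be computed from the pmf, $E[X] = (1+4p+p^2)/((1+p)(1-p))$:

```python
import random

def sample_x(p):
    # Mixture step: f_{p;2} with probability (1-p)/(1+p), else f_{p;3}.
    m = 2 if random.random() * (1 + p) < 1 - p else 3
    x = 0
    while m > 0:                  # count Bernoulli(1-p) trials until m successes
        x += 1
        if random.random() > p:   # success with probability 1 - p
            m -= 1
    return x - 1

random.seed(17)
p = 0.5
mean = sum(sample_x(p) for _ in range(100_000)) / 100_000
print(round(mean, 2))  # should land near 13/3 ≈ 4.33 for p = 0.5
```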
@dsaxton's approach is known as inverse transform sampling and is probably the way to go for a problem like this. To be a bit more explicit, the approach is:
1. Draw $u$ from uniform distribution on (0,1).
2. Compute $x = F^{-1}(u)$ where $F^{-1}$ is the inverse of the cumulative distribution function.
Computing $x = F^{-1}(u)$ is equivalent to finding the integer $x$ that is the solution to: $$\text{minimize} \quad x \quad \text{subject to} \quad \sum_{j=0}^x \frac{(1 - p)^3}{p(1+p)} j^2p^j \geq u$$
Quick pseudo code to do this numerically:
1. Construct a vector $\boldsymbol{m}$ such that $m_j = \frac{(1 - p)^3}{p(1+p)} j^2p^j$.
2. Create a vector $\boldsymbol{c}$ such that $c_j = \sum_{k=0}^j m_k$.
3. Find the minimum index $x$ such that $c_x \geq u$.
Draw $u$ from a uniform$(0, 1)$ distribution and let $x$ be the smallest value of $k$ for which $\sum_{j=0}^{k} \frac{(1 - p)^3}{p (1 + p)} j^2 p^j > u$. Then $x$ will be a realization from the desired distribution.
• May be clearer to write $x = \min k \text{ subject to } \sum_j^k \ldots > u$. As written, it's not terribly clear what is being minimized and that $x$ equals the optimal $k$. May 23, 2016 at 16:10
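A sketch of this inverse-transform recipe (the truncation tolerance and function names are assumptions, not from the thread): precompute the CDF out to where the tail mass is negligible, then binary-search it for each draw.

```python
import bisect
import random

def make_sampler(p, tol=1e-12):
    c = (1 - p)**3 / (p * (1 + p))
    cdf, total, x = [], 0.0, 0
    while total < 1 - tol:        # accumulate F(x) until the tail is tiny
        total += c * x**2 * p**x
        cdf.append(total)
        x += 1
    # bisect_left returns the smallest x with F(x) >= u, i.e. F^{-1}(u).
    return lambda: bisect.bisect_left(cdf, random.random())

draw = make_sampler(0.5)
random.seed(3)
samples = [draw() for _ in range(20_000)]
print(sum(s <= 2 for s in samples) / len(samples))  # ≈ P(X <= 2) = 0.25 for p = 0.5
```

Unlike the mixture method, this pays a one-time setup cost but then each draw is a single uniform variate plus a binary search.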
# Notation for cartesian product except one set?
Let's say I have a list of sets $S_i$, for $i=1,\ldots,n$. We often write the cartesian product of all these sets, with the exception of $S_k$ as:
$$S=S_1\times\cdots\times S_{k-1}\times S_{k+1}\times\cdots\times S_n$$
Is there a more succinct way to write it?
In Wikipedia (https://en.wikipedia.org/wiki/Cartesian_product), I found something, which might be what you are looking for: $\prod_{n=1}^k \Bbb{R} = \Bbb{R}\times \Bbb{R} \times\cdots\times \Bbb{R} = \Bbb{R}^k$. So maybe something like this one is also valid: $$\prod_{\scriptstyle i = 1\atop\scriptstyle i \ne k}^nS_i$$
where $S_i$ is the $i^\text{th}$ set of the list you mentioned.
• Possibly \prod_{i = 1\atop i \ne k}^n would be better? (with \atop separating lines instead of a comma) – CiaPan Dec 14 '17 at 11:15
• Yes, you are right. Thank you for the formatting :) – ArsenBerk Dec 14 '17 at 11:16
• I changed $\Bbb{R}$ x $\Bbb{R}$ x ... x $\Bbb{R}$ to $\Bbb{R} \times \Bbb{R} \times \cdots \times \Bbb{R}.$ That is proper MathJax usage. $\qquad$ – Michael Hardy Dec 14 '17 at 13:13
• In addition to the atop business, another way to notate the indices would be with $\in$. I could see that being especially convenient if these indices are already in a set or are repeated throughout the work. If $E=\{1,\cdots,n\}\setminus\{k\}$ then you can have $$\prod_{i\in E}S_i$$ – gen-ℤ ready to perish Dec 14 '17 at 13:42
• Another formatting comment: if you look carefully, your subscripts on the product symbol are in a smaller font than the superscript. IMHO it's better to avoid this by using \prod_{\scriptstyle i = 1\atop\scriptstyle i \ne k}^n: compare$$\prod_{i = 1\atop i \ne k}^nS_i\quad\hbox{and}\quad \prod_{\scriptstyle i = 1\atop\scriptstyle i \ne k}^nS_i$$ – David Dec 15 '17 at 1:36 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464492044004,
"lm_q1q2_score": 0.8547212000121607,
"lm_q2_score": 0.8757869867849167,
"openwebmath_perplexity": 314.3137492617829,
"openwebmath_score": 0.8885844945907593,
"tags": null,
"url": "https://math.stackexchange.com/questions/2566206/notation-for-cartesian-product-except-one-set/2566431"
} |
I have seen a notation for this kind of construction during some of my math lectures (but can't find a reference right now). This was mostly in the context of differential forms (e.g. interior product with a vector), but it can be applied to your case: $$S_1\times \dotsm \times \widehat{S_k} \times\dotsm \times S_n := S_1\times \dotsm \times S_{k-1}\times S_{k+1} \times \dotsm \times S_n$$ The hat denotes the factor to be omitted. Note that this is not a universally standard notation, so even the professors that used it defined it at some point early in the lecture.
• This is what I would use, along with a parenthetical remark along the lines of "where the hat denotes the factor to be omitted." I very seldom see anything as formal as in the other answers. – Matthew Leingang Dec 14 '17 at 15:00
• Hatcher uses a notation similar to this in his text Algebraic Topology (see page 105 for an example). – Xander Henderson Dec 17 '17 at 2:52
In general, we can write
$$S_1 \times \dots \times S_n := \prod_{i=1}^n S_i$$
and then we can apply all conventions we are used to.
As for your question, this can be written as:
$$\prod_{i = 1 \atop i \neq k}^n S_i$$
Although it might not be common in set theory, it is common for game theorists to write $S_{-i}$ for $S_1 \times \cdots \times S_{i-1} \times S_{i+1} \times \cdots \times S_n$. See page 15 of chapter one of Osborne and Rubinstein's text on game theory, for example.
That notation is useful in game theory because, if $S_j$ represents the set of strategies available to player $j$, then one often needs to describe how all players except player $i$ have acted. Such a description will be a member of $S_{-i}$. In particular, the notation becomes useful in defining a Nash equilibrium. See https://en.wikipedia.org/wiki/Nash_equilibrium.
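The $\prod_{i \ne k}$ convention translates directly into code; a minimal sketch (the function name is my own, and $k$ is 0-based here):

```python
from itertools import product

def product_except(sets, k):
    # Cartesian product of sets[0], ..., sets[n-1] with the k-th factor omitted.
    return list(product(*(s for i, s in enumerate(sets) if i != k)))

S = [[1, 2], ["a", "b"], [True, False]]
print(product_except(S, 1))  # tuples drawn from S_1 x S_3, skipping S_2
```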
In mathematics, the Euclidean distance between two points in Euclidean space is the length of a line segment between the two points. The Pythagorean theorem can be used to calculate the distance between two points, and the distance between two vectors $\vec{v}$ and $\vec{w}$ is the length of the difference vector $\vec{v} - \vec{w}$. There are many different distance functions that you will encounter in the world; a generalized term for the Euclidean norm is the L2 norm or L2 distance. As a worked example, determine the Euclidean distance between $\vec{u} = (2, 3, 4, 2)$ and $\vec{v} = (1, -2, 1, 3)$. Applying the distance formula: \begin{align} d(\vec{u}, \vec{v}) = \| \vec{u} - \vec{v} \| &= \sqrt{(2-1)^2 + (3+2)^2 + (4-1)^2 + (2-3)^2} \\ &= \sqrt{1 + 25 + 9 + 1} = \sqrt{36} = 6 \end{align} The triangle inequality follows from the corresponding property of the norm: \begin{align} d(\vec{u}, \vec{v}) &= \| \vec{u} - \vec{v} \| \\ &= \| (\vec{u} - \vec{w}) + (\vec{w} - \vec{v}) \| \\ &\leq \| \vec{u} - \vec{w} \| + \| \vec{w} - \vec{v} \| \\ &= d(\vec{u}, \vec{w}) + d(\vec{w}, \vec{v}) \quad \blacksquare \end{align} The Euclidean distance matrix is the matrix that contains the Euclidean distance between each pair of points across both matrices. | {
"domain": "radiojuventudnerja.es",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464513032269,
"lm_q1q2_score": 0.8547211955222094,
"lm_q2_score": 0.8757869803008764,
"openwebmath_perplexity": 866.690683796926,
"openwebmath_score": 0.9609122276306152,
"tags": null,
"url": "http://radiojuventudnerja.es/f3y8d/euclidean-distance-between-two-vectors-32de3b"
} |
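The worked example above can be verified in a few lines of plain Python (the vectors $\vec{u} = (2,3,4,2)$ and $\vec{v} = (1,-2,1,3)$ are read off from the component differences; NumPy's `np.linalg.norm(u - v)` on arrays would give the same result):

```python
import math

u = [2, 3, 4, 2]
v = [1, -2, 1, 3]
d = math.sqrt(sum((a - b)**2 for a, b in zip(u, v)))
print(d)  # 6.0
```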
Terms of Service - what you can, what you should not etc. The Euclidean distance between two points in either the plane or 3-dimensional space measures the length of a segment connecting the two In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" straight-line distance between two points in Euclidean space. Otherwise, columns that have large values will dominate the distance measure. pdist2 is an alias for distmat, while pdist(X) is … You are most likely to use Euclidean distance when calculating the distance between two rows of data that have numerical values, such a floating point or integer values. If you want to discuss contents of this page - this is the easiest way to do it. If columns have values with differing scales, it is common to normalize or standardize the numerical values across all columns prior to calculating the Euclidean distance. To calculate the Euclidean distance between two vectors in Python, we can use the numpy.linalg.norm function: Applying the formula given above we get that: (2) \begin {align} d (\vec {u}, \vec {v}) = \| \vec {u} - \vec {v} \| = \sqrt { (2-1)^2 + (3+2)^2 + (4-1)^2 + (2-3)^2} \\ d (\vec {u}, \vec {v}) = \| \vec {u} - \vec {v} \| = \sqrt {1 + 25 + 9 + 1} \\ d (\vec {u}, \vec {v}) = \| \vec {u} - \vec {v} \| = \sqrt {36} \\ d (\vec {u}, \vec {v}) = \| \vec {u} - \vec {v} \| = 6 … Find out what you can do. 1 Suppose that d is very large. ml-distance-euclidean. Check out how this page has evolved in the past. By using this metric, you can get a sense of how similar two documents or words are. Computes the Euclidean distance between a pair of numeric vectors. View wiki source for this page without editing. A generalized term for the Euclidean norm is the L2 norm or L2 distance. Recall that the squared Euclidean distance between any two vectors a and b is simply the sum of the square component-wise differences. View and manage file attachments for this page. 
With this distance, Euclidean space becomes | {
"domain": "radiojuventudnerja.es",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464513032269,
"lm_q1q2_score": 0.8547211955222094,
"lm_q2_score": 0.8757869803008764,
"openwebmath_perplexity": 866.690683796926,
"openwebmath_score": 0.9609122276306152,
"tags": null,
"url": "http://radiojuventudnerja.es/f3y8d/euclidean-distance-between-two-vectors-32de3b"
} |
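The worked example above, with $\vec{u} = (2, 3, 4, 2)$ and $\vec{v} = (1, -2, 1, 3)$, is easy to check numerically. A minimal sketch using NumPy, taking the norm of the difference vector:

```python
import numpy as np

# The two vectors from the worked example above.
u = np.array([2, 3, 4, 2])
v = np.array([1, -2, 1, 3])

# Euclidean distance is the L2-norm of the difference vector:
# sqrt(1^2 + 5^2 + 3^2 + (-1)^2) = sqrt(36) = 6.
d = np.linalg.norm(u - v)
print(d)  # 6.0
```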
With this distance, Euclidean space becomes a metric space. Older literature refers to the metric as the Pythagorean metric. $d(\vec{u}, \vec{v}) = \| \vec{u} - \vec{v} \| = \sqrt{(u_1 - v_1)^2 + (u_2 - v_2)^2 + \cdots + (u_n - v_n)^2}$, $d(\vec{u}, \vec{v}) = d(\vec{v}, \vec{u})$, $d(\vec{u}, \vec{v}) = \| \vec{u} - \vec{v} \| = \sqrt{(u_1 - v_1)^2 + (u_2 - v_2)^2 + \cdots + (u_n - v_n)^2}$, $d(\vec{v}, \vec{u}) = \| \vec{v} - \vec{u} \| = \sqrt{(v_1 - u_1)^2 + (v_2 - u_2)^2 + \cdots + (v_n - u_n)^2}$, $(u_i - v_i)^2 = u_i^2 - 2u_iv_i + v_i^2 = v_i^2 - 2v_iu_i + u_i^2 = (v_i - u_i)^2$, $\vec{u}, \vec{v}, \vec{w} \in \mathbb{R}^n$, $d(\vec{u}, \vec{v}) \leq d(\vec{u}, \vec{w}) + d(\vec{w}, \vec{v})$, Creative Commons Attribution-ShareAlike 3.0 License. The primary takeaway here is that the Euclidean distance is basically the length of the straight line that connects two vectors. 3.8 Digression on Length and Distance in Vector Spaces. Using our above cluster example, we’re going to calculate the adjusted distance between a … Euclidean metric is the “ordinary” straight-line distance between two points. Euclidean distance between two vectors, or between column vectors of two matrices. , $y_d]$ is $\sqrt{\sum_{i=1}^{d} (x_i - y_i)^2}$. Here, each $x_i$ and $y_i$ is a random variable chosen uniformly in the range 0 to 1. The corresponding loss function is the squared error loss (SEL), which places progressively greater weight on larger errors. The shortest path distance is a straight line. And now we can take the norm. Computes the Euclidean distance between a pair of numeric vectors. The answers/resolutions are collected from stackoverflow, and are licensed under the Creative Commons Attribution-ShareAlike license. We will now look at some properties of the distance between points in $\mathbb{R}^n$. It corresponds to the L2-norm of the difference between the two vectors. The Euclidean | {
"domain": "radiojuventudnerja.es",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464513032269,
"lm_q1q2_score": 0.8547211955222094,
"lm_q2_score": 0.8757869803008764,
"openwebmath_perplexity": 866.690683796926,
"openwebmath_score": 0.9609122276306152,
"tags": null,
"url": "http://radiojuventudnerja.es/f3y8d/euclidean-distance-between-two-vectors-32de3b"
} |
It corresponds to the L2-norm of the difference between the two vectors. The Euclidean distance between 1-D arrays u and v, is defined as maximum: Maximum distance between two components of x and y (supremum norm) manhattan: Absolute distance between the two vectors (1 … I have the two image values G = [1x72] and G1 = [1x72]. And this is the square root of 14. A little confusing if you're new to this idea, but it … D = √((X2 - X1)^2 + (Y2 - Y1)^2), where D is the distance. We determine the distance between the two vectors. If p = (p1, p2) and q = (q1, q2) then the distance is given by. u of the two vectors. Solution to example 1: v . is: Deriving the Euclidean distance between two data points involves computing the square root of the sum of the squares of the differences between corresponding values. ... Percentile. u, is v . Let’s assume OA, OB and OC are three vectors as illustrated in the figure 1. It can be computed as: A vector space where Euclidean distances can be measured, such as , , , is called a Euclidean vector space. The squared Euclidean distance is therefore d(x SquaredEuclideanDistance is equivalent to the squared Norm of a difference: The square root of SquaredEuclideanDistance is EuclideanDistance : Variance as a SquaredEuclideanDistance from the Mean : Euclidean distance, Euclidean distance. So there is a bias towards the integer element. The following formula is used to calculate the euclidean distance between points. Two squared, plus three squared, plus negative one squared. $\vec {u} = (2, 3, 4, 2)$. I need to calculate the two image distance value. The distance between two points is the length of the path connecting them. Source: R/L2_Distance.R Quickly calculates and returns the Euclidean distances between m vectors in one set and n vectors in another. | {
"domain": "radiojuventudnerja.es",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464513032269,
"lm_q1q2_score": 0.8547211955222094,
"lm_q2_score": 0.8757869803008764,
"openwebmath_perplexity": 866.690683796926,
"openwebmath_score": 0.9609122276306152,
"tags": null,
"url": "http://radiojuventudnerja.es/f3y8d/euclidean-distance-between-two-vectors-32de3b"
} |
and returns the Euclidean distances between m vectors in one set and n vectors in another. $\| \mathbf{a} \| = \sqrt{a_1^2 + a_2^2 + a_3^2}$, which is a consequence of the Pythagorean theorem since the basis vectors e1, e2, e3 are orthogonal unit vectors. Most vector spaces in machine learning belong to this category. Accepted Answer: Jan Euclidean distance of two vectors. By using this formula as distance, Euclidean space becomes a metric space. $\vec {v} = (1, -2, 1, 3)$. Each set of vectors is given as the columns of a matrix. Installation: npm install ml-distance-euclidean. Euclidean distance. $\begingroup$ Even in infinitely many dimensions, any two vectors determine a subspace of dimension at most $2$: therefore the (Euclidean) relationships that hold in two dimensions among pairs of vectors hold entirely without any change at all in any number of higher dimensions, too. Squared Euclidean Distance, Let $x, y \in \mathbb{R}^n$. This system utilizes Locality sensitive hashing (LSH) [50] for efficient visual feature matching. $\endgroup$ – whuber ♦ Oct 2 '13 at 15:23 The formula for this distance between a point X (X1, X2, etc.) In simple terms, Euclidean distance is the shortest between the 2 points irrespective of the dimensions. Y = cdist(XA, XB, 'sqeuclidean') The Euclidean distance between two vectors, A and B, is calculated as: Euclidean distance = $\sqrt{\sum_i (A_i - B_i)^2}$. You want to find the Euclidean distance between two vectors. We can then use this function to find the Euclidean distance between any two vectors: #define two vectors a <- c(2, 6, 7, 7, 5, 13, 14, 17, 11, 8) b <- c(3, 5, 5, 3, 7, 12, 13, 19, 22, 7) #calculate Euclidean distance between vectors euclidean(a, b) [1] 12.40967 The Euclidean distance between the two vectors turns out to be 12.40967. Computing the Distance Between Two Vectors Problem. 
Find the Distance Between Two Vectors if the Lengths and the Dot , Let a and b be n-dimensional vectors with length 1 and the inner | {
"domain": "radiojuventudnerja.es",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464513032269,
"lm_q1q2_score": 0.8547211955222094,
"lm_q2_score": 0.8757869803008764,
"openwebmath_perplexity": 866.690683796926,
"openwebmath_score": 0.9609122276306152,
"tags": null,
"url": "http://radiojuventudnerja.es/f3y8d/euclidean-distance-between-two-vectors-32de3b"
} |
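The R snippet quoted above reports a distance of 12.40967 for the two example vectors. The same calculation in Python, a sketch using NumPy instead of the page's `euclidean` helper:

```python
import numpy as np

a = np.array([2, 6, 7, 7, 5, 13, 14, 17, 11, 8])
b = np.array([3, 5, 5, 3, 7, 12, 13, 19, 22, 7])

# np.linalg.norm(a - b) is equivalent to np.sqrt(np.sum((a - b) ** 2)).
d = np.linalg.norm(a - b)
print(round(d, 5))  # 12.40967
```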
if the Lengths and the Dot , Let a and b be n-dimensional vectors with length 1 and the inner product of a and b is -1/2. — Page 135, D… $w_1 = [\,1+i,\ 1-i,\ 0\,]$, $w_2 = [\,-i,\ 0,\ 2-i\,]$, $w_3 = [\,2+i,\ 1-3i,\ 2i\,]$. In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" distance between two (geometry) The distance between two points defined as the square root of the sum of the squares of the differences between the corresponding coordinates of the points; for example, in two-dimensional Euclidean geometry, the Euclidean distance between two points a = (a_x, a_y) and b = (b_x, b_y) is defined as: What does euclidean distance mean?, In the spatial power covariance structure, unequal spacing is measured by the Euclidean distance $d_{jj'}$, defined as the absolute difference between two In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" distance between two points that one would measure with a ruler, and is given by the Pythagorean formula. So the norm of the vector $(2, 3, -1)$ is just the square root of 14. v · u = v1 u1 + v2 u2. NOTE that the result of the dot product is a scalar. It is the most obvious way of representing distance between two points. We here use "Euclidean Distance", in which we have the Pythagorean theorem. The result is a positive distance value. (We are skipping the last step, taking the square root, just to make the examples easy.) $\| a \|^2 = a_1^2 + a_2^2 + a_3^2$. Euclidean distance. We will derive some special properties of distance in Euclidean n-space. The associated norm is called the Euclidean norm. The average distance between a pair of points is 1/3. First, determine the coordinates of point 1. And to get the Euclidean distance, you have to calculate the norm of the difference between the vectors that you are comparing. Definition of normalized Euclidean distance, According to Wolfram Alpha, and the following answer from cross validated, the | {
"domain": "radiojuventudnerja.es",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464513032269,
"lm_q1q2_score": 0.8547211955222094,
"lm_q2_score": 0.8757869803008764,
"openwebmath_perplexity": 866.690683796926,
"openwebmath_score": 0.9609122276306152,
"tags": null,
"url": "http://radiojuventudnerja.es/f3y8d/euclidean-distance-between-two-vectors-32de3b"
} |
Euclidean distance, According to Wolfram Alpha, and the following answer from cross validated, the normalized Euclidean distance is defined by: In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" straight-line distance between two points in Euclidean space. This is helpful variables, the normalized Euclidean distance would be 31.627. So this is the distance between these two vectors. Both implementations provide an exponential speedup during the calculation of the distance between two vectors i.e. These names come from the ancient Greek mathematicians Euclid and Pythagoras, although Euclid did not represent distances as numbers, and the connection from the Pythagorean theorem to distance calculation was not made until later. Y1 and Y2 are the y-coordinates. Discussion. Okay, then we need to compute the cosine of the angle that these two vectors form. Given some vectors $\vec{u}, \vec{v} \in \mathbb{R}^n$, we denote the distance between those two points in the following manner. Euclidean Distance Formula. <4 , 6>. sample 20 1 0 0 0 1 0 1 0 1 0 0 1 0 0 The squared Euclidean distance sums the squared differences between these two vectors: if there is an agreement (there are two matches in this example) there is zero sum of squared differences, but if there is a discrepancy there are two differences, +1 and –1, which give a sum of squares of 2. X1 and X2 are the x-coordinates. The Euclidean distance between two points in either the plane or 3-dimensional space measures the length of a segment connecting the two points. With this distance, Euclidean space becomes a metric space. | {
"domain": "radiojuventudnerja.es",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464513032269,
"lm_q1q2_score": 0.8547211955222094,
"lm_q2_score": 0.8757869803008764,
"openwebmath_perplexity": 866.690683796926,
"openwebmath_score": 0.9609122276306152,
"tags": null,
"url": "http://radiojuventudnerja.es/f3y8d/euclidean-distance-between-two-vectors-32de3b"
} |
With this distance, Euclidean space becomes a metric space. As such, it is also known as the Euclidean norm, as it is calculated as the Euclidean distance from the origin. Euclidean Distance. For three dimensions, the formula is: Determine the Euclidean distance between $\vec{u} = (2, 3, 4, 2)$ and $\vec{v} = (1, -2, 1, 3)$. This process is used to normalize the features. Now I would like to compute the euclidean distance between x and y. I think the integer element is a problem because all other elements can get very close but the integer element always has spacings of ones. Brief review of Euclidean distance. Euclidean distance, Euclidean distances, which coincide with our most basic physical idea of squared distance between two vectors x = [x1 x2] and y = [y1 y2] is the sum of The Euclidean distance function measures the "as-the-crow-flies" distance. In this article, to find the Euclidean distance we will use the NumPy library. Basic Examples (2) Euclidean distance between two vectors: Euclidean distance between numeric vectors: If not passed, it is automatically computed. Euclidean distance Suppose w 4 is […] Construction of a Symmetric Matrix whose Inverse Matrix is Itself Let v be a nonzero vector in R n . The associated norm is called the Euclidean norm. Usage EuclideanDistance(x, y) Arguments x. Numeric vector containing the first time series. Glossary, Freebase (1.00 / 1 vote) Rate this definition: Euclidean distance. Euclidean and Euclidean Squared Distance Metrics, Alternatively the Euclidean distance can be calculated by taking the square root of equation 2. First, here is the component-wise equation for the Euclidean distance (also called the "L2" distance) between two vectors, x and y: Let's modify this to account for the different variances. The standardized Euclidean distance between two | {
"domain": "radiojuventudnerja.es",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464513032269,
"lm_q1q2_score": 0.8547211955222094,
"lm_q2_score": 0.8757869803008764,
"openwebmath_perplexity": 866.690683796926,
"openwebmath_score": 0.9609122276306152,
"tags": null,
"url": "http://radiojuventudnerja.es/f3y8d/euclidean-distance-between-two-vectors-32de3b"
} |
modify this to account for the different variances. The standardized Euclidean distance between two n-vectors u and v is $\sqrt{\sum {(u_i-v_i)^2 / V[x_i]}}.$ V is the variance vector; V[i] is the variance computed over all the i’th components of the points. Let’s discuss a few ways to find Euclidean distance by NumPy library. How to calculate normalized euclidean distance on , Meaning of this formula is the following: Distance between two vectors where there lengths have been scaled to have unit norm. It can be calculated from the Cartesian coordinates of the points using the Pythagorean theorem, therefore occasionally being called the Pythagorean distance. u = < v1 , v2 > . See pages that link to and include this page. and. This library used for manipulating multidimensional array in a very efficient way. Available distance measures are (written for two vectors x and y): euclidean: Usual distance between the two vectors (2 norm aka L_2), sqrt(sum((x_i - y_i)^2)). and a point Y ( Y 1 , Y 2 , etc.) their Dot Product of Two Vectors The dot product of two vectors v = < v1 , v2 > and u = denoted v . I've been reading that the Euclidean distance between two points, and the dot product of the Dot Product, Lengths, and Distances of Complex Vectors For this problem, use the complex vectors. In a 3 dimensional plane, the distance between points (X 1 , … Sometimes we will want to calculate the distance between two vectors or points. The points are arranged as m n -dimensional row vectors in the matrix X. Y = cdist (XA, XB, 'minkowski', p) (Zhou et al. gives the Euclidean distance between vectors u and v. Details. Compute the euclidean distance between two vectors. Understand normalized squared euclidean distance?, Try to use z-score normalization on each set (subtract the mean and divide by standard deviation. Ask Question Asked 1 year, 1 month ago. Before using various cluster programs, the proper data treatment isâ Squared Euclidean distance is of central importance in | {
"domain": "radiojuventudnerja.es",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464513032269,
"lm_q1q2_score": 0.8547211955222094,
"lm_q2_score": 0.8757869803008764,
"openwebmath_perplexity": 866.690683796926,
"openwebmath_score": 0.9609122276306152,
"tags": null,
"url": "http://radiojuventudnerja.es/f3y8d/euclidean-distance-between-two-vectors-32de3b"
} |
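The standardized Euclidean distance defined above can be sketched directly from its formula. The data matrix `X` below is hypothetical; its per-feature variances play the role of the variance vector $V$:

```python
import numpy as np

# Hypothetical data: rows are observations, columns are features on very
# different scales -- exactly the situation where standardizing matters.
X = np.array([[1.0, 10.0, 100.0],
              [2.0, 12.0,  98.0],
              [3.0,  9.0, 103.0]])

V = X.var(axis=0)   # variance of each feature (the vector V above)
u, v = X[0], X[1]

# sqrt( sum( (u_i - v_i)^2 / V[i] ) )
d_std = np.sqrt(np.sum((u - v) ** 2 / V))
```

Dividing each squared difference by the feature's variance keeps a large-scale column (such as the third one here) from dominating the distance.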
programs, the proper data treatment isâ Squared Euclidean distance is of central importance in estimating parameters of statistical models, where it is used in the method of least squares, a standard approach to regression analysis. Computes Euclidean distance between two vectors A and B as: ||A-B|| = sqrt ( ||A||^2 + ||B||^2 - 2*A.B ) and vectorizes to rows of two matrices (or vectors). With this distance, Euclidean space becomes a metric space. Older literature refers to the metric as the Pythagorean metric. Something does not work as expected? Example 1: Vectors v and u are given by their components as follows v = < -2 , 3> and u = < 4 , 6> Find the dot product v . . The Euclidean distance between two points in either the plane or 3-dimensional space measures the length of a segment connecting the two In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" straight-line distance between two points in Euclidean space. The points A, B and C form an equilateral triangle. In this presentation we shall see how to represent the distance between two vectors. , x d ] and [ y 1 , y 2 , . API Euclidean Distance Between Two Matrices. The Euclidean distance d is defined as d(x,y)=ânâi=1(xiâyi)2. . Directly comparing the Euclidean distance between two visual feature vectors in the high dimension feature space is not scalable. linear-algebra vectors. How to calculate euclidean distance. The associated norm is called the Euclidean norm. Click here to toggle editing of individual sections of the page (if possible). Euclidean distancecalculates the distance between two real-valued vectors. The length of the vector a can be computed with the Euclidean norm. In ℝ, the Euclidean distance between two vectors and is always defined. ||v||2 = sqrt(a1² + a2² + a3²) scipy.spatial.distance.euclidean¶ scipy.spatial.distance.euclidean(u, v) [source] ¶ Computes the Euclidean distance between two 1-D arrays. Compute distance between each pair of the two Y = cdist | {
"domain": "radiojuventudnerja.es",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464513032269,
"lm_q1q2_score": 0.8547211955222094,
"lm_q2_score": 0.8757869803008764,
"openwebmath_perplexity": 866.690683796926,
"openwebmath_score": 0.9609122276306152,
"tags": null,
"url": "http://radiojuventudnerja.es/f3y8d/euclidean-distance-between-two-vectors-32de3b"
} |
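The vectorized identity quoted above, $\|A - B\|^2 = \|A\|^2 + \|B\|^2 - 2\,A \cdot B$, is how pairwise distances between the rows of two matrices are typically computed in one shot. A sketch (the function name is ours):

```python
import numpy as np

def pairwise_euclidean(A, B):
    """Distance between every row of A and every row of B using
    ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, fully vectorized."""
    sq = ((A ** 2).sum(axis=1)[:, None]
          + (B ** 2).sum(axis=1)[None, :]
          - 2.0 * A @ B.T)
    # Rounding error can produce tiny negatives; clip before the sqrt.
    return np.sqrt(np.maximum(sq, 0.0))

A = np.array([[0.0, 0.0], [1.0, 1.0]])
B = np.array([[3.0, 4.0]])
D = pairwise_euclidean(A, B)  # D[0, 0] = 5, D[1, 0] = sqrt(13)
```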
Euclidean distance between two 1-D arrays. Compute distance between each pair of the two collections: Y = cdist(XA, XB, 'euclidean') computes the distance between m points using Euclidean distance (2-norm) as the distance metric between the points. | {
"domain": "radiojuventudnerja.es",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464513032269,
"lm_q1q2_score": 0.8547211955222094,
"lm_q2_score": 0.8757869803008764,
"openwebmath_perplexity": 866.690683796926,
"openwebmath_score": 0.9609122276306152,
"tags": null,
"url": "http://radiojuventudnerja.es/f3y8d/euclidean-distance-between-two-vectors-32de3b"
} |
# Help needed to derive combinatorics formula.
I am having trouble understanding a combinatorics formula and would appreciate any ideas or hints leading to an explanation of how it might be derived. I came across the formula while reading a book on statistical physics; unfortunately, it was given without derivation.
Suppose a total number of objects $N$ and a set of $m$ boxes are given. To each of these boxes a number $n_i$ of the $N$ objects may be assigned, subject to the condition that
$$\sum_{i=1}^m n_i = N$$
According to my textbook the total number of ways to generate certain distribution $W\{n_i\}$ is given by
$$W\{n_i\} = \frac{N!}{\prod_i n_i!},$$
where $i$ runs over all boxes. I have trouble understanding how the formula for $W\{n_i\}$ is derived.
To make things more clear consider the case with four boxes, where
$$N=5\\ n_1=2\\ n_2=2\\ n_3=1\\ n_4=0\\$$
In this particular case one can generate the following distributions over the four boxes
$$|1,2|3,4|5|0|\\ |1,3|2,5|4|0|\\ |2,5|1,4|3|0|\\ ......$$
The textbook goes further and defines the total probability $W_{tot}\{n_i\}$ of finding the distribution $\{n_i\}$
$$W_{tot}\{n_i\} = N!\prod_i\frac{\omega_i^{n_i}}{n_i!}.$$
where $\omega_i$ is the probability of finding one particular object in a certain box. Neither of the above formulas is clear to me, so I would need your help to understand them.
Thank you for reading my question.
Best Regards,
Alex
• In short, you're dealing with a multinomial distribution. – Graham Kemp May 20 '14 at 1:42
I have trouble understanding how the formula for $W\{n_i\}$ is derived.
It's just the multinomial coefficient: ${n\choose k_1,k_2,\ldots, k_m} = \frac{n!}{k_1!k_2!\cdots k_m!}$
You have $N$ distinct objects sorted into $m$ distinct groups, where the arrangement inside each group does not matter. The number of objects within any group $i$ is $n_i$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464429079201,
"lm_q1q2_score": 0.8547211897517281,
"lm_q2_score": 0.8757869819218865,
"openwebmath_perplexity": 181.93141513568673,
"openwebmath_score": 0.7588387131690979,
"tags": null,
"url": "https://math.stackexchange.com/questions/801679/help-needed-to-derive-combinatorics-formula"
} |
There are $N!$ ways to rearrange $N$ distinct objects. There are $n_i!$ ways to rearrange all objects in box $i$. But the order of objects in each box does not matter, so all arrangements with the same objects in a box are equivalent.
Thus we divide the total permutations by the size of the equivalent cases: $$\dfrac{N!}{n_1!n_2!n_3!\cdots n_m!}=\dfrac{N!}{\prod\limits_{i=1}^m n_i!}$$
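This count is easy to confirm by brute force. The following Python snippet (my own illustration, not from the thread) enumerates every assignment of $N=5$ labeled balls to $m=4$ boxes, keeps those with occupancy $(2,2,1,0)$, and checks the result against $5!/(2!\,2!\,1!\,0!) = 30$:

```python
from itertools import product
from math import factorial

def count_distributions(N, occupancy):
    """Brute force: assign each of N labeled balls to a box and count
    the assignments whose box occupancies match the target vector."""
    m = len(occupancy)
    return sum(
        1
        for assignment in product(range(m), repeat=N)
        if all(assignment.count(i) == n for i, n in enumerate(occupancy))
    )

def multinomial(N, occupancy):
    """W{n_i} = N! / prod_i n_i!"""
    w = factorial(N)
    for n in occupancy:
        w //= factorial(n)
    return w

print(count_distributions(5, (2, 2, 1, 0)))  # 30
print(multinomial(5, (2, 2, 1, 0)))          # 30
```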
Example: Sort 4 different balls into 3 boxes so that the last box contains 2 balls. Let us count the ways.
There are $4!$ ways to arrange 4 different balls. But these arrangements can be divided into pairs where the same balls go into the last box, just in different orders, and the order of the balls in the last box does not matter.
So there are $\frac{4!}{1!1!2!}$ ways to sort the balls into these boxes.
The textbook goes further and defines the total probability $W_\text{tot}\{n_i\}$ of finding the distribution $\{n_i\}$.
And here we have a multinomial distribution. This is analogous to the binomial distribution:
$$X\sim\mathcal{B}(n,p) \iff \mathrm{P}(X=x)={n\choose x}p^x (1-p)^{n-x}$$
Which is the probability of $x$ successes in $n$ trials, with probability $p$ of an individual success.
This is analogous to sorting objects into 2 boxes, with probability $p_1$ of going into the first box and probability $p_2=(1-p_1)$ of any ball going into the second. So:
$$P(X_1=n_1, X_2=n_2 : n_1+n_2=N) = \frac{N!}{n_1!n_2!} p_1^{n_1}p_2^{n_2}$$
Now just extend this to the case of $m$ boxes.
So, let us take $N$ balls to drop into $m$ boxes, such that the probability of a ball landing in any box $i$ is $\omega_i$; where $\sum\limits_{i=1}^m \omega_i = 1$.
Now the probability that box $1$ contains $n_1$ balls, and box $2$ contains $n_2$ balls, and so on, and so on is: $$W_\text{tot}\{n_i\} \\ = P(X_1=n_1, X_2=n_2, \ldots, X_m=n_m) \\ = {N\choose n_1,n_2,\ldots, n_m} \omega_1^{n_1} \cdot \omega_2^{n_2} \cdots \omega_m^{n_m} \\ = \frac{N!}{\prod\limits_{i=1}^m n_i!} \prod\limits_{i=1}^m \omega_i^{n_i} \\ = N!\prod\limits_{i=1}^m \frac{\omega_i^{n_i}}{n_i!}$$
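As a numeric sanity check (a Python sketch; the probabilities $\omega = (0.1, 0.2, 0.3, 0.4)$ are made up for illustration), summing $W_\text{tot}\{n_i\}$ over every occupancy vector with $\sum_i n_i = N$ gives $1$, as it must for a probability distribution, since $(\omega_1+\cdots+\omega_m)^N = 1$ by the multinomial theorem:

```python
from itertools import product
from math import factorial, isclose

def w_tot(occupancy, omega):
    """W_tot{n_i} = N! * prod_i omega_i^{n_i} / n_i!"""
    N = sum(occupancy)
    p = float(factorial(N))
    for n_i, w_i in zip(occupancy, omega):
        p *= w_i ** n_i / factorial(n_i)
    return p

omega = (0.1, 0.2, 0.3, 0.4)   # hypothetical box probabilities, summing to 1
N = 5
# sum the pmf over every occupancy vector (n_1, ..., n_4) with sum N
total = sum(w_tot(c, omega)
            for c in product(range(N + 1), repeat=len(omega))
            if sum(c) == N)
print(isclose(total, 1.0))  # True
```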
• Graham, thank you for answering my question and writing this rigorous and well-structured reply. I was also thinking about the multinomial coefficient. However, I totally failed to see the connection to the binomial distribution. And at the end I have one last question: suppose the ordering of the objects in the boxes mattered; in this case the factorials in the denominators of $W\{n_i\}$ and $W_{tot}\{n_i\}$ should be removed. Is this correct? – Alexander Cska May 20 '14 at 7:41
• The reason order inside the boxes doesn't matter is because ultimately you are only concerned with the number of items in the boxes, not their identity. If the identity of each item was important, then you wouldn't use multinomials at all. There would be just $1$ way to arrange each unique sequence; and the probability of that specific arrangement would be: $\prod\limits_{i=1}^{m} \omega_i^{n_i}$ – Graham Kemp May 20 '14 at 8:13
# How to define sparseness of a vector?
I would like to construct a measure to calculate the sparseness of a vector of length $k$.
Let $X = [x_i]$ be a vector of length $k$ such that there exists an $x_i \neq 0$. Assume $x_i \geq 0$ for all $i$.
One such measure I came across is defined as $$\frac{\sqrt{k} - \frac{\|X\|_1}{{\|X\|_2}}} {\sqrt{k} -1}\;,$$ where $\|X\|_1$ is $L_1$ norm and $\|X\|_2$ is $L_2$ norm.
Here, $\operatorname{Sparseness}(X) = 0$ whenever the vector is dense (all components are equal and non-zero) and $\operatorname{Sparseness}(X) = 1$ whenever the vector is sparse (only one component is non zero).
This only explains when the values $0$ and $1$ are achieved by the above measure. Is there any other function that defines the sparseness of a vector?
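For concreteness, the measure above is straightforward to compute; here is a Python sketch (my own illustration) showing the two extreme cases:

```python
import math

def sparseness(x):
    """(sqrt(k) - ||x||_1 / ||x||_2) / (sqrt(k) - 1) for a non-zero vector x."""
    k = len(x)
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    return (math.sqrt(k) - l1 / l2) / (math.sqrt(k) - 1)

print(sparseness([3, 3, 3, 3]))   # 0.0  (dense: all components equal)
print(sparseness([0, 7, 0, 0]))   # 1.0  (only one non-zero component)
```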
-
Isn't your sparseness function 1 for a sparse vector and 0 for a dense vector? – Christian Rau Mar 8 '12 at 10:51
It doesn't matter. We can achieve that by defining the measure as $1-\operatorname{Sparseness}(X)$. – Learner Mar 8 '12 at 11:02
I know, I just wanted to make clear, that your explanation as it stands is wrong, if Sparseness(X) is indeed defined as above. – Christian Rau Mar 8 '12 at 11:20
Yeah. I should have used different name. – Learner Mar 8 '12 at 11:24
Sorry if my comments are a bit confusing. It isn't the name sparseness that bothers me, it's the hard fact, that your above function (the one with the $\sqrt{k}$) is 1 for a sparse vector and 0 for a dense vector (no matter how you name it). – Christian Rau Mar 8 '12 at 11:28
You could of course generalize your current measure
\begin{align} S(X) = \frac{\frac{k^{(1/m)}}{k^{(1/n)}} -\frac{\|X\|_m}{\|X\|_n} } {\frac{k^{(1/m)}}{k^{(1/n)}}-1} \end{align}
while preserving your properties you specified.
An interesting special case could be $m = 1, n \to \infty$, in which case the expression simplifies to
$$S(X) = \frac{k-\frac{\|X\|_1}{\|X\|_c}}{k-1}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9740426405416754,
"lm_q1q2_score": 0.8546997977300903,
"lm_q2_score": 0.8774767778695834,
"openwebmath_perplexity": 349.0096419936737,
"openwebmath_score": 0.951259434223175,
"tags": null,
"url": "http://math.stackexchange.com/questions/117860/how-to-define-sparseness-of-a-vector"
} |
where $c = \infty$ (for some reason, MathJax refused to render when I inserted $\infty$ directly in the fraction)
-
Good that you generalized it. I never thought along those lines. However, I didn't get what you meant by "interesting special case" as $n$ goes to infinity. Can you please elaborate on that? – Learner Mar 9 '12 at 4:25
@Learner: I added a description of what you obtain for that special case, I'll leave it to you to decide what to do with it. – Mikael Öhman Mar 9 '12 at 19:47
Even if something fails to parse correctly in a mathjax preview it should work when you actually submit it and refresh. – anon Mar 12 '12 at 11:22
There is a definition of sparsity, which is used (amongst others) in the compressed sensing literature, see e.g. here.
A vector $x\in \mathbb{C}^k$ is called $s$-sparse, if $|| x ||_0 = |\text{supp}(x)| \leq s$, that is, it has at most $s$ non-zero entries. Denote by $\Sigma_s$ the set of all such vectors. Then, the $s$-term approximation error of a vector $x\in \mathbb{C}^k$ is defined as $$\sigma_s(x)_p = \min_{y\in\Sigma_s} ||x-y||_p.$$
Now this quantity equals $0$, if your vector $x$ is $s$-sparse, and will be greater than $0$ otherwise. Note that you now have two parameters $s$ and $p$ to tune this "measure". Clearly, you get your definition of sparsity if you set $s=1$.
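This quantity is also easy to compute directly (a Python sketch, using the standard fact that for $\ell_p$ norms the best $s$-term approximation keeps the $s$ largest-magnitude entries, so $\sigma_s(x)_p$ is the $\ell_p$ norm of the remaining tail):

```python
def s_term_error(x, s, p):
    """sigma_s(x)_p: distance from x to the nearest s-sparse vector.
    The minimizer keeps the s largest-magnitude entries, so the error
    is the l_p norm of the remaining (smallest-magnitude) entries."""
    tail = sorted(abs(v) for v in x)[:max(len(x) - s, 0)]
    return sum(v ** p for v in tail) ** (1 / p)

x = [5.0, 0.1, -3.0, 0.2]
print(s_term_error(x, 2, 2))            # sqrt(0.1^2 + 0.2^2) ≈ 0.2236
print(s_term_error([4.0, 0.0, 0.0], 1, 2))  # 0.0: the vector is already 1-sparse
```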
-
Disclaimer: This post considers the case in which you do not mind some computational effort to get nice sparseness value. For something new, please skip to part 2.
Part 1
I agree with Mikael that this kind of generalization is nice; what's more, with Mikael's formula it is intuitive where it came from: the most basic notion of vector sparseness would be $$\frac{\text{number of indices }k\text{ such that }X_k = 0}{\text{total number of indices}}.$$
However, by this definition $\langle 0, 0, \ldots, 0\rangle$ is sparse, but vector $\langle c, c, \ldots, c \rangle$ is not. Still, it is easy to fix it: $$\frac{\text{number of indices on which }X_k - c = 0}{\text{total number of indices}}\,,$$ where $c$ is some average of $X$, e.g. $c = \|X\|_\infty$. The problem with this measure is that it is not easy to count the number of indices. To alleviate for that, we could approximate the number of indices by $\frac{\|X\|_1}{\|X\|_\infty}$. Naturally we need some normalization, and by that we arrive at Mikael's special case: $$\frac{k-\frac{\|X\|_1}{\|X\|_\infty}}{k-1}.$$
But the average we took as an example, $c = \|X\|_\infty$, isn't the only one. Similarly we could approximate the number of indices in different fashions: $\frac{\|X\|_m}{\|X\|_n}$ would do for any $m < n$, and the normalization is just $$\frac{R_\max-\frac{\|X\|_m}{\|X\|_n}}{R_\max-R_\min},\qquad R_\max = \frac{\|C\|_m}{\|C\|_n},\quad R_\min = \frac{\|D\|_m}{\|D\|_n},$$ where $C = \langle c, c, \ldots, c \rangle$ and $D = \langle c, 0, 0, \ldots, 0 \rangle$ for any $c \neq 0$.
Part 2
Still, this measure is rather weird, because intuitively it depends more on the sizes of the values than on how many different numbers there are. I am not sure that this is a property we would want to have. There is a different measure that can take that into account, i.e. a measure based on entropy. Interpreting $X$ as the samples, one can calculate $$-\sum_i P(X = i) \log_k P(X = i).$$
To soften this a bit, just pick any distribution you want (ideally one specific to your application), e.g. $F_\mu = N(\mu, \sigma^{2})$, set $$F = \frac{1}{k}\sum_i F_{X_i},$$ and then calculate the differential entropy ($f$ is the density function of $F$): $$-\int_\mathbb{R} f(x) \ln f(x) \, dx$$ or, even better, the relative entropy if you have some reference measure (e.g. the very same $F_\mu$, adjusted a bit, might do the trick). Of course, all this has to be scaled to $[0,1]$, which makes the formulas even nastier; however, in my opinion, it captures the notion of sparseness pretty well. Finally, you can combine the two approaches in infinitely many ways, to get even more sparseness models!
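One concrete reading of the discrete entropy formula in Part 2 (a Python sketch; taking $p_i = x_i/\|X\|_1$ as the distribution is my assumption, since the post leaves the choice open) already gives a $[0,1]$-scaled measure when the entropy is taken in base $k$:

```python
import math

def entropy_sparseness(x):
    """1 minus the base-k Shannon entropy of p_i = |x_i| / ||x||_1.
    (One possible reading of the entropy idea; p_i could be chosen
    differently per application.)  Scores 1 for a one-hot vector and
    0 for a uniform vector, matching the earlier convention."""
    k = len(x)
    total = sum(abs(v) for v in x)
    ps = [abs(v) / total for v in x]
    h = -sum(p * math.log(p, k) for p in ps if p > 0)  # base-k entropy in [0, 1]
    return 1 - h

print(entropy_sparseness([1, 1, 1, 1]))  # ≈ 0.0  (uniform)
print(entropy_sparseness([0, 9, 0, 0]))  # 1.0   (one-hot)
```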
-
# Geometric interpretations of matrix inverses
Let $A$ be an invertible $n \times n$ matrix. Suppose we interpret each row of $A$ as a point in $\mathbb{R}^n$; then these $n$ points define a unique hyperplane in $\mathbb{R}^n$ that passes through each point (this hyperplane does not intersect the origin).
Under this geometric interpretation, $A^{-1}$ has an interesting property: the normal vector to the hyperplane is given by the row sums of $A^{-1}$ (i.e. $A^{-1} \cdot 1$, where $1 = \langle 1, \dots, 1 \rangle^T$).
Within this geometric interpretation of $A$, what other interesting properties does $A^{-1}$ have? Do the individual entries of $A^{-1}$ have geometric meaning? How about the column sums?
Here is a visual answer for the $2\times 2$ case.
• Plot the row (or column) vectors, $a_1, a_2$ of $A$ in $\mathcal{R}^2$ to visualize $A$. The area of the parallelogram they form is of course $\det(A)$.
• In the same space, plot the row (or column) vectors $a^1, a^2$ of $A^{-1}$, and the area of their parallelogram is then $\det(A^{-1}) = 1/ \det(A)$.
• The relationship between the two illustrates various properties of the matrix inverse.
An example is shown in the picture below, which comes from the matrix (in R notation)
A <- matrix(c(2, 1,
1, 2), nrow=2, byrow=TRUE)
In the R package matlib I recently added a vignette illustrating this with the following diagram for this matrix.
Thus, we can see:
• The shape of $A^{-1}$ is a $90^\circ$ rotation of the shape of $A$.
• $A^{-1}$ is small in the directions where $A$ is large
• The vector $a^2$ is at right angles to $a_1$ and $a^1$ is at right angles to $a_2$
• If we multiplied $A$ by a constant $k$ to make its determinant larger (by a factor of $k^2$), the inverse would have to be divided by the same factor to preserve $A A^{-1} = I$.
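These observations are easy to verify numerically. A small Python check (mirroring the R example; note, per the comments further down, that the right-angle property leans on this particular $A$ being symmetric):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    """Inverse of a 2x2 matrix by the adjugate formula."""
    d = det2(m)
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[2, 1], [1, 2]]
Ainv = inv2(A)

print(det2(A))             # 3
print(det2(Ainv))          # ≈ 0.333..., i.e. 1/det(A)
# a^2 (row 2 of A^-1) is orthogonal to a_1 (row 1 of A); since this A is
# symmetric, rows equal columns, so A^-1 A = I forces the zero dot product.
print(dot(Ainv[1], A[0]))  # 0.0
```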
I wondered whether these properties depend on symmetry of $A$, so here is another example, for the matrix A <- matrix(c(2, 1, 1, 1), nrow=2), where $\det(A)=1$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9817357232512719,
"lm_q1q2_score": 0.8546964472018097,
"lm_q2_score": 0.8705972768020107,
"openwebmath_perplexity": 206.99232533675527,
"openwebmath_score": 0.8924434781074524,
"tags": null,
"url": "https://math.stackexchange.com/questions/295250/geometric-interpretations-of-matrix-inverses"
} |
It would be interesting to extend this to other properties and to the $3 \times 3$ case, which I leave to others.
• Not all your observations are true in the general case. The 90° thing for example. You just got lucky with your numbers. For example $A = [[-1, 0], [-3, 2]]$, $A^{-1} = [[-1, -0. ], [-1.5, 0.5]]$ $A_1 \cdot A^{-1}_2 = -1.5 \neq 0$. Note: internal brackets are rows. – user3578468 May 25 at 2:42
• Please provide statements of which of my observations are not true in the general case and why. – user101089 May 25 at 22:13
• But I did give you an example. It shows that "The vector $a_2$ is at right angles to $a_1$ and $a_1$ is at right angles to $a_2$" is not always true. – user3578468 May 27 at 10:50
It turns out that an answer for the $3 \times 3$ case has similar properties and is also illuminating.
• Start with a unit cube, representing the identity matrix. Show its transformation by a matrix $A$ as the corresponding transformation of the cube.
• This also illustrates the determinant, det(A), as the volume of the transformed cube, and the relationship between $A$ and $A^{-1}$.
In R, using the matlib and rgl package, the unit cube is specified as
library(rgl)
library(matlib)
# cube, with each face colored differently
colors <- rep(2:7, each=4)
c3d <- cube3d()
# make it a unit cube at the origin
c3d <- scale3d(translate3d(c3d, 1, 1, 1),
.5, .5, .5)
A $3 \times 3$ matrix $A$ with $\det(A)=2$ is
A <- matrix(c( 1, 0, 1,
0, 2, 0,
1, 0, 2), nrow=3, ncol=3)
Extending the 2D idea from the answer above of drawing the images of $A$ and $A^{-1}$ together to 3D, we get the following, best viewed as an animated graphic. The faces of the parallelepiped representing $A^{-1}$ are colored identically to those of $A$, so you can see the mapping from one to the other.
• Great visualization! – Vincent Oct 17 '18 at 13:32
# Finding inverse of polynomial in a field
I'm having trouble with the procedure to find an inverse of a polynomial in a field. For example, take:
In $\frac{\mathbb{Z}_3[x]}{m(x)}$, where $m(x) = x^3 + 2x +1$, find the inverse of $x^2 + 1$.
My understanding is that one needs to use the (Extended?) Euclidean Algorithm and Bezout's Identity. Here's what I currently have:
Proceeding with Euclid's algorithm:
$$x^3 + 2x + 1 =(x^2 + 1)(x) + (x + 1) \\\\ x^2 + 1 = (x + 1)(2 + x) + 2$$
We stop here because $2$ is invertible in $\mathbb{Z}_3$. We rewrite the last step as a congruence:
$$(x+1)(2+x) \equiv 2 \pmod{x^2+1}$$
I don't understand the high level concepts sufficiently well and I'm lost from here. Thoughts?
Wikipedia has a page on this with a decent explanation, but it's still not clear in my mind.
Note that this question has almost the same title, but it's a level of abstraction higher. It doesn't help me, as I don't understand the basic concepts.
Thanks.
-
If you can write $2$ as $(x^3+2x+1)f(x)+(x^2+1)g(x)$, then $2g(x)$ is an inverse to $x^2+1$ in $\mathbb Z_3[x]/(x^3+2x+1)$. Does it help? – Pierre-Yves Gaillard Mar 25 '12 at 16:20
Write $f := x^3+2x+1$ and $g := x^2+1$. We want to find the inverse of $g$ in the field $\mathbb F_3[x]/(f)$ (I prefer to write $\mathbb F_3$ instead of $\mathbb Z_3$ to avoid confusion with the 3-adic integers), i.e. we are looking for a polynomial $h$ such that $gh \equiv 1 \pmod f$, or equivalently $gh+kf=1$ for some $k\in \mathbb F_3[x]$. The Euclidean algorithm can be used to find $h$ and $k$: $$f = x\cdot g+(x+1)\\ g = (x+2)\cdot(x+1) + 2\\ (x+1) = (2x)\cdot2 + 1$$ Working backwards, we find $$1 = (x+1)-(2x)\cdot 2\\ = (x+1)-(2x)(g-(x+2)(x+1))\\ = (2x^2+x+1)(x+1)-(2x)g\\ = (2x^2+x+1)(f-xg)-(2x)g\\ = (2x^2+x+1)f-(2x^3+x^2+3x)g\\ = (2x^2+x+1)f-(2x^3+x^2)g\\ = (2x^2+x+1)f+(x^3+2x^2)g.$$ So the inverse of $g$ modulo $f$ is $h = x^3+2x^2 \equiv 2x^2+x+2 \pmod f$.
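The whole computation can be automated. Below is a Python sketch of the extended Euclidean algorithm over $\mathbb F_3[x]$ (my own illustration; polynomials are stored as coefficient lists, lowest degree first), which recovers $h = 2x^2+x+2$:

```python
P = 3  # work over F_3; a polynomial is a list of coefficients, lowest degree first

def trim(a):
    """Drop trailing zero coefficients."""
    while a and a[-1] % P == 0:
        a.pop()
    return a

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return trim(out)

def polysub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return trim([(x - y) % P for x, y in zip(a, b)])

def polydivmod(a, b):
    """Quotient and remainder of a by b over F_P (b non-zero, trimmed)."""
    q, r = [0] * max(len(a) - len(b) + 1, 1), trim([x % P for x in a])
    inv_lead = pow(b[-1], P - 2, P)          # Fermat inverse of leading coeff
    while len(r) >= len(b):
        shift, c = len(r) - len(b), (r[-1] * inv_lead) % P
        q[shift] = c
        for i, bi in enumerate(b):
            r[shift + i] = (r[shift + i] - c * bi) % P
        r = trim(r)
    return trim(q), r

def poly_inverse(g, f):
    """Inverse of g in F_P[x]/(f) via the extended Euclidean algorithm,
    tracking only the t-sequence of Bezout cofactors."""
    r0, r1, t0, t1 = trim(f[:]), trim(g[:]), [], [1]
    while r1:
        quot, rem = polydivmod(r0, r1)
        r0, r1 = r1, rem
        t0, t1 = t1, polysub(t0, polymul(quot, t1))
    c = pow(r0[0], P - 2, P)                 # scale so the constant gcd becomes 1
    return trim([(c * x) % P for x in t0])

f = [1, 2, 0, 1]   # x^3 + 2x + 1
g = [1, 0, 1]      # x^2 + 1
h = poly_inverse(g, f)
print(h)           # [2, 1, 2], i.e. 2 + x + 2x^2
q, r = polydivmod(polymul(g, h), f)
print(r)           # [1]: confirms g*h ≡ 1 (mod f)
```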
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9817357179075082,
"lm_q1q2_score": 0.8546964458456819,
"lm_q2_score": 0.8705972801594706,
"openwebmath_perplexity": 372.49844595970956,
"openwebmath_score": 0.9932308197021484,
"tags": null,
"url": "http://math.stackexchange.com/questions/124300/finding-inverse-of-polynomial-in-a-field/124307"
} |
-
Great, I understand. Thank you. When using Maple, however, I find a different result from the Extended Euclidean Algorithm ($(x^3+2x+1)f + (2x^2+2+x)g$). Therefore, I find $2x^2+2+x$ to be the inverse, which is different from what you find. Is this normal? (Integers only have one inverse; is this different for polynomials?) – David Chouinard Mar 25 '12 at 16:55
You are right, there is only one inverse. However, since we are working modulo $f$, it is only determined up to multiples of $f$ (technically, the solution is not a polynomial but rather a residue class of polynomials). Note that $(x^3+2x^2) = (2x^2+x+2)+f$, so the solutions are in fact equivalent. (I just edited my answer to include also the reduced solution $2x^2+x+2$.) – marlu Mar 25 '12 at 17:03
Pretty obvious, don't know why I missed that. Thanks. (@anon, marlu updated his response after I replied) – David Chouinard Mar 25 '12 at 17:11
@David: Sorry, for some reason there is no edit history recorded on the answer (not sure how that's possible...), so I didn't know it was edited. – anon Mar 25 '12 at 17:19
@anon Yes, that's strange. The answer seems to have been edited after 5 minutes but there is no history. I've flagged for mods, in case there may be a bug. – Bill Dubuque Mar 25 '12 at 17:41
The goal of the Extended Euclidean algorithm is to compute polynomials $\rm\:A,B\:$ such that $\rm\: A\: (x^2+1)\: +\: B\:(x^3+2x+1) = 1.\:$ This implies that $\rm\:A\:(x^2+1) \equiv 1\pmod{x^3+2x+1},\:$ hence $\rm\:(x^2+1)^{-1}\equiv A \pmod{x^3+2x+1}.\:$
Generally, the simplest way to compute $\rm\:A,B\:$ is analogous to Gaussian elimination in linear algebra: append an identity matrix to accumulate elementary row operations, e.g. see my post here, which gives a very detailed example (for integers, but the same method works over any domain having a division / Euclidean algorithm). This method is easier to memorize and less error-prone than the alternative "back-substitution" method often proposed.
This is a special-case of Hermite/Smith row/column reduction of matrices to triangular/diagonal normal form, using the division/Euclidean algorithm to reduce entries modulo pivots. Though one can understand this knowing only the analogous linear algebra elimination techniques, it will become clearer when one studies modules - which, informally, generalize vector spaces by allowing coefficients from rings vs. fields. In particular, these results are studied when one studies normal forms for finitely-generated modules over a PID, e.g. when one studies linear systems of equations with coefficients in the non-field! polynomial ring $\rm F[x],$ for $\rm F$ a field, as above.
-
Yes, this makes a lot of sense. In this case, I found $A = 2x^2+2+x$ and $B = (x^3+2x+1)$. Therefore, $2x^2+2+x$ is the inverse. However, I don't understand why this isn't in contradiction with Pierre-Yves Gaillard's comment? – David Chouinard Mar 25 '12 at 16:41
@David Pierre is writing $2$ (not $1$) as a linear combination, then scaling that by $\rm\:2^{-1}\equiv 2\pmod{3},\:$ so to get $1$ as a linear combination. – Bill Dubuque Mar 25 '12 at 16:50
Understood, thanks. – David Chouinard Mar 25 '12 at 17:00
The same algorithm used to solve the linear diophantine equation can be used here. $$\begin{array}{c} &&x&x-1&(x+1)/2\\ \hline 1&0&1&1-x&(x^2+1)/2\\ 0&1&-x&x^2-x+1&-(x^3+2x+1)/2\\ x^3+2x+1&x^2+1&x+1&2&0 \end{array}$$ Thus, $$(1-x)(x^3+2x+1)+(x^2-x+1)(x^2+1)=2$$ Thus, the inverse of $x^2+1$ is $\tfrac12(x^2-x+1)$ mod $x^3+2x+1$.
-
That's the same as the method I mentioned - which is more conceptually viewed from a linear algebra (module) perspective, where it is a special case of Hermite/Smith row/column reduction of matrices to triangular/diagonal normal form, using the division/Euclidean algorithm to reduce entries mod pivots. – Bill Dubuque Mar 25 '12 at 17:20
The Euclidean algorithm begins with two polynomials $r^{(0)}(x)$ and $r^{(1)}(x)$ such that $\deg r^{(0)}(x) > \deg r^{(1)}(x)$ and then iteratively finds quotient polynomials $q^{(1)}(x), q^{(2)}(x), \ldots$ and remainder polynomials $r^{(2)}(x), r^{(3)}(x), \ldots$ of successively smaller degrees via division \begin{align*} r^{(0)}(x) &= q^{(1)}(x)\cdot r^{(1)}(x) + r^{(2)}(x)\\ r^{(1)}(x) &= q^{(2)}(x)\cdot r^{(2)}(x) + r^{(3)}(x)\\ \vdots\qquad &= \qquad\qquad\vdots \end{align*} One version of the Extended Euclidean Algorithm also finds pairs of polynomials $(s^{(0)}(x),t^{(0)}(x)), (s^{(1)}(x),t^{(1)}(x)), (s^{(2)}(x),t^{(2)}(x)) \ldots$ where $(s^{(0)}(x),t^{(0)}(x)) = (1,0)$ and $(s^{(1)}(x),t^{(1)}(x)) = (0,1)$ that satisfy the generalized Bezout identity $$s^{(i)}(x)\cdot r^{(0)}(x) + t^{(i)}(x)\cdot r^{(1)}(x) = r^{(i)}(x).$$
These polynomials satisfy the "same" recursion as the remainder polynomials, viz., \begin{align*} r^{(i+1)}(x) &= r^{(i-1)}(x) - q^{(i)}(x)\cdot r^{(i)}(x)\\ s^{(i+1)}(x) &= s^{(i-1)}(x) - q^{(i)}(x)\cdot s^{(i)}(x)\\ t^{(i+1)}(x) &= t^{(i-1)}(x) - q^{(i)}(x)\cdot t^{(i)}(x)\\ \end{align*}
This form of the extended Euclidean algorithm is useful in practical applications, since only the two most recent polynomials in each of the $r$, $s$, and $t$ sequences need to be remembered, with each new $(i+1)$-th polynomial replacing the $(i-1)$-th polynomial, which is no longer needed.
In your instance, you have $r^{(0)}(x) = x^3 + 2x+1$ and $r^{(1)}(x) = x^2 + 1$. You have already computed the quotient and remainder sequence ending with $r^{(3)}(x) = 2$. Now compute $t^{(2)}(x)$ and $t^{(3)}(x)$ iteratively using the sequence of quotient polynomials and write \begin{align*} s^{(3)}(x)\cdot (x^3 + 2x + 1) + t^{(3)}(x)\cdot(x^2 + 1) &= 2\\ -s^{(3)}(x)\cdot (x^3 + 2x + 1) - t^{(3)}(x)\cdot(x^2 + 1) &= 1\\ (-t^{(3)}(x))\cdot (x^2 + 1) &= 1 ~ \mod (x^3 + 2x + 1) \end{align*} and deduce that the multiplicative inverse of $x^2 + 1$ in $\mathbb Z_3[x]/(x^3 + 2x + 1)$ is $-t^{(3)}(x)$. Note that the $s^{(i)}(x)$ sequence does not need to be computed at all if all that one needs is the inverse.
Note the subtle difference! As f is a right inverse to g, it is a full inverse to g. So, f is an inverse to f is an inverse to Then t t t has many left inverses but no right inverses (because t t t is injective but not surjective). eralization of the inverse of a matrix. >> save hide report. If the function is one-to-one, there will be a unique inverse. In general, you can skip parentheses, but be very careful: e^3x is e^3x, and e^(3x) is e^(3x). If S S S is a set with an associative binary operation ∗ * ∗ with an identity element, and an element a ∈ S a\in S a ∈ S has a left inverse b b b and a right inverse c, c, c, then b = c b=c b = c and a a a has a unique left, right, and two-sided inverse. Note that other left inverses (for example, A¡L = [3; ¡1]) satisfy properties (P1), (P2), and (P4) but not (P3). When working in the real numbers, the equation ax=b could be solved for x by dividing bothsides of the equation by a to get x=b/a, as long as a wasn't zero. Active 2 years, 7 months ago. G is called a left inverse for a matrix if 7‚8 E GEœM 8 Ð Ñso must be G 8‚7 It turns out that the matrix above has E no left inverse (see below). Let e e e be the identity. example. << /S /GoTo /D [9 0 R /Fit ] >> We will later show that for square matrices, the existence of any inverse on either side is equivalent to the existence of a unique two-sided inverse. Subtraction was defined in terms of addition and division was defined in terms ofmultiplication. For any elements a, b, c, x ∈ G we have: 1. 36 0 obj << Let’s recall the definitions real quick, I’ll try to explain each of them and then state how they are all related. %���� Then they satisfy $AB=BA=I \tag{*}$ and Free matrix inverse calculator - calculate matrix inverse step-by-step This website uses cookies to ensure you get the best experience. Remark When A is invertible, we denote its inverse … Theorem. Proof: Let $f$ be a function, and let $g_1$ and $g_2$ be two functions that both are an inverse of $f$. 
In a monoid, | {
"domain": "rayboy.org",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9817357237856482,
"lm_q1q2_score": 0.8546964394266905,
"lm_q2_score": 0.8705972684083609,
"openwebmath_perplexity": 562.7457207946665,
"openwebmath_score": 0.8438782691955566,
"tags": null,
"url": "https://rayboy.org/3sqln99/unique-left-inverse-5f932b"
} |
a function, and let $g_1$ and $g_2$ be two functions that both are an inverse of $f$. In a monoid, if an element has a right inverse… Show Instructions. ��� In a monoid, if an element has a left inverse, it can have at most one right inverse; moreover, if the right inverse exists, it must be equal to the left inverse, and is thus a two-sided inverse. 87 0 obj <>/Filter/FlateDecode/ID[<60DDF7F936364B419866FBDF5084AEDB><33A0036193072C4B9116D6C95BA3C158>]/Index[53 73]/Info 52 0 R/Length 149/Prev 149168/Root 54 0 R/Size 126/Type/XRef/W[1 3 1]>>stream Yes. Indeed, the existence of a unique identity and a unique inverse, both left and right, is a consequence of the gyrogroup axioms, as the following theorem shows, along with other immediate, important results in gyrogroup theory. Proof In the proof that a matrix is invertible if and only if it is full-rank, we have shown that the inverse can be constructed column by column, by finding the vectors that solve that is, by writing the vectors of the canonical basis as linear combinations of the columns of . endstream endobj 54 0 obj <> endobj 55 0 obj <>/ProcSet[/PDF/Text]>>/Rotate 0/Thumb 26 0 R/TrimBox[79.51181 97.228348 518.881897 763.370056]/Type/Page>> endobj 56 0 obj <>stream Matrix inverses Recall... De nition A square matrix A is invertible (or nonsingular) if 9matrix B such that AB = I and BA = I. If $$MA = I_n$$, then $$M$$ is called a left inverse of $$A$$. Some easy corollaries: 1. Stack Exchange Network. In matrix algebra, the inverse of a matrix is defined only for square matrices, and if a matrix is singular, it does not have an inverse.. An associative * on a set G with unique right identity and left inverse proof enough for it to be a group ?Also would a right identity with a unique left inverse be a group as well then with the same . given $$n\times n$$ matrix $$A$$ and $$B$$, we do not necessarily have $$AB = BA$$. Two-sided inverse is unique if it exists in monoid 2. 
Then 1 (AB) ij = A i B j, 2 (AB) i = A | {
"domain": "rayboy.org",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9817357237856482,
"lm_q1q2_score": 0.8546964394266905,
"lm_q2_score": 0.8705972684083609,
"openwebmath_perplexity": 562.7457207946665,
"openwebmath_score": 0.8438782691955566,
"tags": null,
"url": "https://rayboy.org/3sqln99/unique-left-inverse-5f932b"
} |
= BA$$. A two-sided inverse is unique if it exists in a monoid. Then (1) $(AB)_{ij} = A_i B_j$, (2) $(AB)_i = A_i B$, (3) $(AB)_j = A B_j$, and (4) $(ABC)_{ij} = A_i B C_j$. The reason we have to define the left inverse and the right inverse separately is that matrix multiplication is not necessarily commutative. Still another characterization of $A^+$ is given in the following theorem, whose proof can be found on p. 19 in Albert, A., Regression and the Moore-Penrose Pseudoinverse, Academic Press, New York, 1972. I know that left inverses are unique if the function is surjective, but I don't know whether left inverses are always unique for non-surjective functions too. Generalized inverse, Michael Friendly, 2020-10-29. Actually, trying to prove uniqueness of left inverses leads to dramatic failure! If $$MA = I_n$$, then $$M$$ is called a left inverse of $$A$$. Let $(G, \oplus)$ be a gyrogroup; for any elements $a, b, c, x \in G$ we have: 1. … (We say $B$ is an inverse of $A$.) One consequence of (1.2) is that $AGAG = AG$ and $GAGA = GA$. Right inverse: if $A$ has full row rank, then $r = m$; the nullspace of $A^T$ contains only the zero vector, and the rows of $A$ are independent. For any $m \times n$ matrix $A$, we have $A_i = e_i^T A$ and $A_j = A e_j$, where $A_i$ denotes the $i$-th row and $A_j$ the $j$-th column of $A$. P. Sam Johnson (NITK), Existence of Left/Right/Two-sided Inverses, September 19, 2014. 1. $f$ is injective if and only if it has a left inverse; 2. $f$ is surjective if and only if it has a right inverse; 3. $f$ is bijective if and only if it has a two-sided inverse; 4. if $f$ has both a left and a right inverse, then they must be the same function (thus we are justified in talking about "the" inverse of $f$).
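As an illustrative sketch of the definition $MA = I_n$ above (the matrix and numbers here are made up for the example), a tall matrix with full column rank has a left inverse, and one can be computed as $M = (A^T A)^{-1} A^T$:

```python
import numpy as np

# Made-up 3x2 matrix with full column rank: a left inverse exists.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# One left inverse: M = (A^T A)^{-1} A^T (here it equals the Moore-Penrose pseudoinverse).
M = np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(M @ A, np.eye(2))       # MA = I_2, so M is a left inverse of A
assert not np.allclose(A @ M, np.eye(3))   # A has no right inverse: AM is only a projection
```

This also illustrates the remark about transposes: $A^T$ has a nonzero nullspace, so $AM$ cannot be the identity.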
Proof. Suppose that there are two inverse matrices $B$ and $C$ of the matrix $A$. Hence it is bijective. It is an interesting exercise that if $a$ is a left unit that is not a right unit … Thus the unique left inverse of $A$ equals the unique right inverse of $A$ (ECE 269, University of California, San Diego). If $E$ has a right inverse, it is not necessarily unique. numpy.unique(ar, return_index=False, return_inverse=False, return_counts=False, axis=None) finds the unique elements of an array. Proposition: if the inverse of a matrix exists, then it is unique. Theorem A.63: a generalized inverse always exists, although it is not unique in general. So to prove uniqueness, suppose that you have two inverse matrices $B$ and $C$ and show that in fact $B = C$. Left-cancellative: Loop (algebra), an algebraic structure with an identity element in which every element has a unique left and right inverse; Retraction (category theory), a left inverse of some morphism. However, we will now see that when a function has both a left inverse and a right inverse, all inverses for the function must agree (Lemma 1.11). Matrix multiplication notation: let $f \colon X \longrightarrow Y$ be a function.
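The numpy.unique signature quoted above can be exercised directly. A small example with made-up data, showing how return_inverse lets you reconstruct the original array:

```python
import numpy as np

ar = np.array([3, 1, 2, 3, 1])
uniq, index, inverse, counts = np.unique(
    ar, return_index=True, return_inverse=True, return_counts=True)

print(uniq)    # the sorted unique elements
print(counts)  # how many times each unique element occurs

# return_inverse gives, for each entry of ar, its position in uniq,
# so indexing uniq by it rebuilds the original array.
assert np.array_equal(uniq[inverse], ar)
```

These are the three optional outputs, in addition to the unique elements, that the text mentions later.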
The left inverse tells you how to exactly retrace your steps, if you managed to get to a destination – "Some places might be unreachable, but I can always put you on the return flight." The right inverse tells you where you might have come from, for any possible destination – "All places are reachable, but I can't put you on the …" If $A$ is invertible, then its inverse is unique. If $BA = I$, then $B$ is a left inverse of $A$ and $A$ is a right inverse of $B$. If a matrix has a unique left inverse, does it necessarily have a unique right inverse (which is the same inverse)? The reason we have to define the left inverse and the right inverse separately is that matrix multiplication is not necessarily commutative. From this example we see that even when they exist, one-sided inverses need not be unique. See the lecture notes for the relevant definitions. The following theorem says that if $E$ has both a right and a left inverse, then $E$ must be square. Recall also that this gives a unique inverse. g = finverse(f, var) … finverse does not issue a warning when the inverse is not unique. In general, a square matrix $P$ that satisfies $P^2 = P$ is called a projection matrix. A matrix $A^- \colon n \times m$ is said to be a generalized inverse of $A$ if $AA^-A = A$ holds (see Rao (1973a), p. 24). Proof (⇒): if $f$ is bijective, it has a left inverse (since it is injective) and a right inverse (since it is surjective), which must be one and the same by the previous factoid. Proof (⇐): if $f$ has a two-sided inverse, it is both injective (since there is a left inverse) and surjective (since there is a right inverse).
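A minimal numerical sketch of the claim that one-sided inverses need not be unique (the matrices are invented for the example): two different matrices can both be left inverses of the same tall matrix.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

# Two DIFFERENT left inverses of the same matrix A:
M1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])
M2 = np.array([[1.0, 0.0, 7.0],    # the third column is arbitrary
               [0.0, 1.0, -2.0]])

assert np.allclose(M1 @ A, np.eye(2))   # M1 A = I_2
assert np.allclose(M2 @ A, np.eye(2))   # M2 A = I_2 as well
assert not np.allclose(M1, M2)          # left inverses exist but are not unique
```

The third column of a left inverse multiplies the zero row of $A$, so it can be anything; that is exactly why the left inverse here is not unique.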
Proof: assume rank$(A) = r$. If $$AN = I_n$$, then $$N$$ is called a right inverse of $$A$$. The equation $Ax = b$ always has at least one solution; the nullspace of $A$ has dimension $n - m$, so there will be $n - m$ free variables. Theorem 2.16 (First Gyrogroup Properties). Thus, $p$ is indeed the unique point in $U$ that minimizes the distance from $b$ to any point in $U$. Theorem A.63: a generalized inverse always exists, although it is not unique in general. Let $(G, \oplus)$ be a gyrogroup. $A_i$ denotes the $i$-th row of $A$ and $A_j$ denotes the $j$-th column of $A$. Indeed, the existence of a unique identity and a unique inverse, both left and right, is a consequence of the gyrogroup axioms, as the following theorem shows, along with other immediate, important results in gyrogroup theory. Left inverse if and only if right inverse: we now want to use the results above about solutions to $Ax = b$ to show that a square matrix $A$ has a left inverse if and only if it has a right inverse. numpy.unique returns the sorted unique elements of an array; there are three optional outputs in addition to the unique elements. Thus both $AG$ and $GA$ are projection matrices. 5. the composition of two injective functions is injective; 6. the composition of two surjective functions is surjective; 7. the composition of two bijections is bijective. Proof: let $f$ be a function, and let $g_1$ and $g_2$ be two functions that are both inverses of $f$. One of its left inverses is the reverse shift operator $u(b_1, b_2, b_3, \ldots) = (b_2, b_3, \ldots)$.
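The identities $AGAG = AG$ and $GAGA = GA$ quoted earlier can be checked numerically with numpy's Moore-Penrose pseudoinverse; the matrix below is an arbitrary example.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
G = np.linalg.pinv(A)      # Moore-Penrose pseudoinverse A^+

# Both AG and GA are projection matrices (P^2 = P):
AG, GA = A @ G, G @ A
assert np.allclose(AG @ AG, AG)
assert np.allclose(GA @ GA, GA)

# The defining generalized-inverse property A A^- A = A also holds:
assert np.allclose(A @ G @ A, A)
```

Here $AG$ projects onto the column space of $A$ and $GA$ onto its row space, which is why both are idempotent.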
In mathematics, and in particular algebra, a generalized inverse of an element $x$ is an element $y$ that has some properties of an inverse element but not necessarily all of them. In fact, if a function has a left inverse and a right inverse, they are both the same two-sided inverse, so it can be called the inverse. If the function is one-to-one, there will be a unique inverse. A rectangular matrix can't have a two-sided inverse, because either that matrix or its transpose has a nonzero nullspace. Remark: not all square matrices are invertible. This is no accident! $u(b_1, b_2, b_3, \ldots) = (b_2, b_3, \ldots)$. Let $A, B, C$ be matrices of orders $m \times n$, $n \times p$, and $p \times q$ respectively. If f contains more than one variable, use the next syntax to specify the independent variable. If $$AN = I_n$$, then $$N$$ is called a right inverse of $$A$$. It would therefore seem logical that when working with matrices, one could take the matrix equation $AX = B$ and divide both sides by $A$ to get $X = B/A$. However, that won't work, because there is no matrix division! This is generally justified because in most applications (e.g., all examples in this article) associativity holds, which makes this notion a generalization of the left/right inverse relative to an identity. Recall that $B$ is the inverse matrix if it satisfies $$AB = BA = I,$$ where $I$ is the identity matrix. Proof: assume rank$(A) = r$. (Generalized inverses are unique if you impose more conditions on $G$; see Section 3 below.) A.12 Generalized Inverse, Definition A.62: let $A$ be an $m \times n$-matrix.
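The reverse shift operator $u$ mentioned above is a left but not a right inverse of the "shift in" map on sequences. A small sketch, modelling infinite sequences as finite tuples (an assumption of this illustration):

```python
def shift_in(b):
    """s: (b1, b2, ...) -> (0, b1, b2, ...); injective but not surjective."""
    return (0,) + b

def shift_out(b):
    """u: (b1, b2, ...) -> (b2, b3, ...); the reverse shift operator."""
    return b[1:]

x = (5, 7, 9)
assert shift_out(shift_in(x)) == x         # u(s(x)) = x: u is a left inverse of s
assert shift_in(shift_out(x)) == (0, 7, 9) # s(u(x)) != x: u is not a right inverse
assert shift_in(shift_out(x)) != x
```

Because $s$ is injective but not surjective, it has left inverses (of which $u$ is one) but no right inverse, matching the function facts listed earlier.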
Yes. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix. Journal of Algebra 31, 209–217 (1974): "Right (Left) Inverse Semigroups," P. S. Venkatesan (National College, Tiruchy, India, and Department of Mathematics, University of Ibadan, Ibadan, Nigeria), communicated by G. B. Preston, received September 7, 1970. A semigroup $S$ (with zero) is called a right inverse semigroup if every (nonnull) principal left ideal of $S$ has a unique idempotent … Let's recall the definitions real quick; I'll try to explain each of them and then state how they are all related. By Proposition 5.15.5, $g$ has a unique right inverse, which is equal to its unique inverse. Some functions have a two-sided inverse map, another function that is the inverse of the first, both from the left and from the right. For instance, the map given by $\vec{v} \mapsto 2 \cdot \vec{v}$ has the two-sided inverse $\vec{v} \mapsto (1/2) \cdot \vec{v}$. In this subsection we will focus on two-sided inverses.
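The uniqueness argument sketched earlier (two inverse matrices $B$ and $C$ of the same $A$ must coincide) can be written out in one line, using only associativity:

```latex
% If AB = BA = I and AC = CA = I, then
B \;=\; BI \;=\; B(AC) \;=\; (BA)C \;=\; IC \;=\; C.
```

The same computation shows more generally that any left inverse equals any right inverse, which is why a two-sided inverse is unique whenever it exists.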
Outside semigroup theory, a unique inverse as defined in this section is sometimes called a quasi-inverse. When $A$ is invertible, its inverse is unique.
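A tiny sketch of a two-sided inverse like the doubling/halving maps above (function names are my own):

```python
def f(v):
    """The map v -> 2*v."""
    return 2 * v

def f_inv(v):
    """The map v -> (1/2)*v, a two-sided inverse of f."""
    return v / 2

for v in [0.0, 1.5, -3.0]:
    assert f_inv(f(v)) == v   # left inverse:  f_inv . f = id
    assert f(f_inv(v)) == v   # right inverse: f . f_inv = id
```

Since f_inv works from both sides, it is "the" inverse of f in the sense discussed above.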