| text | source |
|---|---|
Proving this is in fact very hard. (It's easy to show that $\lambda_1^2$, ..., $\lambda_n^2$ are all eigenvalues of $A^2$ by considering their eigenvectors, but unless you know that the dimensions of the eigenspaces match the multiplicities you're stuck.)
However, the proof of the following statement is actually perfectly possible using elementary arguments (albeit clever arguments):
Suppose A is an $n\times n$ matrix with eigenvalues $\lambda_1$, ..., $\lambda_n$, including each eigenvalue according to its multiplicity. Then for any polynomial $g(x)$, $g(A)$ has eigenvalues $g(\lambda_1)$, ..., $g(\lambda_n)$ including multiplicity.
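The statement is easy to sanity-check numerically. A minimal sketch in Python, using an illustrative 2×2 upper-triangular matrix (so its eigenvalues can be read off the diagonal) and $g(x) = x^2 + 1$; the matrix and polynomial are arbitrary choices for the check:

```python
# A = [[2, 1], [0, 3]] is upper triangular, so its eigenvalues are 2 and 3.
# For g(x) = x^2 + 1, g(A) = A^2 + I should have eigenvalues g(2) = 5 and g(3) = 10.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [0, 3]]
I = [[1, 0], [0, 1]]
gA = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(matmul(A, A), I)]

# Eigenvalues of a 2x2 matrix M are the roots of x^2 - tr(M) x + det(M),
# so it suffices to check the trace and determinant of g(A).
tr = gA[0][0] + gA[1][1]                         # should be 5 + 10 = 15
det = gA[0][0] * gA[1][1] - gA[0][1] * gA[1][0]  # should be 5 * 10 = 50
print(tr, det)  # 15 50
```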
-
A nice example appeared on this web site today: every prime number $p\ge 5$ satisfies $24\mid p^2-1$.
As posed, the problem sounds like it might be difficult. But it is very easy to show the more general result that every $n$ of the form $6k\pm 1$ has the required property.
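The more general claim is easy to confirm by machine for many values of $n = 6k \pm 1$ (a sketch, not a proof — the point is that $n^2-1 = (n-1)(n+1)$ always picks up factors of $8$ and $3$):

```python
# Every n of the form 6k +/- 1 should satisfy 24 | n^2 - 1.
candidates = [6 * k + r for k in range(1, 200) for r in (-1, 1)]
assert all((n * n - 1) % 24 == 0 for n in candidates)
print("verified for", len(candidates), "values")
```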
- | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9343951680216529,
"lm_q1q2_score": 0.857123176955193,
"lm_q2_score": 0.9173026641072386,
"openwebmath_perplexity": 539.8623476551608,
"openwebmath_score": 0.9017611145973206,
"tags": null,
"url": "http://math.stackexchange.com/questions/899109/problems-that-become-easier-in-a-more-general-form"
} |
# For a prime $p$, determine the number of positive integers whose greatest proper divisor is $p$
I'm having a bit of difficulty writing a graceful proof for the following problem:
For a prime $p$, determine the number of positive integers whose greatest proper divisor is $p$.
Let $A$ be the set of positive integers whose greatest proper divisor is $p$. I will show that $A=\{\alpha p\,|\,\alpha \;\text{prime},\; \alpha \leq p\}$ so that $|A|=\pi(p)$, the number of primes less than or equal to $p$.
Assume that $n=\alpha p$ for some prime $\alpha \leq p$. The factors of $n$ are $1, \alpha, p, \alpha p$. (In the case that $\alpha =p$, the factors of $n$ are $1,p, p^2$.) The greatest proper divisor of $n$ therefore is $p$ and so $n\in A$.
Conversely, for any number in $A$, clearly $p$ is the greatest prime dividing it. Moreover, it cannot have three or more prime factors: if $\alpha_1\neq \alpha_2$ are two primes dividing the number, neither of which is $p$, then $\alpha_1 p$ is a proper divisor of it with $p<\alpha_1 p$, from which we infer that $p$ is not the greatest proper divisor. Therefore, a number in $A$ must have at most two prime factors, one being $p$. (We can rule out that a number in $A$ has exactly the one prime factor $p$, i.e. equals $p$ itself, since the greatest proper divisor of $p$ is $1$, not $p$.) Hence, $A\subset \{\alpha p\,|\,\alpha \;\text{prime},\; \alpha \leq p\}$.
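The conclusion $|A| = \pi(p)$ can be checked by brute force for small primes (a Python sketch; the helper names are ad hoc):

```python
# The greatest proper divisor of n is n divided by its smallest prime factor.
def greatest_proper_divisor(n):
    return next(n // d for d in range(2, n + 1) if n % d == 0)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for p in [2, 3, 5, 7, 11, 13]:
    # Members of A are of the form q*p with q prime, q <= p, so n <= p^2.
    A = [n for n in range(2, p * p + 1) if greatest_proper_divisor(n) == p]
    pi_p = sum(is_prime(q) for q in range(2, p + 1))
    assert A == [q * p for q in range(2, p + 1) if is_prime(q)]
    assert len(A) == pi_p
print("ok")
```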
Perhaps there's a more elegant proof, but I'm concerned with my writing style. Can anyone help me rephrase my argument in a smoother way or critique it?
edit: incorporated the changes suggested by Henning.
-
Isn't that what I did in the first paragraph? – sasha Oct 5 '11 at 22:48
perhaps you did – Henry Oct 5 '11 at 22:54
Looks sound and direct to me. There is only minor copy-editing to do, such as:
• The condition $2\le\alpha$ is redundant; it is already implied by $\alpha$ being prime. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806523850542,
"lm_q1q2_score": 0.8571032351424774,
"lm_q2_score": 0.8740772450055544,
"openwebmath_perplexity": 188.67262868551668,
"openwebmath_score": 0.9306151866912842,
"tags": null,
"url": "http://math.stackexchange.com/questions/70177/for-a-prime-p-determine-the-number-of-positive-integers-whose-greatest-proper"
} |
• The part "Let $n\in\{\alpha p\,|\,\alpha \;\text{prime},\; 2\leq \alpha \leq p\}$. Then $n=\alpha p$ for some prime $\alpha$, $2\leq \alpha \leq p$" is probably more verbose than you need to be. I would just write, "Assume that $n=\alpha p$ for some prime $\alpha \leq p$."
• To make the structure of the proof more explicit, you could write "Conversely," rather than "Now," at the beginning of the third paragraph. This tells the reader that you have now finished something and are proceeding in the opposite direction of what you just proved.
• Calling the other prime $q$ would be somewhat more conventional than $\alpha$.
-
Great, thank you for the suggestions. Last thing--if you could judge from this one post, would you say that my mathematical writing is verbose overall? Definitely would like to change that... – sasha Oct 6 '11 at 7:49 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806523850542,
"lm_q1q2_score": 0.8571032351424774,
"lm_q2_score": 0.8740772450055544,
"openwebmath_perplexity": 188.67262868551668,
"openwebmath_score": 0.9306151866912842,
"tags": null,
"url": "http://math.stackexchange.com/questions/70177/for-a-prime-p-determine-the-number-of-positive-integers-whose-greatest-proper"
} |
# Math Help - trig integration
1. ## trig integration
After I complete $\int \frac{\sqrt{x^2-9}}{x}\,dx$ by sec substitution I get
$3\sec^{-1}\left(\frac{x}{3}\right)-\sqrt{x^2-3}+C$.
But this doesn't match the calculator's answer:
$3\tan^{-1}\left(\frac{\sqrt{x^2-9}}{3}\right) - \sqrt{x^2-3}+C$
After differentiating the result my answer is different as well... I thought the answers we got would be equivalent though...
Then the second question is $f(x) = 3$. Find the equation.
2. Originally Posted by DivideBy0
After I complete $\int \frac{\sqrt{x^2-9}}{3}\,dx$ by sec substitution I get
$3\sec^{-1}\left(\frac{x}{3}\right)-\sqrt{x^2-3}+C$.
But this doesn't match the calculator's answer:
$3\tan^{-1}\left(\frac{\sqrt{x^2-9}}{3}\right) - \sqrt{x^2-3}+C$
After differentiating the result my answer is different as well... I thought the answers we got would be equivalent though...
Then the second question is $f(x) = 3$. Find the equation.
I guess the right integral is: $\int \frac{\sqrt{x^2-9}}{x}\,dx$
3. Woops, sorry, you're right that's what I meant
4. Hello, DivideBy0!
After I complete $\int \frac{\sqrt{x^2-9}}{x}\,dx$ by sec substitution I get: . $3\sec^{-1}\! \left(\frac{x}{3}\right)-\sqrt{x^2-3}+C$.
But this doesn't match the calculator's answer: . $3\tan^{-1}\!\left(\frac{\sqrt{x^2-9}}{3}\right) - \sqrt{x^2-3}+C$
The answers are equivalent.
Let $\theta \:=\:\sec^{-1}\!\left(\frac{x}{3}\right)$
Then we have: . $\sec\theta \:=\:\frac{x}{3} \:=\:\frac{hyp}{adj}$
$\theta$ is in a right triangle with: $adj = 3,\;hyp = x$
. . Using Pythagoras: . $opp = \sqrt{x^2-9}$
So: . $\tan\theta \:=\:\frac{\sqrt{x^2-9}}{3}$
Hence:. . $\theta \:=\:\tan^{-1}\!\left(\frac{\sqrt{x^2-9}}{3}\right)$
See? . . . . . $\sec^{-1}\!\left(\frac{x}{3}\right) \;=\;\tan^{-1}\left(\frac{\sqrt{x^2-9}}{3}\right)$
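The identity can be spot-checked numerically. Since Python's math module has no arcsecant, the sketch below uses $\sec^{-1}(y) = \cos^{-1}(1/y)$, which follows from the same triangle:

```python
import math

# Check that sec^{-1}(x/3) = tan^{-1}(sqrt(x^2 - 9)/3) for several x > 3.
for x in [3.5, 5.0, 10.0, 100.0]:
    lhs = math.acos(3.0 / x)                        # sec^{-1}(x/3)
    rhs = math.atan(math.sqrt(x * x - 9.0) / 3.0)   # tan^{-1}(sqrt(x^2-9)/3)
    assert abs(lhs - rhs) < 1e-12
print("identical for x > 3")
```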
5. Originally Posted by DivideBy0
After I complete $\int \frac{\sqrt{x^2-9}}{x}\,dx$ by sec substitution I get
$3\sec^{-1}\left(\frac{x}{3}\right)-\sqrt{x^2-3}+C$. | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806523850543,
"lm_q1q2_score": 0.857103233534213,
"lm_q2_score": 0.87407724336544,
"openwebmath_perplexity": 2682.5602332306307,
"openwebmath_score": 0.957465648651123,
"tags": null,
"url": "http://mathhelpforum.com/calculus/24574-trig-integration.html"
} |
But this doesn't match the calculator's answer:
$3\tan^{-1}\left(\frac{\sqrt{x^2-9}}{3}\right) - \sqrt{x^2-3}+C$
After differentiating the result my answer is different as well... I thought the answers we got would be equivalent though...
Then the second question is $f(x) = 3$. Find the equation.
The answer is
$-3\sec^{-1}\left(\frac{x}{3}\right)+\sqrt{x^2-9}+C$.
or
$-3\cos^{-1}\left(\frac{3}{x}\right)+\sqrt{x^2-9}+C$.
6. Thanks Soroban for clearing it up a bit more... but when I differentiated on the calculator I still got
$\frac{d}{\,dx}\left(3\sec^{-1}(\frac{x}{3})-\sqrt{x^2-3}\right)=\frac{9|\frac{1}{x}|}{\sqrt{x^2-9}}-\frac{x}{\sqrt{x^2-3}}$
But the original expression obviously has no absolute value signs.
7. Originally Posted by DivideBy0
$\int \frac{\sqrt{x^2-9}}{x}\,dx$
You don't even need trig. sub. - this only requires a simple substitution.
Step #1: The make-up.
$\int {\frac{{\sqrt {x^2 - 9} }}
{x}\,dx} = \int {\frac{{\sqrt {x^2 - 9} \cdot x}}
{{x^2 }}\,dx} .$
Step #2: The substitution.
$u^2 = x^2 - 9 \implies u\,du = x\,dx,$
$\int {\frac{{\sqrt {x^2 - 9} }}
{x}\,dx} = \int {\frac{{u^2 }}
{{u^2 + 9}}\,du} .$
Step #3: Simple trick & mission almost-accomplished.
$\int {\frac{{u^2 }}
{{u^2 + 9}}\,du} = \int {du} - 9\int {\frac{1}
{{u^2 + 9}}\,du} = u - 3\arctan \frac{u}
{3}+k.$
Step #4: Back substitute.
$\int {\frac{{\sqrt {x^2 - 9} }}
{x}\,dx} = \sqrt {x^2 - 9} - 3\arctan \frac{{\sqrt {x^2 - 9} }}
{3} + k.$ | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806523850543,
"lm_q1q2_score": 0.857103233534213,
"lm_q2_score": 0.87407724336544,
"openwebmath_perplexity": 2682.5602332306307,
"openwebmath_score": 0.957465648651123,
"tags": null,
"url": "http://mathhelpforum.com/calculus/24574-trig-integration.html"
} |
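The result of Step #4 can be spot-checked by differentiating numerically (a sketch; the sample points and tolerance are arbitrary):

```python
import math

# F(x) = sqrt(x^2 - 9) - 3*atan(sqrt(x^2 - 9)/3); its derivative should
# equal the integrand sqrt(x^2 - 9)/x for x > 3.
def F(x):
    u = math.sqrt(x * x - 9.0)
    return u - 3.0 * math.atan(u / 3.0)

for x in [4.0, 7.0, 20.0]:
    h = 1e-6
    numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    exact = math.sqrt(x * x - 9.0) / x          # the integrand
    assert abs(numeric - exact) < 1e-6
print("antiderivative checks out")
```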
# Ways of selecting $3$ balls out of $9$ balls if at least one black ball is to be selected
A box contains two white, three black, and four red balls. In how many ways can three balls be drawn from the box if at least one black ball is to be included in the draw.
The answer given is $$64$$. I attempted the problem with two different approaches.
My two different attempts:
Attempt #1
Number of ways in which any $$3$$ balls can be drawn out of $$9$$ balls $$\binom{9}{3}=84$$ Number of ways in which no black balls are drawn out of $$9$$ balls (choosing $$3$$ balls from the remaining $$6$$ balls) $$\binom{6}{3}=20$$ Thus, the number of ways of choosing at least $$1$$ black ball is $$84-20=64$$
Attempt #2
Number of ways of drawing $$1$$ black ball out of $$3$$ black balls $$\binom{3}{1}=3$$ Now, we have to draw two more balls, we can choose those balls from the $$8$$ remaining balls $$\binom{8}{2}=28$$ Since both the above events are associated with each other, by fundamental principle of counting, the number of ways of drawing at least one black ball out of the $$9$$ balls is $$3\times28=84$$
I think my second attempt should also be right. Please explain what I'm doing wrong with my second attempt.
The second attempt counts the combination $$\{B_1,B_2,B_3\}$$ three times as $$(B_1, \{B_2, B_3\})$$, $$(B_2, \{B_3, B_1\})$$ and $$(B_3, \{B_1, B_2\})$$.
N.B: $$(B_i, \{B_j, B_k\})$$ means pick $$B_i$$ first, followed by the combination $$\{B_j, B_k\}$$.
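Both counts can be reproduced by brute force over labeled balls (a Python sketch):

```python
from itertools import combinations

# 2 white, 3 black, 4 red balls, labeled so draws are distinguishable.
balls = ['W1', 'W2', 'B1', 'B2', 'B3', 'R1', 'R2', 'R3', 'R4']

# Correct count (Attempt #1): draws of 3 containing at least one black ball.
draws = [c for c in combinations(balls, 3) if any(b.startswith('B') for b in c)]
assert len(draws) == 64

# Attempt #2's overcount: designate one black ball, then pick any 2 of the rest.
overcount = sum(1 for b in ['B1', 'B2', 'B3']
                for rest in combinations([x for x in balls if x != b], 2))
assert overcount == 84   # 3 * C(8,2); some draws are counted 2 or 3 times
print(len(draws), overcount)  # 64 84
```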
We have the basic product rule
$$|A\times B| = \#\{(a,b) \mid a \in A,\ b \in B\} = |A| \times |B|$$
for cardinals which holds for any sets $$A$$ and $$B$$.
By writing $$3 \times 28$$, you are actually counting $$\{B_1, B_2, B_3\} \times \{B_2,B_3, \dots\}$$, which doesn't answer the problem.
In set theory, $$(a,b)$$ is defined as $$\{\{a\},\{a,b\}\}$$, so order matters. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806518175514,
"lm_q1q2_score": 0.8571032249968491,
"lm_q2_score": 0.8740772351648677,
"openwebmath_perplexity": 154.21553549898297,
"openwebmath_score": 0.8001653552055359,
"tags": null,
"url": "https://math.stackexchange.com/questions/3030285/ways-of-selecting-3-balls-out-of-9-balls-if-at-least-one-black-ball-is-to-be"
} |
• Would you please elaborate the answer? I get that the second attempt is wrong, but I'm unable to understand it clearly. – Azelf Dec 7 '18 at 20:14
• @Azelf My original notation would be clear enough. I've edited my answer to make it even clearer. – GNUSupporter 8964民主女神 地下教會 Dec 7 '18 at 20:57
A selection of three balls that includes at least one black ball has either one black ball and two of the other six balls, two blacks balls and one of the other six balls, or three black balls and none of the other six balls. Therefore, the number of ways of selecting at least one black ball when three balls are selected from two white, three black, and four red balls is $$\binom{3}{1}\binom{6}{2} + \binom{3}{2}\binom{6}{1} + \binom{3}{3}\binom{6}{0} = 45 + 18 + 1 = 64$$
You counted each case in which $$k$$ black balls were selected $$k$$ times, once for each of the $$k$$ ways you could have designated one of those black balls as the designated black ball. Notice that $$\color{red}{\binom{1}{1}}\binom{3}{1}\binom{6}{2} + \color{red}{\binom{2}{1}}\binom{3}{2}\binom{6}{1} + \color{red}{\binom{3}{1}}\binom{3}{3}\binom{6}{0} = 45 + 36 + 3 = 84$$ To illustrate, place numbers on the black balls. If you select black balls $$b_1$$ and $$b_2$$ and a red ball, you count this selection twice: $$\begin{array}{c c} \text{designated black ball} & \text{additional balls}\\ b_1 & b_2, r\\ b_2 & b_1, r \end{array}$$ If you select all three black balls, you count this selection three times: $$\begin{array}{c c} \text{designated black ball} & \text{additional balls}\\ b_1 & b_2, b_3\\ b_2 & b_1, b_3\\ b_3 & b_1, b_2 \end{array}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806518175514,
"lm_q1q2_score": 0.8571032249968491,
"lm_q2_score": 0.8740772351648677,
"openwebmath_perplexity": 154.21553549898297,
"openwebmath_score": 0.8001653552055359,
"tags": null,
"url": "https://math.stackexchange.com/questions/3030285/ways-of-selecting-3-balls-out-of-9-balls-if-at-least-one-black-ball-is-to-be"
} |
Is there a combinatorial way to see the link between the beta and gamma functions?
The Wikipedia page on the beta function gives a simple formula for it in terms of the gamma function. Using that and the fact that $\Gamma(n+1)=n!$, I can prove the following formula: $$\begin{eqnarray*} \frac{a!b!}{(a+b+1)!} & = & \frac{\Gamma(a+1)\Gamma(b+1)}{\Gamma(a+1+b+1)}\\ & = & B(a+1,b+1)\\ & = & \int_{0}^{1}t^{a}(1-t)^{b}dt\\ & = & \int_{0}^{1}t^{a}\sum_{i=0}^{b}\binom{b}{i}(-t)^{i}dt\\ & = & \int_{0}^{1}\sum_{i=0}^{b}\binom{b}{i}(-1)^{i}t^{a+i}dt\\ & = & \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\int_{0}^{1}t^{a+i}dt\\ & = & \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\left[\frac{t^{a+i+1}}{a+i+1}\right]_{t=0}^{1}\\ & = & \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{1}{a+i+1}\\ b! & = & \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{(a+b+1)!}{a!(a+i+1)} \end{eqnarray*}$$ This last formula involves only natural numbers and operations familiar in combinatorics, and it feels very much as if there should be a combinatoric proof, but I've been trying for a while and can't see it. I can prove it in the case $a=0$: $$\begin{eqnarray*} & & \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{(b+1)!}{0!(i+1)}\\ & = & \sum_{i=0}^{b}(-1)^{i}\frac{b!(b+1)!}{i!(b-i)!(i+1)}\\ & = & b!\sum_{i=0}^{b}(-1)^{i}\frac{(b+1)!}{(i+1)!(b-i)!}\\ & = & b!\sum_{i=0}^{b}(-1)^{i}\binom{b+\text{1}}{i+1}\\ & = & b!\left(1-\sum_{i=0}^{b+1}(-1)^{i}\binom{b+\text{1}}{i}\right)\\ & = & b! \end{eqnarray*}$$ Can anyone see how to prove it for arbitrary $a$? Thanks! | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9905874123575265,
"lm_q1q2_score": 0.8570800988397201,
"lm_q2_score": 0.8652240964782012,
"openwebmath_perplexity": 176.87786336871503,
"openwebmath_score": 0.8793419003486633,
"tags": null,
"url": "https://math.stackexchange.com/questions/72067/is-there-a-combinatorial-way-to-see-the-link-between-the-beta-and-gamma-function"
} |
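The derived identity, in the equivalent form $\frac{a!b!}{(a+b+1)!} = \sum_{i=0}^{b}\binom{b}{i}\frac{(-1)^{i}}{a+i+1}$, can be verified exactly with rational arithmetic for small $a, b$ (a sketch):

```python
from fractions import Fraction
from math import comb, factorial

# Exact check: a! b! / (a+b+1)! == sum_i C(b,i) (-1)^i / (a+i+1).
for a in range(6):
    for b in range(6):
        lhs = Fraction(factorial(a) * factorial(b), factorial(a + b + 1))
        rhs = sum(Fraction(comb(b, i) * (-1)**i, a + i + 1)
                  for i in range(b + 1))
        assert lhs == rhs
print("identity holds for a, b < 6")
```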
• The usual interpretation of "combinatoric proof" (that I'm accustomed to) is to show that the beta function counts something; what exactly do you mean by "combinatoric proof" here? Oct 12, 2011 at 17:15
• In any event: it might be more interesting to establish this relationship instead... Oct 12, 2011 at 17:18
• I'm with @J.M. - your derivation for $a=0$ doesn't really look like a combinatorial proof, as you're using only symbolic manipulation instead of counting and combining objects.
– anon
Oct 12, 2011 at 21:03
Here's a combinatorial argument for $a!\, b! = \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{(a+b+1)!}{(a+i+1)}$, which is just a slight rewrite of the identity you want to show.
Suppose you have $a$ red balls numbered $1$ through $a$, $b$ blue balls numbered $1$ through $b$, and one black ball.
Question: How many permutations of the balls have all the red balls first, then the black ball, and then the blue balls?
Answer 1: $a! \,b!$. There are $a!$ ways to choose the red balls to go in the first $a$ slots, $b!$ ways to choose the blue balls to go in the last $b$ slots, and $1$ way for the black ball to go in slot $a+1$.
Answer 2: Let $A$ be the set of all permutations in which the black ball appears after all the red balls (irrespective of where the blue balls go). Let $B_i$ be the subset of $A$ such that the black ball appears after blue ball $i$. Then the number of permutations we're after is also given by $|A| - \left|\bigcup_{i=1}^b B_i\right|$. Since the probability that the black ball appears last of any particular $a+i+1$ balls is $\frac{1}{a+i+1}$, and there are $(a+b+1)!$ total ways to arrange the balls, by the principle of inclusion-exclusion we get $$\frac{(a+b+1)!}{a+1} - \sum_{i=1}^{b}\binom{b}{i}(-1)^{i+1}\frac{(a+b+1)!}{(a+i+1)} = \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{(a+b+1)!}{(a+i+1)}.$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9905874123575265,
"lm_q1q2_score": 0.8570800988397201,
"lm_q2_score": 0.8652240964782012,
"openwebmath_perplexity": 176.87786336871503,
"openwebmath_score": 0.8793419003486633,
"tags": null,
"url": "https://math.stackexchange.com/questions/72067/is-there-a-combinatorial-way-to-see-the-link-between-the-beta-and-gamma-function"
} |
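The double count in this answer can be reproduced by brute force for small $a, b$ (a sketch; for $a = 2$, $b = 3$ there are only $6! = 720$ permutations to scan):

```python
from itertools import permutations
from math import factorial

# Count permutations with all red balls first, then the black ball, then blues.
a, b = 2, 3
reds = ['r%d' % i for i in range(a)]
blues = ['b%d' % i for i in range(b)]
count = sum(1 for p in permutations(reds + blues + ['black'])
            if all(x in reds for x in p[:a]) and p[a] == 'black')
assert count == factorial(a) * factorial(b)   # Answer 1: a! b!
print(count)  # 12
```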
• Fantastic! How did you find this? Oct 12, 2011 at 22:17
• @Steven: I thought about it for way too long. :) More seriously, an alternating binomial sum smells like inclusion-exclusion to me. I also thought I could generalize my answer to a similar question, and that turned out to work, although it took a while to get the formulation right. I kept trying to apply inclusion-exclusion to the full set of permutations, and it finally hit me that I only needed to consider subsets of the set I call $A$. And thanks! Oct 12, 2011 at 22:30
• Nicely done indeed!
– robjohn
Oct 13, 2011 at 0:37
• @robjohn: And thanks for the edit. Not sure how I managed to leave that out! :) Oct 13, 2011 at 1:32
• Beautiful, this is exactly the kind of answer I was hoping for, thank you! Oct 13, 2011 at 7:04
Using partial fractions, we have that $$\frac{1}{(a+1)(a+2)\dots(a+b+1)}=\frac{A_1}{a+1}+\frac{A_2}{a+2}+\dots+\frac{A_{b+1}}{a+b+1}\tag{1}$$ Use the Heaviside Method; multiply $(1)$ by $(a+k)$ and set $a=-k$ to solve $(1)$ for $A_k$: $$A_k=\frac{(-1)^{k-1}}{(k-1)!(b-k+1)!}=\frac{(-1)^{k-1}}{b!}\binom{b}{k-1}\tag{2}$$ Plugging $(2)$ into $(1)$, yields $$\frac{a!}{(a+b+1)!}=\sum_{k=1}^{b+1}\frac{(-1)^{k-1}}{b!}\binom{b}{k-1}\frac{1}{a+k}\tag{3}$$ Multiplying $(3)$ by $b!$ and reindexing, gives us $$\frac{a!b!}{(a+b+1)!}=\sum_{k=0}^{b}(-1)^k\binom{b}{k}\frac{1}{a+k+1}\tag{4}$$ and $(4)$ is your identity.
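The coefficients in $(2)$ and the expansion $(3)$ can be verified exactly with rational arithmetic (a sketch for one value of $b$):

```python
from fractions import Fraction
from math import factorial

# Check (2) and (3): 1/((a+1)(a+2)...(a+b+1)) = a!/(a+b+1)! equals the
# partial-fraction sum with A_k = (-1)^{k-1} / ((k-1)! (b-k+1)!).
b = 5
A = [Fraction((-1)**(k - 1), factorial(k - 1) * factorial(b - k + 1))
     for k in range(1, b + 2)]                     # A_1, ..., A_{b+1}
for a in range(10):
    lhs = Fraction(factorial(a), factorial(a + b + 1))
    rhs = sum(Ak / (a + k) for k, Ak in enumerate(A, start=1))
    assert lhs == rhs
print("partial fractions verified")
```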
Update: Starting from the basic binomial identity $$(1-x)^b=\sum_{k=0}^b(-1)^k\binom{b}{k}x^k\tag{5}$$ multiply both sides of $(5)$ by $x^a$ and integrate from $0$ to $1$: $$B(a+1,b+1)=\sum_{k=0}^b(-1)^k\binom{b}{k}\frac{1}{a+k+1}\tag{6}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9905874123575265,
"lm_q1q2_score": 0.8570800988397201,
"lm_q2_score": 0.8652240964782012,
"openwebmath_perplexity": 176.87786336871503,
"openwebmath_score": 0.8793419003486633,
"tags": null,
"url": "https://math.stackexchange.com/questions/72067/is-there-a-combinatorial-way-to-see-the-link-between-the-beta-and-gamma-function"
} |
• FYI: This argument appears on pages 188-189 of Concrete Mathematics, 2nd edition, where it is discussed in the context of the $n$th forward difference formula. Oct 12, 2011 at 20:50
• This identity is one of my favorite uses of partial fractions and it turns up when using Euler's Transform for series acceleration.
– robjohn
Oct 12, 2011 at 21:00
• @Mike: not surprising since it computes the $b^{th}$ forward difference of $\frac{1}{a+1}$. Thanks for the reference!
– robjohn
Oct 12, 2011 at 21:02
Seven years later I found another way to attack this. Define $$f(b, a) = \frac{a!b!}{(a+b+1)!}$$ and $$h(b, a) = \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{1}{a+i+1}$$. To connect the two, we define $$g$$ such that $$g(0, a) = \frac{1}{a + 1}$$ and $$g(b + 1, a) = g(b, a) - g(b, a + 1)$$ and prove by induction in $$b$$ that $$f = g = h$$. In each case the base case is straightforward and we consider only the inductive step.
$$\begin{eqnarray*} & & g(b + 1, a) \\ & = & g(b, a) - g(b, a + 1) \\ & = & f(b, a) - f(b, a + 1) \\ & = & \frac{a!b!}{(a+b+1)!} - \frac{(a+1)!b!}{(a+b+2)!} \\ & = & \frac{a!b!(a + b + 2)}{(a+b+2)!} - \frac{a!b!(a+1)}{(a+b+2)!} \\ & = & \frac{a!b!(b+1)}{(a+b+2)!} \\ & = & \frac{a!(b+1)!}{(a+b+2)!} \\ & = & f(b+1, a)\\ \end{eqnarray*}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9905874123575265,
"lm_q1q2_score": 0.8570800988397201,
"lm_q2_score": 0.8652240964782012,
"openwebmath_perplexity": 176.87786336871503,
"openwebmath_score": 0.8793419003486633,
"tags": null,
"url": "https://math.stackexchange.com/questions/72067/is-there-a-combinatorial-way-to-see-the-link-between-the-beta-and-gamma-function"
} |
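The three functions $f$, $g$, $h$ can be compared exactly for small arguments, using rational arithmetic and the finite-difference definition of $g$ (a Python sketch):

```python
from fractions import Fraction
from math import comb, factorial

def f(b, a):
    return Fraction(factorial(a) * factorial(b), factorial(a + b + 1))

def g(b, a):
    # g(0, a) = 1/(a+1);  g(b+1, a) = g(b, a) - g(b, a+1)
    if b == 0:
        return Fraction(1, a + 1)
    return g(b - 1, a) - g(b - 1, a + 1)

def h(b, a):
    return sum(Fraction(comb(b, i) * (-1)**i, a + i + 1) for i in range(b + 1))

for b in range(7):
    for a in range(7):
        assert f(b, a) == g(b, a) == h(b, a)
print("f = g = h verified")
```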
$$\begin{eqnarray*} & & g(b + 1, a) \\ & = & g(b, a) - g(b, a + 1) \\ & = & h(b, a) - h(b, a + 1) \\ & = & \left(\sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{1}{a+i+1}\right) - \left(\sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{1}{a+i+2}\right) \\ & = & \frac{1}{a+1} + \left(\sum_{i=1}^{b}\binom{b}{i}(-1)^{i}\frac{1}{a+i+1}\right) - \left(\sum_{i=0}^{b-1}\binom{b}{i}(-1)^{i}\frac{1}{a+i+2}\right) - (-1)^{b}\frac{1}{a+b+2}\\ & = & \frac{1}{a+1} + \left(\sum_{i=1}^{b}\binom{b}{i}(-1)^{i}\frac{1}{a+i+1}\right) + \left(\sum_{i=1}^{b}\binom{b}{i-1}(-1)^{i}\frac{1}{a+i+1}\right) + (-1)^{b+1}\frac{1}{a+b+2}\\ & = & \frac{1}{a+1} + \left(\sum_{i=1}^{b}\left(\binom{b}{i} + \binom{b}{i-1}\right)(-1)^{i}\frac{1}{a+i+1}\right) + (-1)^{b+1}\frac{1}{a+b+2}\\ & = & \frac{1}{a+1} + \left(\sum_{i=1}^{b}\binom{b+1}{i}(-1)^{i}\frac{1}{a+i+1}\right) + (-1)^{b+1}\frac{1}{a+b+2}\\ & = & \sum_{i=0}^{b+1}\binom{b+1}{i}(-1)^{i}\frac{1}{a+i+1} \\ & = & h(b + 1, a) \\ \end{eqnarray*}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9905874123575265,
"lm_q1q2_score": 0.8570800988397201,
"lm_q2_score": 0.8652240964782012,
"openwebmath_perplexity": 176.87786336871503,
"openwebmath_score": 0.8793419003486633,
"tags": null,
"url": "https://math.stackexchange.com/questions/72067/is-there-a-combinatorial-way-to-see-the-link-between-the-beta-and-gamma-function"
} |
# Taxonomy of polygons
I've written a tree-like layout to help myself remember which polygons are sub-types of others, because I always get confused. I was just wondering if this is right:
|quadrilateral
|parallelogram
|rectangle
|square
|oblong
|rhomboid
|kite (corrected after rschwieb's answer, a rhombus is a kite)
|rhombus
|square
|trapezoid (AmE) / trapezium (BrE)
So a square is a rhombus and a parallelogram.
Also, I know that there are two definitions of "trapezoid." Under the inclusive definition "trapezoid" is immediately under "quadrilateral" in the tree and above parallelogram and kite. Under this definition all squares are trapezoids.
Is my tree correct, at least ignoring the difference in the trapezoid definition difference?
Edit: Thanks to rschwieb for helping me realise that a rhombus is a kite. There is also a nice Euler diagram Wikipedia
You need to see this post I wrote some time ago.
In short, the education system (at least in the US) has confused this issue and made it harder than it has to be.
There is a very natural hierarchy that depends on logical connections between quadrilaterals, and there is really no benefit to using the “exclusive version” of definitions.
I would argue for this picture for the main characters:
Actually there is a little puzzle where you can figure out a new node to insert between "quadrilateral" and "kite" which also connects to "parallelogram," and I have never seen this shape mentioned in a textbook. It's just not common enough to encounter in normal life. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9626731115849662,
"lm_q1q2_score": 0.8570623079832539,
"lm_q2_score": 0.89029422102812,
"openwebmath_perplexity": 830.8944931659497,
"openwebmath_score": 0.7078220248222351,
"tags": null,
"url": "https://math.stackexchange.com/questions/2746757/taxonomy-of-polygons"
} |
• I see in your chart you've used the inclusive definition of trapezoid, that's all good. I see the difference between this chart and the "frankenstein" chart you referred to is that the kite is in a separate area on its own, but in yours it traces down to rhombus. In yours it shows a rhombus as being a kite. This seems unintuitive with my everyday notion of a kite, but Wikipedia says: "If all four sides of a kite have the same length (that is, if the kite is equilateral), it must be a rhombus." Also, the other chart doesn't link the kite to the rhombus at all. I think this chart is easier. – Zebrafish Apr 21 '18 at 2:21
• There's just one thing I see wrong with it, a rectangle is an isosceles trapezoid? Are those two supposed to be connected with a line over on the right? – Zebrafish Apr 21 '18 at 2:49
• "Rectangles and squares are usually considered to be special cases of isosceles trapezoids though some sources would exclude them." Wow, there are a few discrepancies in definitions, that's not helping. – Zebrafish Apr 21 '18 at 2:52
• @Zebrafish as I mentioned “exclusive definitions” aren’t as useful, they’re harder to state, and make proving things less convenient. – rschwieb Apr 21 '18 at 3:41
• @Zebrafish yes, I would call a rectangle an isosceles trapezoid. – rschwieb Apr 21 '18 at 3:43
Can't a kite also be a parallelogram, in the case where all sides are equal?
That of course depends on your definition of kite... I've rarely seen the term used at all. You can exclude that case specifically and your tree is then okay.
Wikipedia's Kite (geometry) article seems to include that in their special cases.
EDIT: In that special case, it can also be a rhombus or a square
• A kite that’s also a parallelogram is a rhombus. – rschwieb Apr 21 '18 at 3:45 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9626731115849662,
"lm_q1q2_score": 0.8570623079832539,
"lm_q2_score": 0.89029422102812,
"openwebmath_perplexity": 830.8944931659497,
"openwebmath_score": 0.7078220248222351,
"tags": null,
"url": "https://math.stackexchange.com/questions/2746757/taxonomy-of-polygons"
} |
# Mathematical Induction PART 2
In Part 1
It shows that
Step 1 is Show it is true for n=1
Step 2 is Show that if n=k is true then n=k+1 is also true
so How to Do it?
Step 1 : *prove* it is true for n=1 (normally)
Step 2 : done in this way normally we can prove it out..
First ~ Assume it is true for n=k
Second ~ Prove it is true for n=k+1 (normally we can use the n=k case as fact )
(Here n and k range over the positive integers.)
EXAMPLE
PROVE 1 + 3 + 5 + ... + (2n-1) = n^2
First : Show it is true for n=1
from LEFT : for n=1 the sum is just its first term, (2(1)-1) = 1
from RIGHT n^2 = (1^2) =1 SO 1 = 1^2 is True
SECOND : Assume it is true for n=k
1 + 3 + 5 + ... + (2k-1) = k^2 is True
(prove it by yourself :) write down your steps in comment )
THIRD : prove it is true for k+1
1 + 3 + 5 + ... + (2k-1) + (2(k+1)-1) = (k+1)^2 ... ? (prove it by yourself :) write down your steps in comment )
We know that 1 + 3 + 5 + ... + (2k-1) = k^2 (the assumption above), so we can do a replacement for all but the last term:
k^2 + (2(k+1)-1) = (k+1)^2
THEN expanding all terms:
LEFT : k^2 + 2k + 2 - 1 = k^2 + 2k+1
NEXT simplifying :
RIGHT :k^2 + 2k + 1 = k^2 + 2k + 1
LEFT and RIGHT are the same! So it is true, it is proven.
THEREFORE :
1 + 3 + 5 + ... + (2(k+1)-1) = (k+1)^2 is TRUE!!!!!!
Mathematical Induction IS done !!! :)
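The formula the induction establishes is also easy to confirm by direct computation for many n (a quick check, not a replacement for the proof):

```python
# Check 1 + 3 + 5 + ... + (2n-1) = n^2 for n = 1 .. 999.
for n in range(1, 1000):
    assert sum(2 * i - 1 for i in range(1, n + 1)) == n**2
print("1 + 3 + ... + (2n-1) = n^2 for n < 1000")
```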
Note by Nicole Ling
5 years, 4 months ago
This discussion board is a place to discuss our Daily Challenges and the math and science related to those challenges. Explanations are more than just a solution — they should explain the steps and thinking strategies that you used to obtain the solution. Comments should further the discussion of math and science.
When posting on Brilliant: | {
"domain": "brilliant.org",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9626731147976795,
"lm_q1q2_score": 0.8570623059397442,
"lm_q2_score": 0.8902942159342104,
"openwebmath_perplexity": 3636.1558847293654,
"openwebmath_score": 0.9713510274887085,
"tags": null,
"url": "https://brilliant.org/discussions/thread/mathematical-induction-part-2/"
} |
| {
"domain": "brilliant.org",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9626731147976795,
"lm_q1q2_score": 0.8570623059397442,
"lm_q2_score": 0.8902942159342104,
"openwebmath_perplexity": 3636.1558847293654,
"openwebmath_score": 0.9713510274887085,
"tags": null,
"url": "https://brilliant.org/discussions/thread/mathematical-induction-part-2/"
} |
## Linear Algebra and Its Applications, Exercise 2.2.12
Exercise 2.2.12. What is a 2 by 3 system of equations $Ax = b$ that has the following general solution?
$x = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + w \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$
Answer: The general solution above is the sum of a particular solution and a homogeneous solution, where
$x_{particular} = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$
and
$x_{homogeneous} = w \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$
Since $w$ is the only variable referenced in the homogeneous solution it must be the only free variable, with $u$ and $v$ being basic. Since $u$ is basic we must have a pivot in column 1, and since $v$ is basic we must have a second pivot in column 2. After performing elimination on $A$ the resulting echelon matrix $U$ must therefore have the form
$U = \begin{bmatrix} *&*&* \\ 0&*&* \end{bmatrix}$
To simplify solving the problem we can assume that $A$ also has this form; in other words, we assume that $A$ is already in echelon form and thus we don’t need to carry out elimination. The matrix $A$ then has the form
$A = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ 0&a_{22}&a_{23} \end{bmatrix}$
where $a_{11}$ and $a_{22}$ are nonzero (because they are pivots).
We then have
$Ax_{homogeneous} = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ 0&a_{22}&a_{23} \end{bmatrix} w \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} = 0$
If we assume that $w$ is 1 and express the right-hand side in matrix form this then becomes
$\begin{bmatrix} a_{11}&a_{12}&a_{13} \\ 0&a_{22}&a_{23} \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$
or (expressed as a system of equations)
$\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcl}a_{11}&+&2a_{12}&+&a_{13}&=&0 \\ &&2a_{22}&+&a_{23}&=&0 \end{array}$
The pivot $a_{11}$ must be nonzero, and we arbitrarily assume that $a_{11} = 1$. We can then satisfy the first equation by assigning $a_{12} = 0$ and $a_{13} = -1$. The pivot $a_{22}$ must also be nonzero, and we arbitrarily assume that $a_{22} = 1$ as well. We can then satisfy the second equation by assigning $a_{23} = -2$. Our proposed value of $A$ is then
$A = \begin{bmatrix} 1&0&-1 \\ 0&1&-2 \end{bmatrix}$
so that we have
$Ax_{homogeneous} = \begin{bmatrix} 1&0&-1 \\ 0&1&-2 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$
as required.
We next turn to the general system $Ax = b$. We now have a value for $A$, and we were given the value of the particular solution. We can multiply the two to calculate the value of $b$:
$b = Ax_{particular} = \begin{bmatrix} 1&0&-1 \\ 0&1&-2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$
This gives us the following as an example 2 by 3 system that has the general solution specified above:
$\begin{bmatrix} 1&0&-1 \\ 0&1&-2 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$
or
$\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcl}u&&&-&w&=&1 \\ &&v&-&2w&=&1 \end{array}$
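As a quick numerical check (a NumPy sketch, not part of the original exercise), we can confirm that the particular solution reproduces $b$ and that the homogeneous direction lies in the nullspace of $A$, so the whole one-parameter family solves the system:

```python
import numpy as np

# The 2 by 3 system derived above: A x = b
A = np.array([[1, 0, -1],
              [0, 1, -2]], dtype=float)
b = np.array([1, 1], dtype=float)

x_particular = np.array([1, 1, 0], dtype=float)
x_homogeneous = np.array([1, 2, 1], dtype=float)

# The particular solution reproduces the right-hand side ...
assert np.allclose(A @ x_particular, b)
# ... and the homogeneous direction is mapped to zero,
# so x_particular + w * x_homogeneous solves A x = b for every w.
assert np.allclose(A @ x_homogeneous, [0.0, 0.0])
for w in (-2.0, 0.0, 3.5):
    x = x_particular + w * x_homogeneous
    assert np.allclose(A @ x, b)
```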
Finally, note that the solution provided for exercise 2.2.12 at the end of the book is incorrect. The right-hand side must be a 2 by 1 matrix and not a 3 by 1 matrix, so the final value of 0 in the right-hand side should not be present.
NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.
If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.
### 2 Responses to Linear Algebra and Its Applications, Exercise 2.2.12
1. Daniel says:
I think that your final note is incorrect, due to the fact that if you find the general solution for the system Ax=b that you found, you’ll have to write the solution like Strang does it in (3) on page 76. There are three entries in the solution because of the length of the vector x. The general solution (in MATLAB notation) is x = [u; v; w] = [1+w; 1+2w; w] = [1; 1; 0] + w*[1; 2; 1], which is the general solution he proposed at the beginning of the exercise.
• hecker says:
My apologies for the delay in responding. Are you referring to my final sentence about the solution to exercise 2.2.12 given on page 476 in the back of the book? If so, I think I may have confused you. I am *not* saying that Strang wrote the general solution incorrectly in the statement of the exercise on page 79, or that Strang found an incorrect solution to the exercise.
Rather my point is as follows: In the statement of the solution on page 476 Strang shows as a solution the same 2 by 3 matrix that I derived above, and Strang shows that 2 by 3 matrix multiplying the vector (u, v, w) just as I do above, representing a system of two equations in three unknowns. However on the right-hand side Strang shows that 2 by 3 matrix multiplying the vector (u, v, w) to produce the vector (1, 1, 0). This cannot be: since the matrix has only two rows, that multiplication would produce a vector with only two elements, not three (as in the book). Those two elements represent the right-hand sides of the corresponding system of two equations.
So the left-hand side in the solution of 2.2.12 on page 476 is correct, but the right-hand side of the solution of 2.2.12 on page 476, namely the vector (1, 1, 0), is not. Instead the right-hand side should be the vector (1, 1) as I derived above.
# What is $\frac{d(\arctan(x))}{dx}$?
Let $v= \arctan{x}$. Now I want to find $\frac{dv}{dx}$. My method is this: Rearranging yields $\tan(v) = x$ and so $dx = \sec^2(v)dv$. How do I simplify from here? Of course I could do something like $dx = \sec^2(\arctan(x))dv$ so that $\frac{dv}{dx} = \cos^2(\arctan(x))$ but I am sure a better expression exists. I am probably just missing some crucial step where we convert one of the trigonometric expressions into an expression involving $x$. Thanks in advance for any help or tips!
## 4 Answers
The derivative of $\tan v$ can also be written as $1+\tan^2 v$, which is easier to work with here, since $v=\arctan x$ means $\tan v = x$.
You may check:
$$\sec^2 v = \frac{1}{\cos^2 v} = \frac{\cos^2 v + \sin^2 v}{\cos^2 v} = 1+\tan^2 v$$
Then
$$\mathrm{d}x = (1+\tan^2 v) \ \mathrm{d}v = (1+x^2) \ \mathrm{d}v$$
And
$$\frac{\mathrm{d}v}{\mathrm{d}x}=\frac{1}{1+x^2}$$
Perfect thanks a lot, this is very clear. I see now that using $\cos^2(x) = \frac{1}{\tan(x)^2+1}$ will also change the expression $\cos^2(\arctan(x))$ into $\frac{1}{1+x^2}$. Thanks for the answer! – Slugger Nov 28 '13 at 15:24
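As a numerical sanity check on this result (my own addition, not from the thread), a central finite difference of $\arctan$ should match $\frac{1}{1+x^2}$ at any sample point:

```python
import math

def arctan_derivative(x):
    # closed form derived above: d/dx arctan(x) = 1 / (1 + x^2)
    return 1.0 / (1.0 + x * x)

h = 1e-6
for x in (-3.0, -0.5, 0.0, 1.0, 10.0):
    # central difference approximation of the derivative at x
    numeric = (math.atan(x + h) - math.atan(x - h)) / (2 * h)
    assert abs(numeric - arctan_derivative(x)) < 1e-6
```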
Another way :
$$\frac{d\arctan x}{dx}=\lim_{h\to0}\frac{\arctan(x+h)-\arctan x}h$$
$$\displaystyle=\lim_{h\to0}\frac{\arctan\frac{x+h-x}{1+(x+h)x}}h$$
$$\displaystyle=\lim_{h\to0}\left(\frac{\arctan\frac h{1+(x+h)x}}{\frac h{1+(x+h)x}}\right)\cdot\frac1{\lim_{h\to0}\{1+(x+h)x\}}=1\cdot\frac1{1+x^2}$$
as $\displaystyle\lim_{u\to0}\frac{\arctan u}u=\lim_{v\to0}\frac v{\tan v}=\lim_{v\to0}\cos v\cdot\frac1{\lim_{v\to0}\frac{\sin v}v}=1\cdot1$
Definition of the derivative! +1. – Ahaan S. Rungta Nov 28 '13 at 15:30
Thank you for your answer! It is nice to see how the answer tackle my question in a multitude of ways! – Slugger Nov 28 '13 at 15:34
@Slugger, my pleasure. Please have a look into math.stackexchange.com/questions/579170/… – lab bhattacharjee Nov 28 '13 at 15:46
You can also use the Inverse Derivative Formula, which states that if $f(x)$ and $g(x)$ are inverse functions, we have $$g'(x) = \dfrac {1}{f'(g(x))}.$$So, if $g(x)=\arctan x$, our task is to find $g'(x)$. In that case, we have $f(x)=\tan x$, which gives us $f'(x)=\sec^2 x$, so we can substitute: \begin{align*} g'(x) &= \dfrac {1}{f'(g(x))} \\&= \dfrac {1}{\sec^2 (g(x))} \\&= \dfrac {1}{\sec^2 (\arctan x)}. \end{align*}We can find $\sec (\arctan x)$ geometrically. Consider a right triangle with legs of length $x$ and $1$ and hypotenuse $\sqrt{1+x^2}$. Let $\theta$ be the angle opposite the leg of length $x$, so that $\theta = \arctan x$. Then, $$\sec \left( \arctan x \right) = \sec (\theta) = \sqrt {1+x^2},$$ so our answer is $$\dfrac {1}{\left( \sqrt{1+x^2} \right)^2} = \boxed {\dfrac {1}{1+x^2}}.$$
Thank you for your answer! – Slugger Nov 28 '13 at 15:35
You're welcome! :) – Ahaan S. Rungta Nov 28 '13 at 15:36
$$v=\arctan(x)\Rightarrow x=\tan v\Rightarrow x'=\frac{1}{(\tan v)'}=\frac{1}{(\frac{\sin v}{\cos v})'}=\frac{1}{\frac{1}{\cos^2 v}}=\cos^2 v=\frac{\cos^2 v}{1}$$ $$=\frac{\cos^2 v}{\cos^2 v+\sin^2 v}=\frac{\frac{\cos^2 v}{\cos^2 v}}{\frac{\cos^2 v+\sin^2 v}{\cos^2 v}}=\frac{1}{1+\tan^2 v}=\frac{1}{1+x^2}$$ i.e $$v'=(\arctan(x))'=\frac{1}{1+x^2}$$
I am not sure I follow exactly. Your second equation says $x=\tan v$ and then you follow by saying $x' =\frac{1}{(\tan v)'}$... Maybe I am missing something – Slugger Nov 28 '13 at 15:27
Sir, my solution is correct, You can also use the Inverse Derivative Formula, – Madrit Zhaku Nov 28 '13 at 15:31
Oh, it seems this is essentially what I did, but I posted a few minutes later. I didn't see your post when I posted, so it's not that I copied. Should I delete my post? – Ahaan S. Rungta Nov 28 '13 at 15:32
@MadritZhaku When somebody asks you to explain what you did, "it is correct" is not as helpful as actually explaining what you did. If you don't have the patience to elaborate, don't respond at all. Your comment came off quite rude. – Ahaan S. Rungta Nov 28 '13 at 15:33
all is well that ends well :) – Slugger Nov 28 '13 at 15:37
# Eigenvalues and eigenvectors of a 3x3 matrix

Eigenvalue problems occur in many areas of science and engineering, such as structural analysis, and eigenvalues are also important in analyzing numerical methods; the theory and the algorithms apply to complex matrices as well as real ones. It might seem strange to begin a discussion of matrices by considering mechanics, but underlying much of matrix notation, matrix algebra and terminology is the need to describe the physical world in terms of straight lines. In statistics, for example, the eigenvalues of a covariance matrix inform us about the directions (the principal components) along which the data has the maximum spread.

Two basic facts to start with:

- The eigenvalues of a triangular matrix are its diagonal entries.
- If $A$ is invertible with eigenvalue $\lambda$ and eigenvector $x$, then $A^{-1}$ has eigenvalue $1/\lambda$ with the same eigenvector: from $Ax = \lambda x$ we get $x = \lambda A^{-1}x$, hence $A^{-1}x = \frac{1}{\lambda}x$. (Note that $\lambda \neq 0$, since $A$ invertible implies $\det(A) \neq 0$.)

To find the eigenvalues of a matrix, consider an $n \times n$ matrix $A$ and a scalar $\lambda$.
Consider multiplying a square 3x3 matrix by a 3x1 column vector; the result is again a 3x1 column vector. Given a matrix $A$, recall that an eigenvalue of $A$ is a number $\lambda$ such that $Av = \lambda v$ for some nonzero vector $v$; such a $v$ is called an eigenvector. That is, the eigenvectors are the vectors that the linear transformation $A$ merely elongates or shrinks, and the amount by which they elongate or shrink is the eigenvalue. The eigenvalues and eigenvectors of a matrix may be complex even when the matrix is real: for example, a plane rotation has eigenvalues $i$ and $-i$, and its eigenvectors live in $\mathbb{C}^2$.

Theorem (EDELI): eigenvectors with distinct eigenvalues are linearly independent. Consequently, an $n \times n$ matrix with $n$ distinct eigenvalues is diagonalizable; in that factorization the diagonal matrix $D$ contains the eigenvalues, and $D$ is uniquely determined by $A$ up to the order of its entries.

One can also run the problem backwards: given eigenvalues $\lambda_1, \lambda_2, \lambda_3$ and corresponding eigenvectors $u_1, u_2, u_3$ of an unknown 3x3 matrix, the three vector equations $Au_1 = \lambda_1 u_1$, $Au_2 = \lambda_2 u_2$, $Au_3 = \lambda_3 u_3$ can be solved for the matrix by treating the coefficients $a_{11}, a_{12}, a_{13}, \dots$ as unknowns.
Let us rearrange the eigenvalue equation $Av = \lambda v$ to the form $(A - \lambda I)v = 0$, where $0$ represents the zero vector. Since $v$ must be nonzero, the coefficient matrix $A - \lambda I$ has to be singular (zero determinant): $\lambda$ is an eigenvalue of $A$ if and only if $\det(A - \lambda I) = 0$. The eigenvectors for $\lambda$ then satisfy $(A - \lambda I)x = 0$, so the eigenspace associated with the eigenvalue $\lambda$ is just the kernel of $A - \lambda I$. The simplest situation is the one in which all the eigenvalues are real and different, but eigenspaces can have dimension greater than one. For example, the $\lambda = 1$ eigenspace of the matrix

$\begin{bmatrix} 2&1&3&4 \\ 0&2&1&3 \\ 2&1&6&5 \\ 1&2&4&8 \end{bmatrix}$

is two-dimensional. To find the eigenvalues and eigenvectors in practice, first form the matrix $A - \lambda I$.
There will be an eigenvalue corresponding to each eigenvector of a matrix, but only square matrices are involved: a non-square matrix does not have eigenvalues. The eigenvalues of a selfadjoint (Hermitian) matrix are always real. For non-symmetric matrices the eigenvalue problem has two different formulations: finding right eigenvectors $x$ such that $Ax = \lambda x$, and finding left eigenvectors $y$ such that $y^H A = \lambda y^H$ (where $y^H$ denotes the conjugate transpose of $y$).

A typical application from image analysis: to calculate the eigenvalues and eigenvectors of the Hessian, you would first calculate the Hessian itself (a symmetric 3x3 matrix containing the second derivatives in each of the three directions) for each pixel. A simple computational approach is to raise the matrix to a high power; note, however, that this method only tells you how to find the eigenvalue of largest absolute value.
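The idea of raising the matrix to a high power is the power method, and as noted it only recovers the dominant eigenvalue. A minimal NumPy sketch, using a matrix of my own choosing (not from the source) whose dominant eigenvalue is exactly 7:

```python
import numpy as np

def power_iteration(A, iters=200):
    """Estimate the dominant eigenpair of A by repeated multiplication,
    i.e. the 'raise the matrix to a high power' idea."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)   # renormalize to avoid overflow
    lam = v @ A @ v              # Rayleigh quotient of the unit vector v
    return lam, v

# symmetric 3x3 example with eigenvalues 7, 3 and 1
A = np.array([[5.0, 2.0, 0.0],
              [2.0, 5.0, 0.0],
              [0.0, 0.0, 1.0]])
lam, v = power_iteration(A)
assert abs(lam - 7.0) < 1e-8     # dominant eigenvalue found
assert np.allclose(A @ v, lam * v)
```

The method converges at a rate governed by the ratio of the second-largest to the largest eigenvalue in absolute value, here $(3/7)^k$.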
In linear algebra, an eigenvector does not change its direction under the associated linear transformation. When an $n \times n$ matrix $A$ has repeated eigenvalues it may not have $n$ linearly independent eigenvectors; this matters, for instance, when building the fundamental matrix $e^{xA}$ for a system of ODEs. Repeated eigenvalues occur even for triangular matrices: a triangular matrix with diagonal entries 4, 1, 4 has eigenvalues 4, 1 and 4 (4 is a double root), exactly the diagonal elements.

In principal component analysis one obtains the eigenvectors and eigenvalues from the covariance matrix or correlation matrix, or performs a singular value decomposition. In MATLAB, [V,D,W] = eig(A,B) also returns a full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B.
When a matrix $A$ acts as a scalar multiplier on a vector $X$, that vector is called an eigenvector of $A$. Geometrically, if you can draw a line through the three points $(0, 0)$, $v$ and $Av$, then $Av$ is just $v$ multiplied by a number $\lambda$; that is, $Av = \lambda v$. The zero vector is never an eigenvector, by definition. The eigenvalue with the largest absolute value is called the dominant eigenvalue.

Spectral theorem: for a normal matrix $M \in L(V)$, there exists an orthonormal basis of $V$ consisting of eigenvectors of $M$.

The sum of the eigenvalues equals the sum of the diagonal entries (the trace). For example, $A = \begin{bmatrix} 5&2 \\ 2&5 \end{bmatrix}$ has eigenvalues $\lambda_1 = 7$ and $\lambda_2 = 3$, and their sum $7 + 3 = 10$ equals the sum of the diagonal entries $5 + 5 = 10$. Symmetric matrices whose eigenvalues are all positive form a special class: the positive definite matrices.
We recall that a scalar $\lambda \in F$ is said to be an eigenvalue (characteristic value, or latent root) of $A$ if there exists a nonzero vector $x$ such that $Ax = \lambda x$; such an $x$ is called an eigenvector (characteristic vector, or latent vector) of $A$ corresponding to the eigenvalue $\lambda$, and the pair $(\lambda, x)$ is called an eigenpair. The equation $\det(A - \lambda I) = 0$ is the characteristic equation of $A$, and its left-hand side is the characteristic polynomial of $A$: a polynomial whose roots are exactly the eigenvalues of $A$. In such problems we therefore first find the eigenvalues of the matrix. For instance, if the determinant of $B - \lambda I$, computed by a Laplace expansion along the second row, has roots $\lambda = -1$ and $3$ with $3$ a double root, then these are the eigenvalues of $B$; similarly, a characteristic polynomial $p(t) = -(t-2)(t-1)(t+1)$ gives eigenvalues $2$, $1$ and $-1$.

Two further facts: similar matrices always have the same eigenvalues, although their eigenvectors can differ (the matrix representing a linear transformation $T$ is basis dependent, but the eigenvalues and eigenvectors of $T$ are not); and for a symmetric matrix, eigenvectors corresponding to distinct eigenvalues are orthogonal. Tied (repeated) eigenvalues make the problem of reliably returning the same eigenvectors even more interesting.
When we compute the eigenvalues and the eigenvectors of a matrix $T$, we can deduce the eigenvalues and eigenvectors of a great many other matrices that are derived from $T$: every eigenvector of $T$ is also an eigenvector of the matrices $T^2$, $T^3$, and, more generally, of any polynomial in $T$.

The eigenvalue equation $Av = \lambda v$ is equation (1); any value of $\lambda$ for which it has a nonzero solution is known as an eigenvalue of the matrix $A$. As an exercise, find the eigenvalues of the matrix $A = \begin{pmatrix} 8&0&0 \\ 6&6&11 \\ 1&0&1 \end{pmatrix}$ (expanding $\det(A - \lambda I)$ along the first row gives $(8-\lambda)(6-\lambda)(1-\lambda)$, so the eigenvalues are $8$, $6$ and $1$). The story begins in finding the eigenvalue(s) and eigenvector(s) of $A$; for a 3x3 matrix this procedure leads to a homogeneous 3x3 system.

Eigenvalues are simply the coefficients attached to eigenvectors, which give the magnitude of the axes, and decomposing a matrix in terms of its eigenvalues and eigenvectors gives valuable insights into its properties. There is also a generalized eigenvalue problem: determine the solutions of $Av = \lambda Bv$, where $A$ and $B$ are n-by-n matrices, $v$ is a column vector of length $n$, and $\lambda$ is a scalar.

These ideas underlie principal component analysis: compute the eigenvectors and corresponding eigenvalues of the covariance matrix, sort the eigenvectors by decreasing eigenvalue, choose the eigenvectors with the largest eigenvalues to form a projection matrix (where every column represents an eigenvector), and use this eigenvector matrix to transform the samples onto the new subspace.
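The PCA recipe just described (eigendecompose the covariance matrix, sort eigenvectors by decreasing eigenvalue, project the samples) can be sketched in NumPy; the data here is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # 200 samples, 3 features
X = X - X.mean(axis=0)                  # center the data

cov = np.cov(X, rowvar=False)           # 3x3 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: covariance is symmetric

# sort eigenvectors by decreasing eigenvalue and keep the top k
order = np.argsort(eigvals)[::-1]
k = 2
W = eigvecs[:, order[:k]]               # 3x2 projection matrix

Y = X @ W                               # samples projected onto the subspace
assert Y.shape == (200, 2)
```

Using `eigh` rather than `eig` exploits the symmetry of the covariance matrix and guarantees a real spectrum.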
This procedure will lead you to a homogeneous 3x3 system. But for the eigenvectors it is, since the denominator is going to be (nearly) zero. In general, you can skip the multiplication sign, so 5x is equivalent to 5*x. To find the eigenvalues, subtract lambda along the main diagonal, take the determinant, and then solve for lambda. Learn to recognize a rotation-scaling matrix, and compute by how much the matrix rotates and scales. eig computes eigenvalues and eigenvectors of a square matrix. The eigenvalues and eigenvectors of a matrix have the following important property: if a square $n\times n$ matrix A has n linearly independent eigenvectors, then it is diagonalisable; that is, it can be factorised as $A = PDP^{-1}$, where D is the diagonal matrix containing the eigenvalues of A along the diagonal, also written as $D = \operatorname{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n]$. The eigenvectors are $[i\;\ 1]^T t$, for any nonzero scalar t. In the present case we are dealing with a 3x3 matrix and a 3-entry column vector. The eigenvalues and eigenvectors of a matrix are scalars $\lambda$ and vectors v such that $Av = \lambda v$. The zero vector 0 is never an eigenvector, by definition. They are used in a variety of data science techniques, such as Principal Component Analysis for dimensionality reduction of features. The $\lambda = 2$ eigenspace for the matrix $\begin{bmatrix} 3 & 4 & 2 \\ 1 & 6 & 2 \\ 1 & 4 & 4 \end{bmatrix}$ is two-dimensional. The eigenvectors are a linear combination of atomic movements, which indicate global movement of the proteins (the essential deformation modes), while the associated eigenvalues indicate the expected displacement along each eigenvector in frequencies (or distance units if the Hessian is not mass-weighted), that is, the impact of each deformation movement. | {
"domain": "svc2006.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918502576513,
"lm_q1q2_score": 0.8570577824463963,
"lm_q2_score": 0.8670357598021707,
"openwebmath_perplexity": 349.6453176432852,
"openwebmath_score": 0.8708940148353577,
"tags": null,
"url": "http://svc2006.it/pmey/eigenvalues-and-eigenvectors-of-3x3-matrix.html"
} |
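Since the text leans on `eig` and the factorisation $A = PDP^{-1}$ in several places, a small NumPy check makes the diagonalisation property concrete. The matrix below is an illustrative choice of mine, not one from the text:

```python
import numpy as np

# An illustrative symmetric 3x3 matrix (distinct eigenvalues, so it is diagonalisable).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eig returns the eigenvalues and a matrix P whose columns are eigenvectors.
eigenvalues, P = np.linalg.eig(A)

# D is the diagonal matrix of eigenvalues; A should factor as P D P^{-1}.
D = np.diag(eigenvalues)
A_reconstructed = P @ D @ np.linalg.inv(P)

assert np.allclose(A, A_reconstructed)
```

The same check fails for defective matrices (fewer than n independent eigenvectors), which is exactly the caveat the text's "full set of linearly independent eigenvectors" condition is guarding against.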
Moreover, if A is an m × n matrix and B is an n × m matrix, it is not hard to show that tr(AB) = tr(BA). This is called the eigendecomposition. If the resulting V has the same size as A, the matrix A has a full set of linearly independent eigenvectors that satisfy A*V = V*D. Eigenvalues are simply the coefficients attached to eigenvectors, which give the axes their magnitude. I know that I need to work backwards on this problem, so I set up the characteristic equation. In MATLAB, eigenvalues and eigenvectors of matrices can be calculated by the command eig. In the last video we set out to find the eigenvalues of this 3 by 3 matrix, A. There is a technique for computing the eigenvalues and eigenvectors of a matrix that converges superlinearly with exponent 2+. If X is a unit vector, λ is the length of the vector produced by AX. As noted above, the eigenvalues of a matrix are uniquely determined, but for each eigenvalue there are many eigenvectors. (1) where $F_0$ is the free energy at the stationary point and x is a column matrix whose entries are $x_i$ (i = 1, 2, …, n). Learn that the eigenvalues of a triangular matrix are the diagonal entries. Write the system in matrix form as $\mathbf{x}' = A\mathbf{x}$; a nonhomogeneous system would look like $\mathbf{x}' = A\mathbf{x} + \mathbf{b}$. The result is a 3x1 (column) vector. This is very easy to see. Find A using the Cayley-Hamilton theorem. Example 4: Suppose A is this 3x3 matrix: $\begin{bmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & -1 & 2 \end{bmatrix}$. The eigenspaces corresponding to these matrices are orthogonal to each other, though the eigenvalues can still be complex. Theorem: If A is an $n\times n$ matrix and $\lambda$ is an eigenvalue of A, then the set of all eigenvectors of $\lambda$, together with the zero vector, forms a subspace of $\mathbb{R}^n$. These vectors are eigenvectors of A, and these numbers are eigenvalues of A. $p(t) = -(t-2)(t-1)(t+1)$. For example, suppose we wish to solve such a system. The eigenvalues are 4; 1; 4 (4 is a double root), exactly the diagonal elements.
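Example 4's matrix can be checked by hand: expanding det(A − λI) along the first column gives (1 − λ)(2 − λ)², so the eigenvalues are 1 and 2, with 2 a double root. A quick NumPy confirmation:

```python
import numpy as np

# Example 4's matrix from the text.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, -1.0, 2.0]])

# det(A - lam*I) expands along the first column to (1 - lam)(2 - lam)^2,
# so the eigenvalues are 1 and 2 (the latter with algebraic multiplicity 2).
eigenvalues = np.sort(np.linalg.eigvals(A).real)

assert np.allclose(eigenvalues, [1.0, 2.0, 2.0])
```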
Let A be the matrix given by $A = \begin{bmatrix} -2 & 0 & 1 \\ -5 & 3 & a \\ 4 & -2 & -1 \end{bmatrix}$ for some variable a. Suppose A is a square matrix and $S = \{x_1, x_2, x_3, \ldots, x_p\}$ is a set of eigenvectors with eigenvalues $\lambda_1, \lambda_2, \lambda_3, \ldots, \lambda_p$. Solve the linear system $(A - I_3)v = 0$ by finding the reduced row echelon form of $A - I_3$. Since $Wx = \lambda x$, then $(W - \lambda I)x = 0$. The characteristic polynomial (CP) of an $n\times n$ matrix A is a polynomial whose roots are the eigenvalues of the matrix A. We can't expect to be able to eyeball eigenvalues and eigenvectors every time. We sometimes have to diagonalize a matrix to get the eigenvectors and eigenvalues, for example in the diagonalization of a Hessian matrix (with translation and rotation projected out). Eigenvalues and eigenvectors are instrumental to understanding electrical circuits, mechanical systems, ecology, and even Google's PageRank algorithm. Call your matrix A. It also includes an analysis of a 2-state Markov chain and a discussion of the Jordan form. The first case is a simple one: all eigenvalues are real and different. A scalar λ is said to be an eigenvalue of A if $Ax = \lambda x$ for some vector $x \neq 0$. We need to get to the bottom of what the matrix A is doing. For a unique set of eigenvalues, the determinant of the matrix $(W - \lambda I)$ must be equal to zero. Eigenvalues[m, spec] is always equivalent to Take[Eigenvalues[m], spec]. (3,2,4) and (0,-1,1) are eigenvectors. If x is a vector that is not zero, then it is an eigenvector of a square matrix A if Ax is a scalar multiple of x. In machine learning, eigenvectors and eigenvalues come up quite a bit.
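The recipe quoted above, solving $(A - \lambda I)v = 0$ by row reduction, can be followed step by step in SymPy. The matrix here is an illustrative choice of mine (triangular, so its eigenvalues 2, 1, 3 can be read off the diagonal), not one from the text:

```python
import sympy as sp

A = sp.Matrix([[2, 0, 0],
               [1, 1, 0],
               [0, 0, 3]])  # triangular: eigenvalues 2, 1, 3 are the diagonal entries

lam = 1
M = A - lam * sp.eye(3)

# Row-reduce M = A - I to read off the solutions of (A - I) v = 0.
rref_matrix, pivots = M.rref()

# nullspace() packages the same step: the free column yields the eigenvector.
v = M.nullspace()[0]

assert A * v == lam * v
```

`rref()` shows which columns are pivot columns; the remaining free column corresponds to the one-parameter family of eigenvectors for λ = 1.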
Example 5: Suppose A is this 3x3 matrix: $\begin{bmatrix} 0 & 0 & 2 \\ -3 & 1 & 6 \\ 0 & 0 & 1 \end{bmatrix}$. Find the eigenvalues and bases for each eigenspace. These straight lines may be the optimum axes for describing rotation of a body. The determinant will be computed by performing a Laplace expansion along the second row: the roots of the characteristic equation are clearly λ = −1 and 3, with 3 being a double root; these are the eigenvalues of B. Right when you reach $0$, the eigenvalues and eigenvectors become real (although there is only one eigenvector at this point). This matrix calculator computes the determinant, inverse, rank, characteristic polynomial, eigenvalues and eigenvectors. Repeated eigenvalues: we conclude our consideration of the linear homogeneous system with constant coefficients $\mathbf{x}' = A\mathbf{x}$ (1) with a brief discussion of the case in which the matrix has a repeated eigenvalue. Find all values of a which will guarantee that A has eigenvalues 0, 3, and −3. I guess A is 3x3, so it has 9 coefficients. Let A be an n × n matrix. λ is an eigenvalue (a scalar) of the matrix [A] if there is a non-zero vector (v) such that [A](v) = λ(v); every vector (v) satisfying this equation is called an eigenvector of [A] belonging to the eigenvalue λ. eig returns a tuple (eigvals, eigvecs), where eigvals is a 1D NumPy array of complex numbers giving the eigenvalues.
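For Example 5, expanding det(A − λI) along the bottom row gives −λ(1 − λ)², so the eigenvalues are 0 (simple) and 1 (a double root); since A − I has rank 1, the double eigenvalue here really does get a two-dimensional eigenspace. SymPy's `eigenvects` returns each eigenvalue with its algebraic multiplicity and a basis for its eigenspace:

```python
import sympy as sp

# Example 5's matrix from the text.
A = sp.Matrix([[0, 0, 2],
               [-3, 1, 6],
               [0, 0, 1]])

# Each entry of the result is (eigenvalue, algebraic multiplicity, basis vectors).
result = A.eigenvects()

# Map each eigenvalue to (algebraic multiplicity, eigenspace dimension).
eigen_data = {val: (mult, len(vecs)) for val, mult, vecs in result}

assert eigen_data[0] == (1, 1)  # simple eigenvalue 0
assert eigen_data[1] == (2, 2)  # double eigenvalue 1 with a 2-dimensional eigenspace
```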
The eigenvalue-eigenvector problem for A is the problem of finding numbers λ and vectors $v \in \mathbb{R}^3$ such that $Av = \lambda v$. If λ, v are solutions of an eigenvector-eigenvalue problem, then the vector v is called an eigenvector of A and λ is called an eigenvalue of A. If symmetric is TRUE, the matrix is assumed to be symmetric (or Hermitian if complex) and only its lower triangle (diagonal included) is used. Let's see if visualization can make these ideas more intuitive. Show that the eigenvalues of A are real. Also, the method only tells you how to find the largest eigenvalue. Find the eigenvalues of the matrix $A = \begin{pmatrix} 8 & 0 & 0 \\ 6 & 6 & 11 \\ 1 & 0 & 1 \end{pmatrix}$. Eigenvalues[m, -k] gives the k eigenvalues that are smallest in absolute value. The calculator will perform symbolic calculations whenever it is possible. The eigenvalues are $r_1 = r_2 = -1$ and $r_3 = 2$. Algorithm for finding eigenvectors given eigenvalues of a 3x3 matrix in C#. A simple way to do this is to apply three gradient filters. For the purpose of analyzing Hessians, the eigenvectors are not important, but the eigenvalues are. The eigenvector which corresponds to the maximum eigenvalue of the covariance matrix, C, will be the first principal component. (1) The eigenvalues of a triangular matrix are its diagonal elements. We also review eigenvalues and eigenvectors. The 3x3 matrix can be thought of as an operator: it takes a vector, operates on it, and returns a new vector. Note that the multiplication on the left hand side is matrix multiplication (complicated) while the multiplication on the right is scalar multiplication. To be able to show that a given matrix is orthogonal. We were transforming a vector of points v into another set of points $v_R$ by multiplying by a matrix.
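The remark that "the method only tells you how to find the largest eigenvalue" refers to power iteration: repeatedly applying A to a vector and normalising converges, for generic starting vectors, to the eigenvector of the dominant eigenvalue. A minimal sketch, with an illustrative matrix of my own:

```python
import numpy as np

def power_iteration(A, num_iters=500):
    """Estimate the dominant eigenvalue/eigenvector of A by power iteration."""
    v = np.ones(A.shape[0])           # generic (nonzero) starting vector
    for _ in range(num_iters):
        v = A @ v
        v = v / np.linalg.norm(v)     # normalise to avoid overflow
    # The Rayleigh quotient gives the eigenvalue estimate for the converged vector.
    lam = (v @ A @ v) / (v @ v)
    return lam, v

A = np.array([[5.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
lam, v = power_iteration(A)

assert np.allclose(A @ v, lam * v, atol=1e-6)
```

Finding the other eigenvalues requires variants such as deflation or shift-and-invert; the "Rayleigh quotient iteration" mentioned in the text is the shift-and-invert refinement with superlinear convergence.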
Linear Algebra: Introduction to Eigenvalues and Eigenvectors. Eigenvectors and eigenspaces for a 3x3 matrix. This Hadamard matrix has 8 eigenvalues equal to 4 and 8 equal to -4. For a Hermitian matrix the eigenvalues should be real. A simple online EigenSpace calculator finds the space generated by the eigenvectors of a square matrix. Computing the eigenvectors of a 3x3 symmetric matrix in C++: every once in a while Google makes me wonder how people ever managed to do research 15 years ago. But B can have at most n linearly independent eigenvectors, so the eigenvalues obtained in this way must be all of B's eigenvalues. Try to find the eigenvalues and eigenvectors of the following matrix. But the problem is I can't write (1,0,0) as a combination of those eigenvectors. Eigenvalues and eigenvectors, imaginary and real. In the last video, we started with the 2 by 2 matrix $A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}$. A matrix is diagonalizable if it has a full set of eigenvectors. It will be tedious for hand computation. There could be multiple eigenvalues and eigenvectors for a symmetric square matrix. Namely, prove that (1) the determinant of A is the product of its eigenvalues, and (2) the trace of A is the sum of the eigenvalues. It will then compute the eigenvalues (real and complex) and eigenvectors (real and complex) for that matrix. You have 3x3 = 9 linear equations for nine unknowns. A numeric or complex matrix whose spectral decomposition is to be computed. They have many uses! A simple example is that an eigenvector does not change direction in a transformation. For background on these concepts, see Section 7.
The eigenvalues correspond to rows in the eigenvector matrix. A real number λ is said to be an eigenvalue of a matrix A if there exists a non-zero column vector v such that $Av = \lambda v$. Most of the methods on this website actually describe the programming of matrices. Just write down two generic diagonal matrices and you will see that they must commute. And that says: any value λ that satisfies this equation for a non-zero vector v is an eigenvalue. For now, I have the original matrix in a 2D array, I have the eigenvalues in variables, and I have a second matrix that holds the result of λI − A (the eigenvalue times the identity matrix minus the original matrix). So my system for now is, for example: $\begin{bmatrix} -1 & 0 & -1 \\ -2 & 0 & -2 \\ -1 & 0 & -1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$. Matrix D is the canonical form of A: a diagonal matrix with A's eigenvalues on the main diagonal. [V,D,W] = eig(A,B) also returns the full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B. Eigenvectors of repeated eigenvalues. Specify the eigenvalues: the eigenvalues of matrix $\mathbf{A}$ are thus $\lambda = 6$, $\lambda = 3$, and $\lambda = 7$. The normalized eigenvector for $\lambda = 5$ is given, and the three eigenvalues and eigenvectors can then be recombined to give the solution for the original 3x3 matrix. The above equation is called the eigenvalue equation. That example demonstrates a very important concept in engineering and science: eigenvalues and eigenvectors. Find all values of a which will guarantee that A has eigenvalues 0, 3, and −3.
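The "find all values of a" exercise (for the matrix with the parameter a given earlier) can be settled symbolically: the trace is 0 for every a and already matches 0 + 3 + (−3), so the binding constraint is the determinant, which must equal the product 0 · 3 · (−3) = 0. A SymPy sketch; the conclusion a = 1 is my own computation, not stated in the text:

```python
import sympy as sp

a = sp.symbols('a')
A = sp.Matrix([[-2, 0, 1],
               [-5, 3, a],
               [4, -2, -1]])

# The product of the desired eigenvalues 0, 3, -3 is 0, so det(A) must vanish.
# det(A) works out to 4 - 4*a, forcing a = 1.
candidates = sp.solve(A.det(), a)   # -> [1]

# Verify that a = 1 really gives the full spectrum 0, 3, -3 (not just det = 0).
eigs = A.subs(a, 1).eigenvals()     # maps each eigenvalue to its multiplicity
```

The verification step matters: det(A) = 0 alone only fixes the product of the eigenvalues, but here the trace and the second invariant cooperate, so a = 1 yields exactly the characteristic polynomial λ³ − 9λ = λ(λ − 3)(λ + 3).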
Any value of λ for which this equation has a solution is known as an eigenvalue of the matrix A. Definition 1 (Eigenvalue, eigenvector): Let A be a complex square matrix. Example: solving for the eigenvalues of a 2x2 matrix. (3,2,4) and (0,-1,1) are eigenvectors. Below, change the columns of A and drag v to be an eigenvector. Introduction. An eigenvector associated with $\lambda_1$ is a nontrivial solution $\vec{v}_1$ to $(A - \lambda_1 I)\vec{v} = \vec{0}$. Finding eigenvectors of a 3x3 matrix. *XP: the eigenvalues up to a 4x4 matrix can be calculated. Eigenvalues and eigenvectors in Maple: Maple has commands for calculating eigenvalues and eigenvectors of matrices. Its roots are $\lambda_1 = 1 + 3i$ and $\lambda_2 = \overline{\lambda_1} = 1 - 3i$; the eigenvector corresponding to $\lambda_1$ is $(-1+i,\ 1)$. Substitute one eigenvalue λ into the equation $Ax = \lambda x$, or, equivalently, into $(A - \lambda I)x = 0$, and solve for x; the resulting nonzero solutions form the set of eigenvectors of A corresponding to the selected eigenvalue. The generalized eigenvalue problem is to determine the solution to the equation Av = λBv, where A and B are n-by-n matrices, v is a column vector of length n, and λ is a scalar.
If W is a matrix such that W'*A = D*W', the columns of W are the left eigenvectors of A. Your matrix is Hermitian; look up "Rayleigh quotient iteration" to find its eigenvalues and eigenvectors. Solution: $Ax = \lambda x \Rightarrow x = \lambda A^{-1}x \Rightarrow A^{-1}x = \tfrac{1}{\lambda}x$ (note that $\lambda \neq 0$, since A invertible implies $\det(A) \neq 0$). Recall as well that the eigenvectors for simple eigenvalues are linearly independent. Now we need to get the matrix into reduced echelon form. Eigenvectors of repeated eigenvalues. $A = 1\cdot\frac{u_1 u_1^T}{u_1^T u_1} - 2\cdot\frac{u_2 u_2^T}{u_2^T u_2} + 2\cdot\frac{u_3 u_3^T}{u_3^T u_3}$. Matrix.xla is an add-in for Excel that contains useful functions for matrices and linear algebra: norm, matrix multiplication, similarity transformation, determinant, inverse, power, trace, scalar product, vector product, eigenvalues and eigenvectors of a symmetric matrix with the Jacobi algorithm, and Jacobi's rotation matrix. This representation turns out to be enormously useful. Title: Eigenvalues and Eigenvectors of the Matrix of Permutation Counts. Authors: Pawan Aurora, Shashank K Mehta (submitted on 16 Sep 2013 (v1), last revised 20 Sep 2013 (this version, v2)). The first case is a simple one: all eigenvalues are real and different. You have 3 vector equations, $Au_1 = \lambda_1 u_1$, $Au_2 = \lambda_2 u_2$, $Au_3 = \lambda_3 u_3$; consider the matrix coefficients $a_{11}, a_{12}, a_{13}$, etc. as unknowns. Learn to find complex eigenvalues and eigenvectors of a matrix. Earlier on, I have also mentioned that it is possible to get the eigenvalues by solving the characteristic equation of the matrix. Since doing so results in a determinant of a matrix with a zero column, $\det A=0$. Let us rearrange the eigenvalue equation to the form $(A - \lambda I)v = 0$, where $0$ represents the vector of all zeroes (the zero vector). Eigenvalues and eigenvectors of a 3 by 3 matrix: just as 2 by 2 matrices can represent transformations of the plane, 3 by 3 matrices can represent transformations of 3D space. The eigenvectors corresponding to different eigenvalues need not be orthogonal. Once we have the eigenvalues, we can then go back and determine the eigenvectors for each eigenvalue. But I want to use the class Jama for the calculation of the eigenvalues and eigenvectors, and I do not know how to use it; could anyone give me a hand? Thanks. $\det(A - \lambda I) = 0$. By ranking your eigenvectors in order of their eigenvalues, highest to lowest, you get the principal components in order of significance. We all know that for any 3 × 3 matrix, the number of eigenvalues is 3 (counted with multiplicity). I'm having a problem finding the eigenvectors of a 3x3 matrix with given eigenvalues.
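The expansion above writes a symmetric matrix as a weighted sum of projections, $\lambda_i\,u_i u_i^T / (u_i^T u_i)$, onto its (mutually orthogonal) eigenvectors. That identity is easy to confirm numerically for any symmetric matrix; a sketch with an illustrative matrix of my own:

```python
import numpy as np

# An illustrative symmetric matrix, so the eigenvectors are orthogonal.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is the dedicated eigensolver for symmetric/Hermitian matrices.
eigenvalues, U = np.linalg.eigh(A)

# Rebuild A as the sum of lambda_i * u_i u_i^T / (u_i^T u_i).
A_rebuilt = sum(
    lam * np.outer(u, u) / (u @ u)
    for lam, u in zip(eigenvalues, U.T)
)

assert np.allclose(A, A_rebuilt)
```

This is the spectral decomposition named in the next passage; the division by $u_i^T u_i$ makes the formula valid even when the eigenvectors are not normalised.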
This expression for A is called the spectral decomposition of a symmetric matrix. Find a basis for this eigenspace. This can be factored. Two examples are given: first, the eigenvalues of a 4x4 matrix are calculated. True or false: a 3x3 matrix can have a nonreal complex eigenvalue with multiplicity 2. Eigenvalues and eigenvectors calculator: it decomposes a matrix using LU and Cholesky decomposition. Prove that the diagonal elements of a triangular matrix are its eigenvalues. Determining the eigenvalues of a 3x3 matrix. Equation (1) can be stated equivalently as $(A - \lambda I)v = 0$, (2) where I is the n-by-n identity matrix and 0 is the zero vector. This calculator allows you to find eigenvalues and eigenvectors using the characteristic polynomial. If you have trouble understanding the eigenvalues and eigenvectors of a 3×3 matrix, our example of the solution will help you get a better understanding of it.
Matrix A: $\begin{bmatrix} 0 & -6 & 10 \\ -2 & 12 & -20 \\ -1 & 6 & -10 \end{bmatrix}$. I got the eigenvalues 0, 1+i, and 1−i. We know that the row space of a matrix is orthogonal to its null space, so we can compute the eigenvector(s) of an eigenvalue by verifying the linear independence of the rows. Taking ".real()" to get rid of the imaginary part will give the wrong result (also, eigenvectors may have an arbitrary complex phase!). Complex eigenvalues and eigenvectors of a matrix. We have $A = \begin{bmatrix} 5 & 2 \\ 2 & 5 \end{bmatrix}$ with eigenvalues $\lambda_1 = 7$ and $\lambda_2 = 3$. The sum of the eigenvalues, $\lambda_1 + \lambda_2 = 7 + 3 = 10$, is equal to the sum of the diagonal entries of the matrix A, which is 5 + 5 = 10. The Mathematics Of It. Repeated eigenvalues: occasionally when we have repeated eigenvalues, we are still able to find the correct number of linearly independent eigenvectors. Judging from the name covmat, I'm assuming you are feeding a covariance matrix, which is symmetric (or Hermitian). My question is about eigenvalue/eigenvector reordering and/or renormalisation. Since the zero vector is a solution, the system is consistent. Theorem 11. Here we can confirm the eigenvalue/eigenvector pair $\lambda = -0.860$ by computing $Av/\lambda$ and confirming that it equals v. Once we have the eigenvalues, we can then go back and determine the eigenvectors for each eigenvalue.
(1) Then v is an eigenvector of the linear transformation A, and the scale factor λ is the eigenvalue corresponding to that eigenvector. I only found 2 eigenvectors because $\lambda_2 = \lambda_3$. Let's consider a simple example with a diagonal matrix: A = np. EigenValues is a special set of scalar values associated with a linear system of matrix equations. Follow the next steps for calculating the eigenvalues (see the figures): 1. make a 4x4 matrix [A] and fill the rows and columns with the numbers. First, form the matrix. In each case determine which vectors are eigenvectors and identify the associated eigenvalues. Lambda represents a scalar value. Using MATLAB to find eigenvalues, eigenvectors, and unknown coefficients of an initial value problem. Here A is a matrix, v is an eigenvector, and lambda is its corresponding eigenvalue. Note: the two unknowns can also be solved for using only matrix manipulations, by starting with the initial conditions and re-writing; now it is a simple task to find $\gamma_1$ and $\gamma_2$. Hi, I'm having trouble with finding the eigenvectors of a 3x3 matrix. In my earlier posts, I have already shown how to find out eigenvalues and the corresponding eigenvectors of a matrix.
By definition of the kernel, that is the set of all vectors mapped to zero. The eigenvalue is the factor by which the matrix expands. It's the eigenvectors that determine the dimensionality of a system. Eigenvalues and the inverse of a matrix: if we take the canonical definition of eigenvectors and eigenvalues for a matrix M, and further assume that M is invertible, then $M^{-1}$ exists. The eigenvalues are numbers, and they'll be the same for A and B. If D is a diagonal matrix with the eigenvalues on the diagonal, and P is a matrix with the eigenvectors as its columns, then $A = PDP^{-1}$. (1) The eigenvalues of a triangular matrix are its diagonal elements. Is there a fast algorithm for this specific problem? I've seen algorithms for calculating all the eigenvectors of a real symmetric matrix, but those routines seem to be optimized for large matrices, which I don't care about. Diagonal matrix. I'm trying to calculate the eigenvalues and eigenvectors of a 3x3 Hermitian matrix (named coh). The numerical computation of eigenvalues and eigenvectors is a challenging issue, and must be deferred until later. The roots are $\lambda_1 = 1$ and $\lambda_2 = 3$. The eigenvalues of a triangular matrix are the entries on the main diagonal. I am new to Mathematica, so I am not very familiar with the syntax and cannot find out what is wrong with my code. Then if λ is a complex number and X a non-zero complex column vector satisfying AX = λX, we call X an eigenvector of A, while λ is called an eigenvalue of A.
Learn to find complex eigenvalues and eigenvectors of a matrix. Multiply an eigenvector by A, and the result is the same vector scaled by the eigenvalue. Eigenvalues[m, -k] gives the k eigenvalues that are smallest in absolute value. Determining the eigenvalues of a 3x3 matrix; Linear Algebra: eigenvectors and eigenspaces for a 3x3 matrix. If a matrix is symmetric (e.g., the covariance matrix of a random vector), then all of its eigenvalues are real, and all of its eigenvectors are orthogonal. (a) Set $T: \mathbb{R}^2 \to \mathbb{R}^2$ to be the linear transformation represented by the matrix $\begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$. Let A be an n × n matrix. Eigenvalues and eigenvectors: consider multiplying a square 3x3 matrix by a 3x1 (column) vector. The non-symmetric problem of finding eigenvalues has two different formulations: finding vectors x such that $Ax = \lambda x$, and finding vectors y such that $y^H A = \lambda y^H$ (where $y^H$ denotes the complex conjugate transpose of y). A is orthogonally diagonalizable, i.e., $A = PDP^T$ with P orthogonal. We take an example matrix from the Schaum's Outline Series book Linear Algebra (4th Ed.) by Seymour Lipschutz and Marc Lipson.
(4 th Ed. The matrix is (I have a ; since I can't have a space between each column. There could be multiple eigenvalues and eigenvectors for a symmetric and square matrix. As the eigenvalues of are ,. The eigenvalue is the factor which the matrix is expanded. Form the matrix A − λI , that is, subtract λ from each diagonal element of A. To begin, let v be a vector (shown as a point) and A be a matrix with columns a1 and a2 (shown as arrows). Note that the multiplication on the left hand side is matrix multiplication (complicated) while the mul-. 1} it is straightforward to show that if $$\vert v\rangle$$ is an eigenvector of $$A\text{,}$$ then, any multiple $$N\vert v\rangle$$ of $$\vert v\rangle$$ is also an eigenvector since the (real or complex) number \(N. The eigenvalues and eigenvectors of a matrix may be complex, even when the matrix is real. We need to get to the bottom of what the matrix A is doing to. They have many uses! A simple example is that an eigenvector does not change direction in a transformation:. Hermitian Matrix giving non-real eigenvalues. , a matrix equation) that are sometimes also known as characteristic vectors, proper vectors, or latent vectors (Marcus and Minc 1988, p. Eigenvectors are a special set of vectors associated with a linear system of equations (i. Form the matrix A − λI , that is, subtract λ from each diagonal element of A. Assuming K = R would make the theory more complicated. For a square matrix A, an Eigenvector and Eigenvalue make this equation true (if we can find them):. The vector x is called an eigenvector corresponding to λ. To compute the Transpose of a 3x3 Matrix, CLICK HERE. [i 1]t, for any nonzero scalar t. Eigenvalues and Eigenvectors Consider multiplying a square 3x3 matrix by a 3x1 (column) vector. A simple online EigenSpace calculator to find the space generated by the eigen vectors of a square matrix. The calculator will perform symbolic calculations whenever it is possible. 
| {
"domain": "svc2006.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918502576513,
"lm_q1q2_score": 0.8570577824463963,
"lm_q2_score": 0.8670357598021707,
"openwebmath_perplexity": 349.6453176432852,
"openwebmath_score": 0.8708940148353577,
"tags": null,
"url": "http://svc2006.it/pmey/eigenvalues-and-eigenvectors-of-3x3-matrix.html"
} |
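The pages quoted above describe finding eigenvalues and eigenvectors of a 3x3 matrix numerically. A minimal NumPy sketch (the matrix below is an arbitrary illustration, not one taken from the quoted pages; its characteristic polynomial factors as $(2-\lambda)(\lambda-1)(\lambda-4)$, so the eigenvalues are 1, 2, 4):

```python
import numpy as np

# Arbitrary symmetric 3x3 example matrix (purely illustrative).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)

# Verify the defining relation A v = lambda v for every pair.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

# Any nonzero multiple of an eigenvector is again an eigenvector.
v0 = 5.0 * eigvecs[:, 0]
assert np.allclose(A @ v0, eigvals[0] * v0)

# The characteristic polynomial factors as (2 - t)(t - 1)(t - 4).
assert np.allclose(np.sort(eigvals.real), [1.0, 2.0, 4.0])
```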
Earlier on, I have also mentioned that it is possible to get the eigenvalues by solving the characteristic equation of the matrix. Since doing so results in a determinant of a matrix with a zero column, $\det A=0$. Let us rearrange the eigenvalue equation to the form $(A-\lambda I)v = \mathbf{0}$, where $\mathbf{0}$ represents a vector of all zeroes (the zero vector). Eigenvalues and Eigenvectors of a 3 by 3 matrix: just as 2 by 2 matrices can represent transformations of the plane, 3 by 3 matrices can represent transformations of 3D space. Linear Algebra: Eigenvalues of a 3x3 matrix. The eigenvectors corresponding to different eigenvalues need not be orthogonal. The eigenvalues of up to a 4x4 matrix can be calculated. Once we have the eigenvalues we can then go back and determine the eigenvectors for each eigenvalue. ... is a linearly independent set. But I want to use the class Jama for the calculation of the eigenvalues and eigenvectors, but I do not know how to use it; could anyone give me a hand? Thanks. If you love it, our example of the solution to eigenvalues and eigenvectors of a 3×3 matrix will help you get a better understanding of it. det(A − λI) = 0. By ranking your eigenvectors in order of their eigenvalues, highest to lowest, you get the principal components in order of significance. If W is a matrix such that W'*A = D*W', the columns of W are the left eigenvectors of A. We all know that for any 3 × 3 matrix, the number of eigenvalues (counted with multiplicity) is 3. I'm having a problem finding the eigenvectors of a 3x3 matrix with given eigenvalues. Eigenvalues, Eigenvectors, and Diagonalization (Math 240). Complex eigenvalues: find all of the eigenvalues and eigenvectors of $A=\begin{pmatrix}2&6\\-3&-4\end{pmatrix}$; the characteristic polynomial is $\lambda^2+2\lambda+10$. In linear algebra, the eigenvector does not change its direction under the associated linear transformation. Lambda represents a scalar value. I need some help with the following problem please? 
| {
"domain": "svc2006.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918502576513,
"lm_q1q2_score": 0.8570577824463963,
"lm_q2_score": 0.8670357598021707,
"openwebmath_perplexity": 349.6453176432852,
"openwebmath_score": 0.8708940148353577,
"tags": null,
"url": "http://svc2006.it/pmey/eigenvalues-and-eigenvectors-of-3x3-matrix.html"
} |
Let A be a 3x3 matrix with eigenvalues -1, 0, 1 and corresponding eigenvectors. The eigenvalues of a Hermitian matrix are real, since $(\bar\lambda - \lambda)v = (A^* - A)v = 0$ for a non-zero eigenvector $v$. Not too bad. You can use this to find out which of your. Eigenvalues and Eigenvectors: consider multiplying a square 3x3 matrix by a 3x1 (column) vector. Calculate the eigenvalues and the corresponding eigenvectors of the matrix. Our general strategy was: compute the characteristic polynomial. The second example is about a 3x3 matrix. Let us consider an example of two matrices, one of them a diagonal one, and another similar to it: A = {{1, 0, 0}, {0, 2, 0}, {0, 0, 0. Notice: if $x$ is an eigenvector, then $tx$ with $t\neq 0$ is also an eigenvector. Prove that if the cofactors don't all vanish they provide a column eigenvector. How to find the eigenvalues of a 3x3 matrix. A = np.array([[1, 0], [0, -2]]); print(A) prints [[ 1 0] [ 0 -2]]. The function la.eig computes the eigenvalues and eigenvectors. You might be stuck with thrashing through an algebraic. Eigenvalues and Eigenvectors, Imaginary and Real. A simple online EigenSpace calculator to find the space generated by the eigenvectors of a square matrix. The calculator will perform symbolic calculations whenever it is possible. Using ".real" to get rid of the imaginary part will give the wrong result (also, eigenvectors may have an arbitrary complex phase!). Linear Algebra: Eigenvalues of a 3x3 matrix. The columns of V are the eigenvectors of A. Take for example $\begin{pmatrix}3&1&2\\3&1&6\\2&2&2\end{pmatrix}$. One can verify the eigenvalues of this matrix. Eigenvector and Eigenvalue. An eigenvector associated with $\lambda_1$ is a nontrivial solution $\vec v_1$ to $(A-\lambda_1 I)\vec v = \vec 0$. 
| {
"domain": "svc2006.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918502576513,
"lm_q1q2_score": 0.8570577824463963,
"lm_q2_score": 0.8670357598021707,
"openwebmath_perplexity": 349.6453176432852,
"openwebmath_score": 0.8708940148353577,
"tags": null,
"url": "http://svc2006.it/pmey/eigenvalues-and-eigenvectors-of-3x3-matrix.html"
} |
The eigenvalues of A are given by the roots of the polynomial $\det(A - \lambda I_n) = 0$. The corresponding eigenvectors are the nonzero solutions of the linear system $(A - \lambda I_n)\vec x = \vec 0$. Collecting all solutions of this system, we get the corresponding eigenspace. If you have trouble understanding your eigenvalues and eigenvectors of a 3×3 matrix. Eigenvalues and Eigenvectors. Replacing ${\bb v}^{(j)}$ by ${\bb v}^{(j)}-\sum_{k\neq j} a_k {\bb v}^{(k)}$ results in a matrix whose determinant is the same as the original matrix. Eigenvectors of repeated eigenvalues. Let A be an n×n matrix and let $\lambda_1,\dots,\lambda_n$ be its eigenvalues. The matrix looks like this: |0 1 1| A= |1 0 1| |1 1 0| When I try to solve for the eigenvectors I end up with a 3x3 matrix containing all 1's and I get stumped there. If the matrix A is symmetric then its eigenvalues are all real. Description of Lab: your program will ask the user to enter a 3x3 matrix. We therefore saw that they were all real. Choose a random 3 by 3 matrix and find an eigenvalue and corresponding eigenvector. This condition will give you the eigenvalues and then, solving the system for each eigenvalue, you will find the eigenstates. This matrix calculator computes determinant, inverses, rank, characteristic polynomial, eigenvalues and eigenvectors. $v$ is an eigenvector with associated eigenvalue 3. I'm having a problem finding the eigenvectors of a 3x3 matrix with given eigenvalues. Since $v$ is non-zero, the matrix is singular, which means that its determinant is zero. I can find the eigenvector of the eigenvalue 0, but for the complex eigenvalues, I keep on getting the reduced row echelon form of:. Find all values of $a$ which will guarantee that A has eigenvalues 0, 3, and −3. Question: find the eigenvalues and eigenvectors of 3x3 matrices. 
| {
"domain": "svc2006.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918502576513,
"lm_q1q2_score": 0.8570577824463963,
"lm_q2_score": 0.8670357598021707,
"openwebmath_perplexity": 349.6453176432852,
"openwebmath_score": 0.8708940148353577,
"tags": null,
"url": "http://svc2006.it/pmey/eigenvalues-and-eigenvectors-of-3x3-matrix.html"
} |
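The symmetric case stated above (a real symmetric matrix has all-real eigenvalues) can be illustrated with NumPy's `eigh`, which is specialized for symmetric/Hermitian input. The matrix below is an arbitrary symmetric example, not one from the quoted text:

```python
import numpy as np

S = np.array([[4.0, 1.0, 2.0],
              [1.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])   # symmetric by construction

w, V = np.linalg.eigh(S)          # eigh assumes symmetric/Hermitian input

# Real symmetric => real eigenvalues (eigh returns a real array).
assert np.isrealobj(w)

# The eigenvectors form an orthonormal basis: V^T V = I.
assert np.allclose(V.T @ V, np.eye(3))

# And they diagonalize S: V^T S V = diag(w).
assert np.allclose(V.T @ S @ V, np.diag(w))
```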
Observation: $\det(A - \lambda I) = 0$ expands into a kth degree polynomial equation in the unknown λ, called the characteristic equation. In this special case, all vectors are still rotated counterclockwise except those in the direction of $(0,1)$ (which is the eigenvector). This calculator allows you to enter any square matrix from 2x2, 3x3, 4x4 all the way up to 9x9 in size. Find the eigenvalues and eigenvectors of $\begin{pmatrix}3&4\\4&8\end{pmatrix}$. Solution. In Section 5. Finding Eigenvectors of a 3x3 Matrix. Eigenvalues and Eigenvectors of a 3 by 3 matrix: just as 2 by 2 matrices can represent transformations of the plane, 3 by 3 matrices can represent transformations of 3D space. $\begin{pmatrix}1&3\\4&5\end{pmatrix}$, $\lambda = 1$. The vector x is called an eigenvector corresponding to λ. Eigenvalues and Eigenvectors. (Exactly one option must be correct.) Find the eigenvalues and eigenvectors of 3x3 matrices. In linear algebra, the eigenvector does not change its direction under the associated linear transformation. The normalized eigenvector for $\lambda = 5$ is: the three eigenvalues and eigenvectors now can be recombined to give the solution to the original 3x3 matrix as shown in Figures 8. Decomposing a matrix in terms of its eigenvalues and its eigenvectors gives valuable insights into the properties of the matrix. Equation (1) can be stated equivalently as $(A-\lambda I)v = 0$ (2), where I is the n-by-n identity matrix and 0 is the zero vector. They are used in a variety of data science techniques such as Principal Component Analysis for dimensionality reduction of features. The result is a 3x1 (column) vector. In this tutorial, we will explore NumPy's numpy.linalg module. (1) can be rewritten. Given a matrix A, recall that an eigenvalue of A is a number λ such that $Av = \lambda v$ for some vector $v$. | {
"domain": "svc2006.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918502576513,
"lm_q1q2_score": 0.8570577824463963,
"lm_q2_score": 0.8670357598021707,
"openwebmath_perplexity": 349.6453176432852,
"openwebmath_score": 0.8708940148353577,
"tags": null,
"url": "http://svc2006.it/pmey/eigenvalues-and-eigenvectors-of-3x3-matrix.html"
} |
If A is real, there is an orthonormal basis for $\mathbb R^n$ consisting of eigenvectors of A if and only if A is symmetric. This polynomial is called the characteristic polynomial. Question: how do you determine eigenvalues of a 3x3 matrix? Eigenvalues: an eigenvalue is a scalar $\lambda$ such that $Ax = \lambda x$ for a nontrivial $x$. In linear algebra the characteristic vector of a square matrix is a vector which does not change its direction under the associated linear transformation. The zero vector 0 is never an eigenvector, by definition. In this tutorial, we will write code in Python on how to compute eigenvalues and eigenvectors. Let A be the matrix given by $A = \begin{bmatrix}-2&0&1\\-5&3&a\\4&-2&-1\end{bmatrix}$ for some variable $a$. Subtracting A.trace()/3 from the diagonal shifts the eigenvalues but (in exact math) does not influence the eigenvectors. It will then compute the eigenvalues (real and complex) and eigenvectors (real and complex) for that matrix. Determining the eigenvalues of a 3x3 matrix. Linear Algebra: Eigenvectors and Eigenspaces for a 3x3 matrix. For the following matrices find (a) all eigenvalues, (b) linearly independent eigenvectors for each eigenvalue, (c) the algebraic and geometric multiplicity for each eigenvalue, and state whether the matrix is diagonalizable. Enter a matrix. Definition 4. Diagonalizable Matrices. Similarly, the characteristic equation of a 3x3 matrix can be written, and we find eigenvalues and eigenvectors of the matrix. Vectors that maintain their orientation when multiplied by matrix A. Eigenvalues: numbers (λ) that provide solutions for $AX = \lambda X$. | {
"domain": "svc2006.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918502576513,
"lm_q1q2_score": 0.8570577824463963,
"lm_q2_score": 0.8670357598021707,
"openwebmath_perplexity": 349.6453176432852,
"openwebmath_score": 0.8708940148353577,
"tags": null,
"url": "http://svc2006.it/pmey/eigenvalues-and-eigenvectors-of-3x3-matrix.html"
} |
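The trace remark above — subtracting tr(A)/3 from the diagonal of a 3x3 matrix shifts every eigenvalue by that amount without changing the eigenvectors — is easy to verify numerically. The matrix below is an arbitrary example:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])   # upper triangular: eigenvalues 2, 3, 5

shift = A.trace() / 3.0           # = 10/3
B = A - shift * np.eye(3)

wA, VA = np.linalg.eig(A)
wB, _ = np.linalg.eig(B)

# Every eigenvalue moves down by tr(A)/3 ...
assert np.allclose(np.sort(wB), np.sort(wA) - shift)

# ... while each eigenvector of A remains an eigenvector of B.
for lam, v in zip(wA, VA.T):
    assert np.allclose(B @ v, (lam - shift) * v)
```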
I can find the eigenvector of the eigenvalue 0, but for the complex eigenvalues, I keep on getting the reduced row echelon form of: | {
"domain": "svc2006.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918502576513,
"lm_q1q2_score": 0.8570577824463963,
"lm_q2_score": 0.8670357598021707,
"openwebmath_perplexity": 349.6453176432852,
"openwebmath_score": 0.8708940148353577,
"tags": null,
"url": "http://svc2006.it/pmey/eigenvalues-and-eigenvectors-of-3x3-matrix.html"
} |
| {
"domain": "svc2006.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918502576513,
"lm_q1q2_score": 0.8570577824463963,
"lm_q2_score": 0.8670357598021707,
"openwebmath_perplexity": 349.6453176432852,
"openwebmath_score": 0.8708940148353577,
"tags": null,
"url": "http://svc2006.it/pmey/eigenvalues-and-eigenvectors-of-3x3-matrix.html"
} |
Mathematics
OpenStudy (anonymous):
Prove that: $if:\mathop {\lim }\limits_{n \to \infty } {a_n} = a$ then $\mathop {\lim }\limits_{n \to \infty } \frac{{{a_1} + 2{a_2} + ... + n{a_n}}} {{{n^2}}} = \frac{a} {2}$
OpenStudy (anonymous):
Can I use l'Hospital's rule like that: $\mathop {\lim }\limits_{n \to \infty } \frac{{\sum\nolimits_{k = 1}^n {k{a_k}} }} {{{n^2}}} = \mathop {\lim }\limits_{n \to \infty } \frac{{\sum\nolimits_{k = 1}^n {{a_k}} }} {{2n}} = \mathop {\lim }\limits_{n \to \infty } \frac{{an}} {{2n}} = \frac{a} {2}$ Is this a rigorous way to prove it? Please tell me your thoughts. It will be appreciated.
OpenStudy (anonymous):
i am thinking. maybe can prove it directly?
myininaya (myininaya):
we can use l'hospital
OpenStudy (anonymous):
I don't know how to do that, I want to use the precise definiition of series limit.
OpenStudy (anonymous):
you need epsilon, N proof? because it looks like you should get $a\sum_{k=1}^nk=\frac{a(n(n+1)}{2}$
OpenStudy (anonymous):
OpenStudy (anonymous):
In fact, it is a question from mathematical analysis. So yes, epsilon, N proof will be best.
myininaya (myininaya):
$\lim_{n \rightarrow \infty}\frac{a_1+a_2+\cdot \cdot \cdot n a_n}{n^2} \cdot \frac{\frac{1}{n^2}}{\frac{1}{n^2}}$$=\lim_{n \rightarrow \infty}\frac{\frac{a_1}{n^2}+\frac{a_2}{n^2}+ \cdot \cdot \cdot +\frac{n a_n}{n^2}}{\frac{n^2}{n^2}}$ $=\lim_{n \rightarrow \infty}\frac{0+0+\cdot \cdot \cdot +\frac{1}{n}a_n}{1}=\lim_{n \rightarrow \infty}\frac{a_n}{n}=0$ ? how can we show its $\frac{a}{2}$
OpenStudy (anonymous):
The answer from the textbook is a/2. And I saw a guy use l'Hospital's rule in that way, so I just copied his approach... I hope it works. But the way l'Hospital's rule is used there is really confusing... And if you can give the "epsilon-N" proof, that will be best...
OpenStudy (anonymous):
Take your time. I have go to work now. See you.
myininaya (myininaya):
i'm saying i can't prove that because it doesn't = a/2 i get 0 see above | {
"domain": "questioncove.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918519527645,
"lm_q1q2_score": 0.8570577788217987,
"lm_q2_score": 0.8670357546485407,
"openwebmath_perplexity": 2256.058497394396,
"openwebmath_score": 0.9999911785125732,
"tags": null,
"url": "https://questioncove.com/updates/4e56f2e10b8b9ebaa8948ee6"
} |
myininaya (myininaya):
i'm saying i can't prove that because it doesn't = a/2 i get 0 see above
myininaya (myininaya):
and the way i used above wasn't l'hospital
OpenStudy (zarkon):
it is a/2
myininaya (myininaya):
what did i do wrong then
OpenStudy (zarkon):
you cannot do what you did above
myininaya (myininaya):
myininaya (myininaya):
i see
myininaya (myininaya):
its because i'm forgeting the terms before na_n
OpenStudy (zarkon):
yes
myininaya (myininaya):
like (n-1)a_{n-1}
OpenStudy (zarkon):
yes
myininaya (myininaya):
and so on...
OpenStudy (zarkon):
yes
myininaya (myininaya):
lol
myininaya (myininaya):
i will let you prove it
myininaya (myininaya):
because i know you want to badly
myininaya (myininaya):
lol
OpenStudy (anonymous):
why isn't it $\lim_{n\rightarrow \infty}\frac{an(n+1)}{2n^2}=\frac{a}{2}$?
OpenStudy (zarkon):
how do you factor out an a?
OpenStudy (zarkon):
that is what it looks like you are doing...replacing a_n with a
OpenStudy (zarkon):
the L'Hospitals rule use by prost in his first post is incorrect too
myininaya (myininaya):
wheres that?
OpenStudy (zarkon):
this ... $\mathop {\lim }\limits_{n \to \infty } \frac{{\sum\nolimits_{k = 1}^n {k{a_k}} }} {{{n^2}}} = \mathop {\lim }\limits_{n \to \infty } \frac{{\sum\nolimits_{k = 1}^n {{a_k}} }} {{2n}} = \mathop {\lim }\limits_{n \to \infty } \frac{{an}} {{2n}} = \frac{a} {2}$
OpenStudy (zarkon):
an epsilon based proof is the way to go I believe
OpenStudy (anonymous): | {
"domain": "questioncove.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918519527645,
"lm_q1q2_score": 0.8570577788217987,
"lm_q2_score": 0.8670357546485407,
"openwebmath_perplexity": 2256.058497394396,
"openwebmath_score": 0.9999911785125732,
"tags": null,
"url": "https://questioncove.com/updates/4e56f2e10b8b9ebaa8948ee6"
} |
OpenStudy (zarkon):
an epsilon based proof is the way to go I believe
OpenStudy (anonymous):
I am back... hard day. Has anyone heard of the Stolz theorem? Using it: $\mathop {\lim }\limits_{n \to \infty } \frac{{{a_1} + 2{a_2} + ... + n{a_n}}} {{{n^2}}} = \mathop {\lim }\limits_{n \to \infty } \frac{{n{a_n}}} {{{n^2} - {{\left( {n - 1} \right)}^2}}} = \mathop {\lim }\limits_{n \to \infty } \frac{{an}} {{2n - 1}} = \frac{a} {2}$ And forget my first proof above; it is confusing and makes no sense. The Stolz theorem is very useful. But I still want the "epsilon-N" proof. Can anyone prove it?
"domain": "questioncove.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918519527645,
"lm_q1q2_score": 0.8570577788217987,
"lm_q2_score": 0.8670357546485407,
"openwebmath_perplexity": 2256.058497394396,
"openwebmath_score": 0.9999911785125732,
"tags": null,
"url": "https://questioncove.com/updates/4e56f2e10b8b9ebaa8948ee6"
} |
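The Stolz–Cesàro computation above can be sanity-checked numerically. Below, a sample sequence a_k = a + 1/k (so a_k → a) is used, and the averages (a_1 + 2a_2 + ... + n a_n)/n² are evaluated for growing n; they approach a/2 as claimed:

```python
# Numerical check of lim (a_1 + 2 a_2 + ... + n a_n) / n^2 = a / 2
# for the sample sequence a_k = a + 1/k, which converges to a = 3.
a = 3.0

def weighted_average(n):
    s = sum(k * (a + 1.0 / k) for k in range(1, n + 1))
    return s / n ** 2

for n in (10, 100, 10000):
    print(n, weighted_average(n))

# For large n the value is close to a/2 = 1.5.
assert abs(weighted_average(10000) - a / 2) < 1e-3
```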
# How many ways to of place $1, 2, 3, \dots, 9$ in a circle so the sum of any three consecutive numbers is divisible by $3.$
Determine the number of ways of placing the numbers $$1, 2, 3, \dots, 9$$ in a circle, so that the sum of any three numbers in consecutive positions is divisible by $$3.$$ (Two arrangements are considered the same if one arrangement can be rotated to obtain the other.)
I've experimented with possible combinations and found that it works when we put a multiple of 3 next to a number one more than a multiple of three beside a number that is two more than a multiple of 3. If we continue with this pattern around the circle, it works.
However, I'm curious in finding a more systematic approach than listing out all different combinations.
• Well consider the numbers in position $k$ and $k+3$ will have to be congruent $\mod 3$.. – fleablood Jul 8 at 5:08
In general, suppose we have the numbers $$1,2,\dots,3n$$, and we would like to place them in a circle so that the sum of any three consecutive terms is divisible by $$3$$.
Observe that the numbers at positions $$k$$ and $$k+3$$ must always be congruent modulo $$3$$. Thus we can partition the points along the circle into three sets which stand for the residues modulo $$3$$ of the positions. If we fix the number $$1$$ at, say, position $$1$$, then this tells us that every point at position $$3k+1$$ has residue $$1$$ modulo $$3$$.
Now we have a choice: either the numbers at positions $$3k+2$$ have residue $$0$$, or they have residue $$2$$. Either way, note that with the number $$1$$ fixed, its residue class can be arranged in only $$(n-1)!$$ ways, while each of the other two "partition classes" can be arranged in $$n!$$ different ways, giving us $$2\,(n-1)!\,(n!)^2$$ possibilities.
In this particular case, $$n=3$$, and the answer is $$144$$ (if rotations are counted as the same).
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918516137419,
"lm_q1q2_score": 0.8570577768297469,
"lm_q2_score": 0.867035752930664,
"openwebmath_perplexity": 151.29742150008124,
"openwebmath_score": 0.8086175918579102,
"tags": null,
"url": "https://math.stackexchange.com/questions/3749429/how-many-ways-to-of-place-1-2-3-dots-9-in-a-circle-so-the-sum-of-any-thre/3749438"
} |
• If rotations are the same we can assume position $1$ has $3$. Then positions $4$ and $7$ have $6$ and $9$; there are $2$ ways to do that. Position $2$ is either $\pm 1\pmod 3$: there are $2$ choices. Position $2\equiv$ position $5\equiv$ position $8$, so there are $3!$ ways to do that. And position $3\equiv$ position $6\equiv$ position $9$, so there are $3!$ ways to do that. So there are $2*2*6*6=144$ ways if rotations are counted the same. If reflections are counted the same there are $2*6*6=72$ ways. – fleablood Jul 8 at 5:37
Brain storming. If we label the number positions as $$a_1,.....,a_9$$ then $$a_k+a_{k+1} + a_{k+2} \equiv 0 \equiv a_{k+1} + a_{k+2} + a_{k+3}\pmod 3$$ so $$a_k\equiv a_{k+3}\pmod 3$$.
There are only three equivalence classes, each with $$3$$ elements, and so $$a_3, a_6, a_9$$ must all contain elements from one equivalence class. There are $$3$$ choices of which class and $$3!$$ ways to place the elements. $$a_1, a_4, a_7$$ must also contain elements from one equivalence class and there are $$2$$ choices of classes and $$3!$$ ways to arrange them. And for $$a_2, a_5, a_8$$ there is one choice of class and $$3!$$ ways to arrange them.
So there are $$3*3!*2*3!*1*3! = 6^4$$ ways to do this.
As rotations are considered the same (but not mirror symmetries???) divide by $$9$$.
So the answer is $$\frac {6^4}9$$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918516137419,
"lm_q1q2_score": 0.8570577768297469,
"lm_q2_score": 0.867035752930664,
"openwebmath_perplexity": 151.29742150008124,
"openwebmath_score": 0.8086175918579102,
"tags": null,
"url": "https://math.stackexchange.com/questions/3749429/how-many-ways-to-of-place-1-2-3-dots-9-in-a-circle-so-the-sum-of-any-thre/3749438"
} |
• Rotations are the same, so I think this overcounts by a factor of 3 – boink Jul 8 at 5:18
• Since we're placing them in a circle, it's possible we ought to divide by 9 or maybe 18 to get rid of the symmetric options (equivalently, require $a_1=1$, and possibly divide by 2). It's hard to tell, though. – Arthur Jul 8 at 5:19
• @fleablood I just meant that the problem statement says they're the same – boink Jul 8 at 5:22
• "Since we're placing them in a circle" Placing the them in a circle means that $a_1$ is congruent to $a_8$. It DOESN"T mean that rotations are considered to be the same any more than Mr. Left in a word problem means you need to subtract. But if rotations are the same then divide by $9$. If symmetry is considered the same divide by $18$. – fleablood Jul 8 at 5:22
• "I just meant that the problem statement says they're the same" Oh, I didn't see that part. That's one of my pet peeve. If Mr. Left works $8$ hours a day, $5$ days a week how many hours a week does he work. Answer: Well, since the problem contains the word "left" that means we subtract so the answer is $8-5=3$. And question. How many ways are there to place people are a table. Answer: As the problem contains the word "table" and tables are circles rotations are the same.... No, they aren't unless the question says they are. – fleablood Jul 8 at 5:28 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918516137419,
"lm_q1q2_score": 0.8570577768297469,
"lm_q2_score": 0.867035752930664,
"openwebmath_perplexity": 151.29742150008124,
"openwebmath_score": 0.8086175918579102,
"tags": null,
"url": "https://math.stackexchange.com/questions/3749429/how-many-ways-to-of-place-1-2-3-dots-9-in-a-circle-so-the-sum-of-any-thre/3749438"
} |
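The count can be verified by brute force. Fixing the number 1 at position 0 selects exactly one representative from each rotation class (each class of 9 rotations has 1 at position 0 exactly once), so the tally below counts arrangements up to rotation; it agrees with fleablood's $6^4/9 = 144$:

```python
from itertools import permutations

count = 0
# Fix the number 1 at position 0; each rotation class has exactly one
# representative of this form, so this counts arrangements up to rotation.
for rest in permutations(range(2, 10)):
    circle = (1,) + rest
    if all(sum(circle[i % 9] for i in range(k, k + 3)) % 3 == 0
           for k in range(9)):
        count += 1

print(count)
assert count == 144
assert count == 6 ** 4 // 9
```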
# mod Distributive Law, factoring $\!\!\bmod\!\!:$ $\ ab\bmod ac = a(b\bmod c)$
I stumbled across this problem
Find $$\,10^{\large 5^{102}}$$ modulo $$35$$, i.e. the remainder left after it is divided by $$35$$
To begin, we try to find a simplification for $$10$$ and get: $$10 \equiv 3 \text{ mod } 7\\ 10^2 \equiv 2 \text{ mod } 7 \\ 10^3 \equiv 6 \text{ mod } 7$$
As these problems are meant to be done without a calculator, calculating this further is cumbersome. The solution, however, states that since $$35 = 5 \cdot 7$$, we only need to find $$10^{5^{102}} \text{ mod } 7$$. I can see (not immediately) the logic behind this. Basically, since $$10^k$$ is always divisible by $$5$$ for any positive $$k$$, we have $$10^k - r = 5\cdot 7\cdot q$$ for some integer $$q$$. But then it's not immediately obvious how/why the fact that $$5$$ divides $$10^k$$ helps in this case.
My question is, in general, if we have some mod system with $$a^k \equiv r \text{ mod } m$$ where $$m$$ can be decomposed into a product of factors $$m_1 \times m_2 \times \cdots$$, do we only need to find the residue modulo those factors $$m_i$$ that do not divide $$a$$? (And if this is the case, why?) If this is not the case, then why/how is the solution justified in this specific instance?
• How is $10 \equiv 1$ mod 7? – Junglemath May 2 '19 at 13:13
• @Junglemath, idk.... – q.Then May 14 '20 at 7:17
The "logic" is that we can use a mod distributive law to pull out a common factor $$\,c=5,\,$$ i.e.
$$ca\bmod cn =\, c(a\bmod n)\quad\qquad$$
This decreases the modulus from $$\,cn\,$$ to $$\,n, \,$$ simplifying modular arithmetic. It may also obviate CRT = Chinese Remainder Theorem calculations, eliminating needless inverse computations, which are much more difficult than the above for large numbers (or polynomials, e.g. see this answer).
This distributive law is often more convenient in congruence form, e.g.
$$\quad \qquad ca\equiv c(a\bmod n)\ \ \ {\rm if}\ \ \ \color{#d0f}{cn\equiv 0}\ \pmod{\! m}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9748211561049158,
"lm_q1q2_score": 0.8570108790929958,
"lm_q2_score": 0.8791467785920306,
"openwebmath_perplexity": 946.4025963228933,
"openwebmath_score": 0.996309220790863,
"tags": null,
"url": "https://math.stackexchange.com/questions/2059752/mod-distributive-law-factoring-bmod-ab-bmod-ac-ab-bmod-c"
} |
$$\quad \qquad ca\equiv c(a\bmod n)\ \ \ {\rm if}\ \ \ \color{#d0f}{cn\equiv 0}\ \pmod{\! m}$$
because we have: $$\,\ c(a\bmod n) \equiv c(a\! +\! kn)\equiv ca+k(\color{#d0f}{cn})\equiv ca\pmod{\!m}$$
e.g. in the OP: $$\ \ I\ge 1\,\Rightarrow\, 10^{\large I+N}\!\equiv 10^{\large I}(10^{\large N}\!\bmod 7)\ \ \ {\rm by}\ \ \ 10^I 7\equiv 0\,\pmod{35}$$
Let's use that. First note that exponents on $$10$$ can be reduced mod $$\,6\,$$ by little Fermat,
i.e. notice that $$\ \color{#c00}{{\rm mod}\,\ 7}\!:\,\ 10^{\large 6}\equiv\, 1\,\Rightarrow\, \color{#c00}{10^{\large 6J}\equiv 1}.\$$ Thus if $$\ I \ge 1\$$ then as above
$$\phantom{{\rm mod}\,\ 35\!:\,\ }\color{#0a0}{10^{\large I+6J}}\!\equiv 10^{\large I} 10^{\large 6J}\!\equiv 10^{\large I}(\color{#c00}{10^{\large 6J}\!\bmod 7})\equiv \color{#0a0}{10^{\large I}}\,\pmod{\!35}$$
Our power $$\ 5^{\large 102} = 1\!+\!6J\$$ by $$\ {\rm mod}\,\ 6\!:\,\ 5^{\large 102}\!\equiv (-1)^{\large 102}\!\equiv 1$$
Therefore $$\ 10^{\large 5^{\large 102}}\!\! = \color{#0a0}{10^{\large 1+6J}}\!\equiv \color{#0a0}{10^{\large 1}} \pmod{\!35}\$$
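The conclusion is easy to confirm directly in Python: the three-argument `pow` performs modular exponentiation and handles the 72-digit exponent $5^{102}$ instantly. The loop also spot-checks the mod distributive law $ca \bmod cn = c\,(a \bmod n)$ on a few arbitrary triples:

```python
# Direct check of 10^(5^102) mod 35 via modular exponentiation.
e = 5 ** 102
assert pow(10, e, 35) == 10

# Spot-check the mod distributive law: c*a mod c*n == c*(a mod n).
for c, a, n in [(5, 123, 7), (10, 99, 7), (7, 10 ** 6, 12)]:
    assert (c * a) % (c * n) == c * (a % n)

print(pow(10, e, 35))  # 10
```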
Remark $$\$$ For many more worked examples see the complete list of linked questions. Often this distributive law isn't invoked by name. Rather its trivial proof is repeated inline, e.g. from a recent answer, using $$\,cn = 14^2\cdot\color{#c00}{25}\equiv 0\pmod{100}$$
\begin{align}&\color{#c00}{{\rm mod}\ \ 25}\!:\ \ \ 14\equiv 8^{\large 2}\Rightarrow\, 14^{\large 10}\equiv \overbrace{8^{\large 20}\equiv 1}^{\rm\large Euler\ \phi}\,\Rightarrow\, \color{#0a0}{14^{\large 10N}}\equiv\color{#c00}{\bf 1}\\[1em] &{\rm mod}\ 100\!:\,\ 14^{\large 2+10N}\equiv 14^{\large 2}\, \color{#0a0}{14^{\large 10N}}\! \equiv 14^{\large 2}\!\! \underbrace{(\color{#c00}{{\bf 1} + 25k})}_{\large\color{#0a0}{14^{\Large 10N}}\!\bmod{\color{#c00}{25}}}\!\!\! \equiv 14^{\large 2} \equiv\, 96\end{align}
This distributive law is actually equivalent to CRT as we sketch below, with $$\,m,n\,$$ coprime | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9748211561049158,
"lm_q1q2_score": 0.8570108790929958,
"lm_q2_score": 0.8791467785920306,
"openwebmath_perplexity": 946.4025963228933,
"openwebmath_score": 0.996309220790863,
"tags": null,
"url": "https://math.stackexchange.com/questions/2059752/mod-distributive-law-factoring-bmod-ab-bmod-ac-ab-bmod-c"
} |
This distributive law is actually equivalent to CRT as we sketch below, with $$\,m,n\,$$ coprime
\begin{align} x&\equiv a\!\!\!\pmod{\! m}\\ \color{#c00}x&\equiv\color{#c00} b\!\!\!\pmod{\! n}\end{align} $$\,\Rightarrow\, x\!-\!a\bmod mn\, =\, m\left[\dfrac{\color{#c00}x-a}m\bmod n\right] = m\left[\dfrac{\color{#c00}b-a}m\bmod n\right]$$
which is exactly the same form solution given by Easy CRT. But the operational form of this law often makes it much more convenient to apply in computations versus the classical CRT formula.
Fractional extension It easily extends to fractions, e.g. from here
Notice $$\,\ \dfrac{\color{#c00}{11}}{35}\bmod \color{#c00}{11}(9)\,=\, \color{#c00}{11}(\color{#0a0}8)\,$$ by $$\color{#0a0}{\bmod 9\!:\ \dfrac{1}{35}\equiv \dfrac{1}{-1}\equiv 8},\$$ via
Theorem $$\ \ \dfrac{\color{#c00}ab}d\bmod \color{#c00}ac\, =\, \color{#c00}a\left(\color{#0a0}{\dfrac{b}d\bmod c}\right)\ \$$ if $$\ \ (d,ac) = 1$$
Proof $$\,$$ Bezout $$\Rightarrow$$ exists $$\, d' \equiv d^{-1}\pmod{\!ac}.\,$$ Factoring out $$\,\color{#c00}a\,$$ by mDL
$$\color{#c00}abd'\bmod \color{#c00}ac\, =\ \color{#c00}a(bd'\bmod c)\qquad\qquad\qquad$$
and $$\,dd' \equiv 1\pmod{\!ac}\Rightarrow dd' \equiv 1\pmod{\!c},\,$$ so $$\,d'\bmod c = d^{-1}\bmod c$$
First, note that $10^{7}\equiv10^{1}\pmod{35}$.
Therefore $n>6\implies10^{n}\equiv10^{n-6}\pmod{35}$.
Let's calculate $5^{102}\bmod6$ using Euler's theorem:
• $\gcd(5,6)=1$
• Therefore $5^{\phi(6)}\equiv1\pmod{6}$
• $\phi(6)=\phi(2\cdot3)=(2-1)\cdot(3-1)=2$
• Therefore $\color\red{5^{2}}\equiv\color\red{1}\pmod{6}$
• Therefore $5^{102}\equiv5^{2\cdot51}\equiv(\color\red{5^{2}})^{51}\equiv\color\red{1}^{51}\equiv1\pmod{6}$
Therefore $10^{5^{102}}\equiv10^{5^{102}-6}\equiv10^{5^{102}-12}\equiv10^{5^{102}-18}\equiv\ldots\equiv10^{1}\equiv10\pmod{35}$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9748211561049158,
"lm_q1q2_score": 0.8570108790929958,
"lm_q2_score": 0.8791467785920306,
"openwebmath_perplexity": 946.4025963228933,
"openwebmath_score": 0.996309220790863,
"tags": null,
"url": "https://math.stackexchange.com/questions/2059752/mod-distributive-law-factoring-bmod-ab-bmod-ac-ab-bmod-c"
} |
Carrying on from your calculation: \begin{align} 10^3&\equiv 6 \bmod 7 \\ &\equiv -1 \bmod 7 \\ \implies 10^6 = (10^3)^2&\equiv 1 \bmod 7 \end{align} We could reach the same conclusion more quickly by observing that $7$ is prime so by Fermat's Little Theorem, $10^{(7-1)}\equiv 1 \bmod 7$.
So we need to know the value of $5^{102}\bmod 6$, and here again $5\equiv -1 \bmod 6$ so $5^{\text{even}}\equiv 1 \bmod 6$. (Again there are other ways to the same conclusion, but spotting $-1$ is often useful).
Thus $10^{\large 5^{102}}\equiv 10^{6k+1}\equiv 10^1\equiv 3 \bmod 7$.
Now the final step uses the Chinese remainder theorem for uniqueness of the solution to the congruences: $$\left.\begin{align} x&\equiv 0 \bmod 5 \\ x&\equiv 3 \bmod 7 \\ \end{align}\right\}\implies x\equiv 10 \bmod 35$$
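The CRT step can be brute-forced over one full period to confirm the solution is unique (a sketch):

```python
# x ≡ 0 (mod 5) and x ≡ 3 (mod 7) has exactly one solution mod 35
solutions = [x for x in range(35) if x % 5 == 0 and x % 7 == 3]
assert solutions == [10]
```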
# Probability of failure of a light bulb in years
Let's assume we have a lightbulb with a maximum lifespan of 4 years. We are asked to create a transition matrix (Markov chain theory) for the bulb. The bulb is checked once a year, and if it is found that the bulb does not work it is replaced by a new one.
We know that the probabilities of failure during the 4 year period are: 0.2, 0.4, 0.3 and 0.1. After four years, the lightbulb is replaced with probability 1. So we have the following 4 states:
• $$S_0$$ - the lightbulb is new
• $$S_1$$ - the lightbulb is 1 year old
• $$S_2$$ - the lightbulb is 2 years old
• $$S_3$$ - the lightbulb is 3 years old
How to create the probability transition matrix? For the first year, it's clear. The bulb goes dead with probability $$p_{0,0} = 0.2$$ and it keeps working with probability $$p_{0,1} = 0.8$$. But I am not sure how to compute the probabilities for the following years. In the materials for my course, I found the following calculation:
$$p_{2,1} = \frac{0.4}{0.8} = 0.5 \, , \quad p_{2,3} = \frac{0.3+0.1}{0.8} = 0.5 \, , \quad p_{3,1} = \frac{0.3}{0.4} = 0.75 \, , \quad p_{3,4} = \frac{0.1}{0.4} = 0.25$$
So the probability transition matrix is:
$$\begin{bmatrix} 0.2&0.8&0&0\\0.5&0&0.5&0\\0.75&0&0&0.25\\1&0&0&0\end{bmatrix}$$
Is this correct? I fail to see why $$p_{2,3}$$ uses the probability of failure in the last year.
• The matrix is right. It's Bayes' theorem. – Oolong milk tea May 11 '19 at 11:50
• Does the second row represent the transition probabilities from $S_1$ to $S_0$, $S_1$,..,$S_3$, respectively? – John Douma May 11 '19 at 12:01
• @JohnDouma Yes, it does. – Jiří Pešík May 11 '19 at 12:04
• Then shouldn't those $0.5$s be replaced by $0.4$ and $0.6$? – John Douma May 11 '19 at 12:05
• @JohnDouma The numbers are results of $p_{2,1} = 0.5$ and $p_{2,3} = 0.5$ given above the matrix. The point is I am not sure if they're right or not. – Jiří Pešík May 11 '19 at 12:08 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9748211604938803,
"lm_q1q2_score": 0.8570108752347849,
"lm_q2_score": 0.8791467706759583,
"openwebmath_perplexity": 506.7833399709539,
"openwebmath_score": 0.9645945429801941,
"tags": null,
"url": "https://math.stackexchange.com/questions/3221999/probability-of-failure-of-a-light-bulb-in-years"
} |
The known probabilities, summing to $$1$$, are the probabilities at birth of failing during the 1st, 2nd, 3rd or 4th year. Upon failure, the bulb is replaced with a new one, which thus has the same probabilities as above.
We have therefore the following scheme.
So the probability (at birth) $$P_2$$ to fail in the 2nd year (not before, and not after) will be given by the probability to survive for the first year times the probability $$p_2$$ to fail exactly in the 2nd year (given that it survived the first). And analogously for the others, i.e. $$\eqalign{ & p_{\,1} = 0.2 \cr & \left( {1 - p_{\,1} } \right)p_{\,2} = 0.8 \cdot p_{\,2} = P_{\,2} = 0.4\quad \Rightarrow \quad p_{\,2} = 0.5 \cr & \left( {1 - p_{\,1} } \right)\left( {1 - p_{\,2} } \right)p_{\,3} = P_{\,3} \quad \quad \Rightarrow \quad p_{\,3} = 0.75 \cr & \left( {1 - p_{\,1} } \right)\left( {1 - p_{\,2} } \right)\left( {1 - p_{\,3} } \right)p_{\,4} = P_{\,4} \quad \Rightarrow \quad p_{\,4} = 1 \cr}$$
And $$p_k,(1-p_k)$$ are the entries of the matrix.
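The calculation above is easy to script. This sketch recovers the conditional per-year probabilities $$p_k$$ from the at-birth probabilities $$P_k$$ and assembles the transition matrix, checking it against the one in the question:

```python
# Recover the per-year (conditional) failure probabilities p_k from the
# at-birth probabilities P_k, then assemble the transition matrix.
P = [0.2, 0.4, 0.3, 0.1]              # P_k: probability (at birth) of failing in year k

p, survive = [], 1.0
for Pk in P:
    p.append(Pk / survive)            # p_k = P_k / P(survived years 1..k-1)
    survive *= 1 - p[-1]

n = len(p)
T = [[0.0] * n for _ in range(n)]
for k in range(n):
    T[k][0] = p[k]                    # failure -> replaced, back to state S0
    if k + 1 < n:
        T[k][k + 1] = 1 - p[k]        # survival -> one year older

expected = [[0.2, 0.8, 0.0, 0.0],
            [0.5, 0.0, 0.5, 0.0],
            [0.75, 0.0, 0.0, 0.25],
            [1.0, 0.0, 0.0, 0.0]]
assert all(abs(T[i][j] - expected[i][j]) < 1e-9
           for i in range(n) for j in range(n))
```

Every row sums to $$1$$, as a stochastic matrix must.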
The key lies in interpreting the phrase “the probabilities of failure during the 4 year period” (was the original in English?). It’s pretty unlikely that these numbers represent the transition probabilities that you’re trying to construct. It would be a rather miraculous light bulb that gets more reliable the longer it’s in service. The numbers sum to $$1$$ and we know that the bulb only lasts four years maximum, so the likelier reading is that these numbers are the probabilities that when the bulb fails, it is in its $$k$$th year of service. That is, these four numbers are the probabilities that it is year $$k$$ of service given that the bulb has been found to have failed.
This reading is borne out by the computations in the course materials, which are applications of Bayes’ theorem (as pointed out by Oolong milk tea). For the transition matrix, you need the probabilities that the bulb is found to have failed given that it is in year $$k$$ of service, which is just the sort of thing that Bayes’ theorem allows you to compute from the given data.
So, for example, the denominator $$0.8$$ in $$p_{21}=\frac{0.4}{0.8}$$ is the probability that the light bulb is one year old, i.e., the probability that it didn’t fail in its first year of service, which is simply $$p_{12}=0.8$$. The numerator is the probability of the bulb’s being a year old given that it failed the test, which is $$0.4$$ from the given probabilities, multiplied by $$1$$ since it did fail the test. The computation of $$p_{23}$$ is a bit odd. It looks like the authors used $$\Pr(\text{test fails in third year} \cup \text{test fails in fourth year}) = 0.3+0.1$$ for the probability that the bulb passes when tested during the second year. One could’ve simply set $$p_{23}=1-p_{21}$$ since there are only two possible transitions from $$S_1$$. Fortunately, the two values agree.
• The original article was not in English, however, it was described very briefly. Your interpretation makes sense to me. – Jiří Pešík May 12 '19 at 20:22
# Solving Systems of Linear Differential Equations by Elimination
For a homework problem, we are provided:
$\frac{dx}{dt}=-y + t$
$\frac{dy}{dt}=x-t$
Putting these into differential operator notation and separating the dependent variables from the independent:
$Dx+y=t$
$Dy-x=-t$
My first inclination is to apply the D operator to the second equation to eliminate Dx and get:
$D^2y+y=t-1$
I solve the homogenous part and end up with $y_c=C_1\cos(t) + C_2\sin(t).$
Using annihilator approach and method of undetermined coefficients, I determine that $y_p=t-1$.
General solution for $y(t) = C_1\cos(t)+C_2\sin(t)+t-1$.
After plugging $y$ into the second equation, I get $x(t)=-C_1\sin(t)+C_2\cos(t)+1+t$
Checking my answer against the back of the book, they show: $x(t) = C_1\cos(t)+C_2\sin(t)+t+1$ and $y(t)=C_1\sin(t)-C_2\cos(t)+t-1$
I can't seem to find what I did wrong. The Chegg solution eliminates $y$ instead of $x$ and arrives at the book's answer. Does the variable chosen for elimination matter? Halp!
• Oops, you're correct. I interchanged the x(t) and y(t) from the back of the book. Will fix now. – Irongrave Mar 16 '15 at 0:37
• Both your solution and the book solution are correct and coincide up to renaming resp. replacing the constants. – Dr. Lutz Lehmann Mar 16 '15 at 3:16
• Look here. – Kw08 Jan 22 '17 at 1:32
Are you sure the original system is written as it is in the book? (The problem was updated to correct $x(t), y(t)$.)
I get $$x'' +x = t + 1 \implies x(t) = c_1 \cos t + c_2 \sin t + t + 1$$ and $$y'' + y = t - 1 \implies y(t) = c_1 \sin t + c_2 \cos t + t - 1.$$
Note: this can be written as $y(t) = c_1 \sin t - c_2 \cos t + t - 1$ because $c_2$ is an arbitrary constant. Showing the negative removes the confusion when plugging back in so the authors decided to show it as part of the solution.
You can easily verify this solution by plugging it back into the original system.
Update Let's do the first in more detail. We have:
$$x'' +x = t + 1$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9748211604938802,
"lm_q1q2_score": 0.8570108706047319,
"lm_q2_score": 0.8791467659263148,
"openwebmath_perplexity": 388.29875462359126,
"openwebmath_score": 0.9066839218139648,
"tags": null,
"url": "https://math.stackexchange.com/questions/1191651/solving-systems-of-linear-differential-equations-by-elimination"
} |
To solve the homogeneous, we have $m^2 + 1 = 0 \implies m_{1,2} = \pm ~ i$, yielding:
$$x_h(t) = c_1 \cos t + c_2 \sin t$$
For the particular, we can choose $x_p = a + b t$, and substituting back into the DEQ, yields:
$$x'' + x = a + bt = 1 + t \implies a = b = 1$$
This produces:
$$x(t) = x_h(t) + x_p(t) = c_1 \cos t + c_2 \sin t + t + 1$$
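A quick numerical check confirms that this general solution satisfies $x''+x=t+1$; the sketch below uses a central-difference second derivative and arbitrary constants chosen for the test:

```python
import math

def x(t, c1=2.0, c2=-3.0):
    # general solution x(t) = c1*cos t + c2*sin t + t + 1 (c1, c2 arbitrary)
    return c1 * math.cos(t) + c2 * math.sin(t) + t + 1

def second_derivative(f, t, h=1e-4):
    # central-difference approximation of f''(t)
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

for t in [0.0, 0.7, 2.5, -1.3]:
    residual = second_derivative(x, t) + x(t) - (t + 1)
    assert abs(residual) < 1e-5       # x'' + x = t + 1 holds
```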
• Thanks, I just fixed that. I am still having difficulty understanding where I went wrong. – Irongrave Mar 16 '15 at 0:39
• I will add details on the $x(t)$ calculation. – Amzoti Mar 16 '15 at 0:41
• Thanks. I get that that works if you choose to eliminate y, but why doesn't eliminating x work the same way? – Irongrave Mar 16 '15 at 0:46
• It does, in my solution I absorb the negative they show into the constant. Clear? – Amzoti Mar 16 '15 at 0:47
• @Irongrave: Of course do the signs and the order of the constants matter, since the two solution components are connected by them. However, it is quite permissible to do a linear transformation of the constants by replacing $C_1=D_2$ and $C_2=-D_1$ to go from the book solution with constants $(C_1,C_2)$ to your solution form with renamed constants $(D_1,D_2)$. – Dr. Lutz Lehmann Mar 16 '15 at 3:14
You could of course also see the complex differential equation $$\dot z=i·z+(1-i)·t$$ in this system.
Its homogeneous solution is $z=c·e^{it}$. The inhomogeneous solution can be constructed as usual per linear ansatz $z_p=a+b·t$ leading to $$b=i·a+i·b·t+(1-i)·t$$ and thus $b=1+i$ and $a=1-i$ and the full solution $$z=c·e^{it}+(1-i)·(1+i·t)$$ Separating into real and imaginary part gives the solution of the original system.
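The complex-valued solution above can be verified the same way; this sketch checks $\dot z = iz + (1-i)t$ numerically for one arbitrary choice of the constant $c$:

```python
import cmath

c = 0.5 - 1.25j                       # arbitrary constant for the check

def z(t):
    # z(t) = c*e^{it} + (1-i)(1+it)
    return c * cmath.exp(1j * t) + (1 - 1j) * (1 + 1j * t)

def dz(t, h=1e-6):
    # central-difference approximation of z'(t)
    return (z(t + h) - z(t - h)) / (2 * h)

for t in [0.0, 1.1, -2.3]:
    residual = dz(t) - (1j * z(t) + (1 - 1j) * t)
    assert abs(residual) < 1e-6       # z' = i z + (1-i) t holds
```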
# Strategy to calculate $\frac{d}{dx} \left(\frac{x^2-6x-9}{2x^2(x+3)^2}\right)$.
I am asked to calculate the following: $$\frac{d}{dx} \left(\frac{x^2-6x-9}{2x^2(x+3)^2}\right).$$ I simplify this a little by moving the constant multiplier out of the derivative: $$\left(\frac{1}{2}\right) \frac{d}{dx} \left(\frac{x^2-6x-9}{x^2(x+3)^2}\right)$$ But, using the quotient rule, the resulting expressions really get unwieldy: $$\frac{1}{2} \frac{(2x-6)(x^2(x+3)^2) -(x^2-6x-9)(2x(2x^2+9x+9))}{(x^2(x+3)^2)^2}$$
I came up with two approaches (3 maybe):
1. Split the terms up like this: $$\frac{1}{2}\left( \frac{(2x-6)(x^2(x+3)^2)}{(x^2(x+3)^2)^2} - \frac{(x^2-6x-9)(2x(2x^2+9x+9))}{(x^2(x+3)^2)^2} \right)$$ so that I can simplify the left term to $$\frac{2x-6}{x^2(x+3)^2}.$$ Taking this approach the right term still doesn't simplify nicely, and I struggle to combine the two terms into one fraction at the end.
2. The brute-force method. Just expand all the expressions in the numerator and denominator, and add/subtract monomials of the same order. This definitely works, but I feel like a stupid robot doing this.
3. The unofficial third-method. Grab a calculator, or computer-algebra-program and let it do the hard work.
Is there any strategy apart from my mentioned ones? Am I missing something in my first approach which would make the process go more smoothly? I am looking for general tips to tackle polynomial fractions such as this one, not a plain answer to this specific problem. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9748211582993982,
"lm_q1q2_score": 0.8570108625020557,
"lm_q2_score": 0.8791467595934563,
"openwebmath_perplexity": 383.7864381263352,
"openwebmath_score": 0.8510565161705017,
"tags": null,
"url": "https://math.stackexchange.com/questions/3707227/strategy-to-calculate-fracddx-left-fracx2-6x-92x2x32-right/3707238"
} |
• Frankly, I think that we waste a lot of time worrying about simplifying things like this. Try a couple of things. If you can't get something nice relatively quickly, throw a CAS at it and get on with your life. In the real world, you will encounter very few situations where you want to factor a polynomial and are also able to do so (e.g. almost every polynomial of degree $5$ or higher cannot be factored; in many real-world settings, the polynomials come pre-factored; etc). – Xander Henderson Jun 6 '20 at 2:51
• In examples like this (that is, if you want to differentiate a rational function), you might save some time using logarithmic differentiation. – Xander Henderson Jun 6 '20 at 2:52
• @XanderHenderson I agree that in modern times one should focus more on understanding and applying concepts than on pure computation. However, in this case I am happy to have been pointed to logarithmic differentiation, partial fractions and polynomial long division, which made me realize some knowledge gaps I was unaware of, which would not have happened had I used a CAS. – LeonTheProfessional Jun 6 '20 at 7:16
Logarithmic differentiation can also be used to avoid long quotient rules. Take the natural logarithm of both sides of the equation, then differentiate: $$\frac{y'}{y}=\frac{2x-6}{x^{2}-6x-9}-\frac{2}{x}-\frac{2}{x+3}$$ Then multiply both sides by $$y=\frac{x^{2}-6x-9}{2x^{2}(x+3)^{2}}$$ and simplify: $$y'=\frac{1}{x^{3}}-\frac{2}{(x+3)^{3}}=\frac{(x+3)^{3}-2x^{3}}{x^{3}\left(x+3\right)^{3}}$$
• Great!! Yet another approach! I will look into logarithmic differentiation a bit more. With these tools i don't need to feel like a stupid robot anymore ;) – LeonTheProfessional Jun 5 '20 at 17:41
• Yep. You should use logarithmic differentiation because it's more applicable than the other methods previously mentioned. Especially if there are square roots, cube roots, etc. of polynomials in the numerator/denominator. – Ty. Jun 5 '20 at 17:43
HINT
To begin with, notice that \begin{align*} x^{2} - 6x - 9 = 2x^{2} - (x^{2} + 6x + 9) = 2x^{2} - (x+3)^{2} \end{align*} Thus it results that \begin{align*} \frac{x^{2} - 6x - 9}{2x^{2}(x+3)^{2}} = \frac{2x^{2} - (x+3)^{2}}{2x^{2}(x+3)^{2}} = \frac{1}{(x+3)^{2}} - \frac{1}{2x^{2}} \end{align*}
In the general case, polynomial long division and the partial fraction method would suffice to solve this kind of problem.
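Both the decomposition and the derivative it yields can be spot-checked numerically; this sketch compares against a central-difference quotient at random sample points away from the singularities:

```python
import random

def f(x):
    return (x**2 - 6*x - 9) / (2 * x**2 * (x + 3)**2)

def f_decomposed(x):
    # the partial-fraction form 1/(x+3)^2 - 1/(2x^2)
    return 1 / (x + 3)**2 - 1 / (2 * x**2)

def f_prime(x):
    # term-by-term derivative of the decomposition
    return -2 / (x + 3)**3 + 1 / x**3

random.seed(0)
for _ in range(100):
    x = random.uniform(0.5, 10)       # avoid the poles at x = 0 and x = -3
    assert abs(f(x) - f_decomposed(x)) < 1e-12
    h = 1e-6
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - f_prime(x)) < 1e-4
```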
• Thank you! I think i stared too long at this problem, and have to take a step back to notice these patterns. I will examine the problem again (tomorrow) with your hints, and see if any more questions arise. If so, i will comment here. If no other answers of overwhelming enlightment appear, i will mark this answer as the accepted one. :) – LeonTheProfessional Jun 5 '20 at 17:35
Note that $$x^2-6x-9 = (x-3)^2 - 18$$. So after pulling out the factor of $$\frac 12$$, it suffices to compute $$\frac{d}{dx} \left(\frac{x-3}{x(x+3)}\right)^2$$ and $$\frac{d}{dx} \left(\frac{1}{x(x+3)}\right)^2.$$ These obviously only require finding the derivative of what's inside, since the derivative of $$(f(x))^2$$ is $$2f(x)f'(x)$$.
For a final simplification, note that $$\frac{1}{x(x+3)} = \frac{1}{3} \left(\frac 1x - \frac{1}{x+3}\right),$$ so you'll only ever need to take derivatives of $$\frac 1x$$ and $$\frac {1}{x+3}$$ to finish, since the $$x-3$$ in the numerator of the first fraction will simplify with these to give an integer plus multiples of these terms.
As a general rule, partial fractions will greatly simplify the work required in similar problems.
• I found this partial fraction using polynomial long division, yet it didn't appear to me, that any remainder appearing there would obviously disappear when taking the derivative. This really is a great hint! – LeonTheProfessional Jun 5 '20 at 17:39
• @hdighfan I think there is a slight typo - it should read that the derivative of $\left(f(x)\right)^2$ is $2f(x)f'(x)$. – Zubin Mukerjee Jun 6 '20 at 3:01
• I have edited it to fix, please revert if not wanted – Zubin Mukerjee Jun 6 '20 at 20:18
• Ah, of course. Thanks for the edit. – hdighfan Jun 6 '20 at 20:19
# Showing a function is not uniformly continuous
I am looking at uniform continuity (for my exam) at the moment, and I'm fine with showing that a function is uniformly continuous, but I'm having a bit more trouble showing that it is not uniformly continuous. For example:
show that $x^4$ is not uniformly continuous on $\mathbb{R}$, so my solution would be something like:
Assume that it is uniformly continuous then:
$$\forall\epsilon>0\ \exists\delta>0:\forall{x,y}\in\mathbb{R}\ \mbox{if } |x-y|<\delta \mbox{ then } |x^4-y^4|<\epsilon$$
Take $x=\frac{\delta}{2}+\frac{1}{\delta}$ and $y=\frac{1}{\delta}$ then we have that $|x-y|=|\frac{\delta}{2}+\frac{1}{\delta}-\frac{1}{\delta}|=|\frac{\delta}{2}|<\delta$ however $$|f(x)-f(y)|=|\frac{\delta^3}{8}+3\frac{\delta}{4}+\frac{3}{2\delta}|$$
Now if $\delta\leq 1$ then $|f(x)-f(y)|>\frac{3}{4}$ and if $\delta\geq 1$ then $|f(x)-f(y)|>\frac{3}{4}$, so no $\delta$ exists for $\epsilon < \frac{3}{4}$ and we have a contradiction.
So I was wondering if this was ok (I think it's fine) but also if this was the general way to go about showing that some function is not uniformly continuous? Or if there was any other ways of doing this that are not from the definition?
Thanks very much for any help
-
So, is this an exam question? – user21436 Apr 22 '12 at 12:17
No, I'm just practicing for my exam where questions like this (not this one though) will come it. This is from one of the past papers that are for revision – hmmmm Apr 22 '12 at 12:29
Just trying to ensure we are not taken for a ride. Hope you don't mind. :) – user21436 Apr 22 '12 at 12:33
@KannappanSampath no its fine- it annoys me when I see people posting assessment questions on forums :) – hmmmm Apr 22 '12 at 12:36
To show that it is not uniformly continuous on the whole line, there are two usual (and similar) ways to do it: | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9766692284751636,
"lm_q1q2_score": 0.8570045817287523,
"lm_q2_score": 0.8774767922879693,
"openwebmath_perplexity": 266.53601893974314,
"openwebmath_score": 0.9927499890327454,
"tags": null,
"url": "http://math.stackexchange.com/questions/135234/showing-a-function-is-not-uniformly-continuous"
} |
1. Show that for every $\delta > 0$ there exist $x$ and $y$ such that $|x-y|<\delta$ and $|f(x)-f(y)|$ is greater than some positive constant (usually this is even arbitrarily large).
2. Fix the $\varepsilon$ and show that for $|f(x)-f(y)|<\varepsilon$ we need $\delta = 0$.
First way:
Fix $\delta > 0$, set $y = x+\delta$ and check $$\lim_{x\to\infty}|x^4 - (x+\delta)^4| = \lim_{x\to\infty} 4x^3\delta + o(x^3) = +\infty.$$
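Numerically, for a fixed gap $\delta$ this blow-up is easy to observe (a quick Python sketch):

```python
# For a fixed gap delta, |x^4 - y^4| grows without bound as x moves out
delta = 1e-3
for x in [10.0, 100.0, 1000.0]:
    y = x + delta
    gap = abs(x**4 - y**4)            # ≈ 4 * x**3 * delta, plus positive terms
    assert gap >= 4 * x**3 * delta
    assert gap > 3                    # eventually exceeds any fixed epsilon
```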
Second way:
Fix $\epsilon > 0$, thus $$|x^4-y^4| < \epsilon$$ $$|(x-y)(x+y)(x^2+y^2)| < \epsilon$$ $$|x-y|\cdot|x+y|\cdot|x^2+y^2| < \epsilon$$ $$|x-y| < \frac{\epsilon}{|x+y|\cdot|x^2+y^2|}$$
but this describes a necessary condition, so $\delta$ has to be at least as small as the right side, i.e.
$$|x-y| < \delta \leq \frac{\epsilon}{|x+y|\cdot|x^2+y^2|}$$
so if either of $x$ or $y$ tends to infinity then $\delta$ tends to $0$.
Hope that helps ;-)
Edit: after explanation and calculation fixes, I don't disagree with your proof.
-
Thanks for the reply, I think that I use that we are considering all of $\mathbb{R}$ when i choose $x=\delta+\frac{1}{\delta}$ and $y=\frac{1}{\delta}$ as these would not be valid for small $\delta$ in bounded interval? – hmmmm Apr 22 '12 at 13:38
@hmmmm Ok, I misunderstood what you were saying there. If the calculations are alright, then your proof is fine. – dtldarek Apr 22 '12 at 13:44
I will comment on your solution after writing another approach. For any $x,y\in\mathbb{R}$ we have: \begin{align*} |x^{4}-y^{4}|=|(x^{2}-y^{2})(x^{2}+y^{2})|=|(x-y)(x+y)(x^{2}+y^{2})|=|x-y|\cdot |x+y|\cdot |x^{2}+y^{2}| \end{align*}
So what you can see is that even if you take arbitrarily close $x$ and $y$, you can make the distance between $x^{4}$ and $y^{4}$ as large as you want by taking them far enough away from zero. You can easily conclude from here that the function is not uniformly continuous, by contraposition for example.
Alright, then to your solution. If the calculations were correct, it would be fine. You could assume at first that such $\delta>0$ exists for $0<\varepsilon<3$ and conclude with a contradiction. However, I got slightly different calculations from yours. Using the above equation we see that: \begin{align*} |f(\frac{\delta}{2}+\frac{1}{\delta})-f(\frac{1}{\delta})|&=|(\frac{\delta}{2}+\frac{1}{\delta})^{4}-\frac{1}{\delta^{4}}|=|\frac{\delta}{2}(\frac{\delta}{2}+\frac{2}{\delta})((\frac{\delta}{2}+\frac{1}{\delta})^{2}+\frac{1}{\delta^{2}})| \\ &= |(\frac{\delta^{2}}{4}+1)(\frac{\delta^{2}}{4}+2\cdot \frac{\delta}{2}\cdot \frac{1}{\delta}+\frac{1}{\delta^{2}}+\frac{1}{\delta^{2}})| \\ &=|(\frac{\delta^{2}}{4}+1)(\frac{\delta^{2}}{4}+1+\frac{2}{\delta^{2}})| \\ &= |\frac{\delta^{4}}{16}+\frac{\delta^{2}}{4}+\frac{1}{2}+\frac{\delta^{2}}{4}+1+\frac{2}{\delta^{2}}| \\ &= |\frac{\delta^{4}}{16}+\frac{\delta^{2}}{2}+\frac{2}{\delta^{2}}+\frac{3}{2}|\\ &= \frac{\delta^{4}}{16}+\frac{\delta^{2}}{2}+\frac{2}{\delta^{2}}+\frac{3}{2} \end{align*} If you're able to find a lower bound for this (which is quite easy) as you did previously, then by choosing an epsilon smaller than that fixed number you may conclude as you did in your original post by contradiction.
-
Hey sorry about that, I edited it-hopefully it's right now? – hmmmm Apr 22 '12 at 13:39
I also edited my calculation now, which still differs a bit from your new one. Could you show the steps of how you got this answer for $|f(x)-f(y)|$? – Thomas E. Apr 22 '12 at 13:47
yeah I messed that up quite a bit sorry (I had the wrong power and the wrong delta's) – hmmmm Apr 22 '12 at 13:50
It should be $|(\frac{\delta}{2}+\frac{1}{\delta})^4-\frac{1}{\delta^4}|$ which would give $|\frac{\delta^4}{16}+\frac{\delta^2}{2}+\frac{2}{\delta}+\frac{3}{2}|$ I think I could conclude a similar thing from here? – hmmmm Apr 22 '12 at 13:52
Except, isn't the last $-\frac{1}{\delta^{4}}$ missing from there? Otherwise it looks close to mine. – Thomas E. Apr 22 '12 at 13:57
I think you should make this a little simpler (and for uniform continuity in general). All you need to do to show $f:X \to Y$ is not uniformly continuous on $X$ (let's suppose they're both subsets of $\Bbb R$) is give me a SINGLE epsilon such that, NO MATTER HOW SMALL delta is chosen, there will be $x$ and $y$ closer than delta for which the difference in function values exceeds epsilon. Thus for instance $|(N+\theta)^4- N^4| \ge 4\theta N^3$, so if you choose $x$ really big (with respect to $\delta$), $x=N+\theta\,\,\,\ \text{and}\,\,\, y = N,$ then if $0 < \delta/2 < \theta < \delta$ you have $|x-y| < \delta$, yet you still have the variable $N$ to play with to make the difference in function values as large as you like (in particular, the difference in function values can always be made bigger than $3$ regardless of how small $\delta$ is). Nevertheless, I think your proof is an accurate job!
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9766692284751636,
"lm_q1q2_score": 0.8570045817287523,
"lm_q2_score": 0.8774767922879693,
"openwebmath_perplexity": 266.53601893974314,
"openwebmath_score": 0.9927499890327454,
"tags": null,
"url": "http://math.stackexchange.com/questions/135234/showing-a-function-is-not-uniformly-continuous"
} |
# More than one equation for a given Trig Graph?
#### m3dicat3d
##### New member
Hi all.. another Trig question here...
Let's say I'm given a graph of a sinusoidal function and asked to find its equation, but I'm not told whether this is a sine or cosine function and I'm left to determine that myself.
I understand that evaluating where the graph intersects the y axis is the straight-forward, easiest approach. For instance, take this graph where the y interval is .5 and the x interval is pi/2
I can say that it's a sine graph easily by sight, but also b/c it intersects the y axis at y=0. And given no phase shift and no vertical shift, the equation is f(x) = sin [(2/3)x].
BUT, couldn't this also be f(x) = cos [{(2/3)x} - (pi/2)] since sin(x) and cos(x) are separated only by a phase shift of pi/2?
This is meant for my own edification and not to make this kind of exercise more confusing than need be. I'm simply interested in whether it is in fact mathematically accurate that you could have more than one equation (a sine or a cosine "version") for a given sinusoidal curve.
My calculator returns coincident curves when I graph both the sine and cosine "versions" of this given graph, but I know my calculator isn't really a mathematician either haha, so I thought I'd ask some real mathematicians instead
Thanks
#### Ackbach
##### Indicium Physicus
Staff member
Note that by the addition of angles identity, that
$$\cos(2x/3- \pi/2)= \cos(2x/3) \cos( \pi/2)+ \sin(2x/3) \sin( \pi/2) = \cos(2x/3) \cdot 0+ \sin(2x/3) \cdot 1= \sin(2x/3).$$
So yes, you can definitely have more than one representation of the same graph, as you have seen on your calculator.
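The identity is easy to confirm numerically at many sample points (a quick sketch):

```python
import math

# cos(2x/3 - pi/2) == sin(2x/3) at a spread of sample points
for k in range(-20, 21):
    x = 0.37 * k
    lhs = math.cos(2 * x / 3 - math.pi / 2)
    rhs = math.sin(2 * x / 3)
    assert abs(lhs - rhs) < 1e-12
```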
#### m3dicat3d
##### New member
Thank You!!! Thank You!!! Thank You!!! Thank You!!! Thank You!!!
Excellent answer, and as I am still reviewing my Trig, I hadn't even considered the identity perspective of it... I can't stress enough how much having that perspective helps me even more with this... | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9863631659211718,
"lm_q1q2_score": 0.8569787346726185,
"lm_q2_score": 0.8688267813328976,
"openwebmath_perplexity": 1098.1628063593155,
"openwebmath_score": 0.8644052147865295,
"tags": null,
"url": "https://mathhelpboards.com/threads/more-than-one-equation-for-a-given-trig-graph.3641/"
} |
Again, Thank you... man this place rocks!
#### MarkFL
Staff member
As sine and cosine are complementary or co-functions, this just means they are out of phase by $\displaystyle \frac{\pi}{2}$ radians, or 90°.
You may have noticed that a sine curve, if moved 1/4 period to the left, becomes a cosine curve, or conversely, a cosine curve moved 1/4 period to the right becomes a sine curve.
You are doing well to see this, it shows you are trying to understand it on a deeper level rather than just plugging into formulas. Both the sine function and the cosine function, and linear combinations of the two (with equal amplitudes) are called sinusoidal functions.
#### m3dicat3d
##### New member
Thanks MarkFL, I appreciate the encouraging words. I'm studying for my state certification exam to teach HS math here in TX, and I tutor HS students in the meantime. I'm no math genius by far, so when I ask some of my questions I sometimes feel they might be dumb (which no one here has made me feel, I'm glad to say). I'm trying to see those "nuances" in the math in case they might help my students, and again, I appreciate your words and the full-out decency of the community here. It's a great place to learn
#### Ackbach
##### Indicium Physicus
Staff member
Thank You!!! Thank You!!! Thank You!!! Thank You!!! Thank You!!!
Excellent answer, and as I am still reviewing my Trig, I hadn't even considered the identity perspective of it... I can't stress enough how much having that perspective helps me even more with this...
Again, Thank you... man this place rocks!
You're quite welcome! Glad to be of help.
# Thread: Roots of unity and the length from one root to the other roots
1. ## Roots of unity and the length from one root to the other roots
Hello,
I had an assignment that required me to solve for the roots of unity for various equations of the form $z^n - 1 = 0$. Then I was asked to represent the roots of unity for each equation on an Argand diagram in the form of a regular polygon.
I did all of that; however, I have a question:
is there a relationship between the power $n$ and the length from one root to the other roots?
When n = 3,
The roots are $-0.5 \pm 0.8660i$ and 1. The lengths from one root to the other roots are both 1.7321 (sqrt of 3)
When n = 4,
The roots are $\pm 1$ and $\pm i$. The lengths from one root to the other roots are sqrt 2, 2 and sqrt 2
I am wondering , for the equation $z^3 - 1 = 0$ , is there a formula relating the power of z to the length from one roots to the other roots?
Thanks.
2. ## Re: Roots of unity and the length from one root to the other roots
The nth roots of unity lie at the vertices of a polygon with n sides, each vertex at distance 1 from the center. Drawing a line from the center to each vertex divides the polygon into n isosceles triangles, each having two sides of length 1 and a vertex angle of 360/n degrees, and you want to find the length of the third side. If you draw a line from the vertex of such a triangle to the center of the base, you get two right triangles with hypotenuse length 1 and one angle of 180/n. The opposite side of that right triangle has length sin(180/n), so the side of the polygon is twice that:
2sin(180/n).
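The same result drops straight out of the roots-of-unity picture: the distance between adjacent nth roots of unity is $|e^{2\pi i/n} - 1| = 2\sin(\pi/n)$. A quick numerical check (a Python sketch, not from the thread):

```python
import cmath
import math

for n in range(3, 13):
    roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
    side = abs(roots[1] - roots[0])  # distance between adjacent roots
    assert math.isclose(side, 2 * math.sin(math.pi / n))
```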
3. ## Re: Roots of unity and the length from one root to the other roots
Hi, thank you very much for answering the question first.
However, I have already got the conjecture that the side length of the polygon would be 2 sin (180/n), but can this conjecture be proven algebraically, or by mathematical induction, to be true for any value of n?
I am just wondering: are there any other relationships between the lengths from one root to the other roots? (Let's say, if you draw a line from one root (any root) to the other roots, not only the adjacent roots)
n = 3, length is sqrt 3 and sqrt 3
n = 4, length is sqrt 2 , 2 and sqrt 2
n = 5, length is not an exact value.. but what i have got is 1.1756, 1.9021, 1.9021 and 1.1756
n = 6, length is 1, sqrt 3, 2, sqrt 3, 1
n = 7, length is 0.86, 1.56, 1.94, 1.94, 1.56, 0.86
seems like there's a pattern here, but I can't seem to figure it out... would help a lot if someone could tell me if there's a conjecture for this or not, thank you very much.
Source: http://mathhelpforum.com/pre-calculus/214181-roots-unity-length-one-root-other-roots.html
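For the record, there is a conjecture-worthy pattern in the lists above: the distance from the root $1$ to the root $e^{2\pi i k/n}$ is $2\sin(k\pi/n)$. For $n = 7$ this gives 0.8678, 1.5637, 1.9499, ..., matching the lengths listed. A quick check (Python sketch, my own addition):

```python
import cmath
import math

for n in range(3, 8):
    # distances from the root 1 to every other nth root of unity
    lengths = [abs(cmath.exp(2j * cmath.pi * k / n) - 1) for k in range(1, n)]
    expected = [2 * math.sin(math.pi * k / n) for k in range(1, n)]
    assert all(math.isclose(a, b) for a, b in zip(lengths, expected))
```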
# help
Which is larger, the blue area or the orange area?
Nov 14, 2019
#1
+105411
+2
Let the radius of the orange circle = r
So......its area = pi * r^2
We can find the side length,s, of the equilateral triangle thusly
tan (30°) = r / [(1/2)s]
1/ √3 = r / [ (1/2)s ]
(1/ √3) ( 1/2)s = r
s = (2√3) r = [ √12 ] r
And the area of the equilateral triangle is ( √3/ 4 ) ([√12]r)^2 = 3√3 r^2
So....the blue area inside the equilateral triangle =
[area of equilateral triangle - area of small circle ] / 3 =
[ 3√3 r^2 - pi r^2 ] / 3 = r^2 [ √3 - pi/3 ] (1)
The radius of the larger circle can be found as
√[ [√3r ]^2 + r^2 ] = √ [ 3r^2 + r^2 ] = 2r
So....the area of the larger circle = pi (2r)^2 = 4pi r^2
So the area between the side of the equilateral triangle and the larger circle is
[Area of larger circle - area of equilateral triangle ] / 3 =
[ 4pi r^2 - 3√3r^2 ] / 3 = r^2 [ (4/3)pi - √3] (2)
So the sum of (1) and (2) is the sum of the blue areas
r^2 [ √3 - pi/3 + (4/3) pi - √3 ] = pi r^2
So.....the orange and blue areas are equal !!!!
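The bookkeeping above is easy to sanity-check numerically: with incircle radius $r$, the two blue pieces are $r^2(\sqrt3 - \pi/3)$ and $r^2(4\pi/3 - \sqrt3)$, and their sum should equal the orange area $\pi r^2$ (a Python sketch, my own addition):

```python
import math

r = 1.0  # incircle (orange) radius; everything scales with r**2
blue_inside_triangle = r**2 * (math.sqrt(3) - math.pi / 3)
blue_outside_triangle = r**2 * (4 * math.pi / 3 - math.sqrt(3))
orange = math.pi * r**2
assert math.isclose(blue_inside_triangle + blue_outside_triangle, orange)
```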
Nov 14, 2019
#2
+70
+2
Here is a different approach.The arc AC has measure $$\frac{1}{3}\cdot2\pi$$ and the angle AOB has measure half of the arc or $$\frac {\pi}{3}$$. If the radius of the larger circle is $$R$$, then the measure of OB, the radius of the smaller circle, is $$Rcos(\frac{\pi}{3})$$and so the orange region has area $$\pi(Rcos(\frac{\pi}{3}))^2=\frac{1}{4}\pi R^2$$ . The blue region, however, has area equal to $$\frac{1}{3}$$of the difference between area of the larger circle and the area of the smaller circle, i.e. $$\frac{1}{3}(\pi R^2-\frac{1}{4}\pi R^2) =\frac{1}{4}\pi R^2$$. So the two regions have the same area, each being $$\frac{1}{4}$$ of the area of the larger circle.
Nov 14, 2019
#3
+105411
+1
Thanks, Gadfly....I like that approach !!!
Nov 14, 2019
#4
+23575
+2
Which is larger, the blue area or the orange area?
$$\begin{array}{|rcll|} \hline {\color{orange}\text{orange}} +3\times {\color{blue}\text{blue}} &=& \pi r_{\text{circumcircle}}^2 \quad | \quad : {\color{orange}\text{orange}} \\ 1+3\times \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \dfrac{ \pi r_{\text{circumcircle}}^2 } { {\color{orange}\text{orange}} } \quad | \quad {\color{orange}\text{orange}} = \pi r_{\text{incircle}}^2 \\ 1+3\times \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \dfrac{ \pi r_{\text{circumcircle}}^2 } { \pi r_{\text{incircle}}^2 } \\ 1+3\times \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \left( \dfrac{ r_{\text{circumcircle}} } { r_{\text{incircle}} } \right)^2 \\ 3\times \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \left( \dfrac{ r_{\text{circumcircle}} } { r_{\text{incircle}} } \right)^2 -1 \\ \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \dfrac{1}{3}\left( \left( \dfrac{ r_{\text{circumcircle}} } { r_{\text{incircle}} } \right)^2 -1 \right) \\ \\ && \boxed{\text{here, if triangle is equilateral: }\\ \mathbf{2\times r_{\text{incircle}} = r_{\text{circumcircle}}!!!} } \\\\ \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \dfrac{1}{3}\left( \left( \dfrac{ 2\times r_{\text{incircle}} } { r_{\text{incircle}} } \right)^2 -1 \right)\\ \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \dfrac{1}{3}\times(2^2 -1) \\ \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \dfrac{1}{3}\times(3) \\ \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& 1 \\ \mathbf{ {\color{blue}\text{blue}} } &=& \mathbf{ {\color{orange}\text{orange}} } \\ \hline \end{array}$$
Nov 15, 2019
edited by heureka Nov 15, 2019
Source: https://web2.0calc.com/questions/help_4799
A while ago @davidphys1 asked why nobody had made animations of the shunting yard algorithm with cutesy trains.
There is no surer way to summon me!
I've spent some of my spare time over the bank holidays making exactly that: somethingorotherwhatever.com/s
· · Web · · ·
The shunting yard algorithm neatly solves the problem of translating a mathematical expression written in infix notation (operators go between the numbers/letters) to postfix notation (operators go after the things they act on).
The core problem is that you need to work out what just what an operator applies to: with the order of operations, it might be just one number, or it might a large sub-expression.
The algorithm solves this by holding operators on a separate stack until they're needed
Here are some animations to illustrate. In the first, the operations happen left-to-right, so they appear in the same order in the output as in the input.
In the second, the addition must happen after the multiplication, so it's held back.
In the third, brackets ensure that the addition happens first.
The last wrinkle for standard arithmetic is that exponentiation is right-associative: while for the other operations you work left-to-right:
1 − 2 − 3 = (1 − 2) − 3,
the order goes the other way for exponentiation:
1 ^ 2 ^ 3 = 1 ^ (2 ^ 3)
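The behaviour described in these posts — left-to-right for most operators, right-associative exponentiation, brackets forcing early evaluation — fits in a few lines. A minimal sketch of the shunting yard algorithm (names and precedence table are my own choices, not taken from the animation's source):

```python
PREC = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
RIGHT_ASSOC = {'^'}

def to_postfix(tokens):
    """Convert a list of infix tokens to postfix (reverse Polish) order."""
    out, ops = [], []
    for t in tokens:
        if t in PREC:
            # pop operators that must be applied before t
            while (ops and ops[-1] != '(' and
                   (PREC[ops[-1]] > PREC[t] or
                    (PREC[ops[-1]] == PREC[t] and t not in RIGHT_ASSOC))):
                out.append(ops.pop())
            ops.append(t)
        elif t == '(':
            ops.append(t)
        elif t == ')':
            while ops[-1] != '(':
                out.append(ops.pop())
            ops.pop()  # discard the '('
        else:  # operand
            out.append(t)
    while ops:
        out.append(ops.pop())
    return out

assert to_postfix('1 - 2 - 3'.split()) == ['1', '2', '-', '3', '-']  # left-assoc
assert to_postfix('1 ^ 2 ^ 3'.split()) == ['1', '2', '3', '^', '^']  # right-assoc
assert to_postfix('( 1 + 2 ) * 3'.split()) == ['1', '2', '+', '3', '*']
```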
@christianp @davidphys1 Wow that's a very cute animation!
@christianp
Doesn't that depend on the domain?
I mean, some rules aren't valid anymore for, say, Quaternions or Octaves…
@RyunoKi *leans heavily on the word "standard"*
@RyunoKi but no, I think that if I wrote an expression $$a \times b \times c$$, where a, b and c are octonions and so multiplication isn't associative, I'd expect you to interpret it as $$a \times b \times c$$, by convention.
I might include some brackets, to avoid relying on convention.
@christianp
So… the algorithm isn't considering this constraint. Fine. Space for further research :)
@RyunoKi no, it is. Or I don't understand what you mean.
@RyunoKi just noticed I missed the brackets in my second-last tweet! I had trouble LaTeXing, then didn't check how it looked!
I'd expect you to interpret $$a \times b \times c$$ as $$(a \times b) \times c$$
@christianp
I even switched to browser view to check whether something got lost in transmission 😅
@christianp
„The core problem is that you need to work out what just what an operator applies to: with the order of operations, it might be just one number, or it might a large sub-expression.
The algorithm solves this by holding operators on a separate stack until they're needed“
Is that assuming „standard" arithmetic like associative multiplication? Or the lack thereof?
Like, once „computer does that“ many people rely on it without questioning the result.
@RyunoKi right, the algorithm says that multiplication is left-associative. For real numbers, it doesn't matter.
@christianp
Thanks. Important fact in certain circumstances.
@christianp
Especially the order and arguments can change depending on how you put the brackets.
Source: https://mathstodon.xyz/@christianp/108424166828836198
# Is $\frac{200!}{(10!)^{20} \cdot 19!}$ an integer or not?
A friend of mine asked me to prove that $$\frac{200!}{(10!)^{20}}$$ is an integer
I used a basic example in which I assumed that there are $200$ objects placed in $20$ boxes (which means that effectively there are $10$ objects in each box). One more condition that I adopted was that the boxes are distinguishable but the items within each box are not. Now the number of permutations possible for such an arrangement is: $$\frac{200!}{\underbrace{10! \cdot 10! \cdot 10!\cdots 10!}_{\text{20 times}}}$$ $$\Rightarrow \frac{200!}{(10!) ^{20}}$$
Since these are just ways of arranging, we can be pretty sure that this number is an integer.
Then he made the problem more complex by adding a $19!$ in the denominator, thus making the problem: Is $$\frac{200!}{(10!)^{20} \cdot 19!}$$ an integer or not?
The $19!$ in the denominator seemed to be pretty odd and hence I couldn't find any intuitive way to determine the thing. Can anybody please help me with the problem?
• There might be some clever way, but this is where I would start counting primes. Are there enough $2$'s in the numerator? $3$'s? – Arthur Aug 15 '17 at 7:23
• Use the fact that every set of $k$ integers will have one integer divisible by $k$. Show that that means $(m+1)(m+2).... (m+k)$ will be divisible by $k!$. And that means $(1*2....10)(11*... 20)...... (191*....200)$ is divisible by $10!*10!*....10!$. – fleablood Aug 15 '17 at 7:26
• That's an interesting friend you have there. – uniquesolution Aug 15 '17 at 7:30
• To eliminate 19! Not that all k divide 10*k. Then each 10k+1 to 10k+9 is divisible by 9! And 10k/k = 10 so it is divisible by 10 as well. – fleablood Aug 15 '17 at 7:34
• An interesting question (at least I think it would be) is what is the largest integer $k$ so that $k!*10^{20}$ divides $200!$. – fleablood Aug 15 '17 at 16:27
You assumed the boxes were distinguishable, leading to $\frac{200!}{(10!)^{20}}$ ways to fill the boxes. If you make them indistinguishable, you merge the $20!$ ways of reordering the boxes into one, so the previous answer overcounts each way of filling indistinguishable boxes by a factor of $20!$. Therefore you are left with $\frac{200!}{(10!)^{20}}/20!$ ways to fill 20 indistinguishable boxes, which then must be an integer. After multiplying by $20$ it is of course still an integer.
We know that $\dfrac{(mn)!}{n!(m!)^n}$ is an integer for $m,n \in \Bbb N$ $^{(*)}$ . Let $n = 20$ and $m = 10$, then $\dfrac{(200)!}{20!(10!)^{20}}$ is an integer.
Multiply by $20$, $\dfrac{(200)!}{19!(10!)^{20}}$ is an integer.
Using induction, this answer says that $$\frac{(mn)!}{(m!)^nn!}=\prod_{k=1}^n\binom{mk-1}{m-1}$$ Plug in $m=10$ and $n=20$ to get $$\frac{200!}{10!^{20}\,20!}=\prod_{k=1}^{20}\binom{10k-1}{9}$$ Multiply by $20$ to get $$\frac{200!}{10!^{20}\,19!}=20\,\prod_{k=1}^{20}\binom{10k-1}{9}$$
Another Approach
Note that \begin{align} \binom{kn}{n} &=\frac{(kn-n+1)(kn-n+2)\cdots(kn-1)\,kn}{1\cdot2\cdots(n-1)\,n}\\ &=\frac{(kn-n+1)(kn-n+2)\cdots(kn-1)\,k}{1\cdot2\cdots(n-1)}\\ &=\binom{kn-1}{n-1}\,k \end{align} Therefore, since we can write a multinomial as a product of binomials, \begin{align} \frac{(mn)!}{n!^m} &=\prod_{k=1}^m\binom{kn}{n}\\ &=\prod_{k=1}^m\binom{kn-1}{n-1}\,k\\ &=m!\,\prod_{k=1}^m\binom{kn-1}{n-1} \end{align} and so $$\frac{(mn)!}{n!^m\,m!}=\prod_{k=1}^m\binom{kn-1}{n-1}$$ Plug in $m=20$ and $n=10$ and multiply by $20$ to get $$\frac{200!}{10!^{20}\,19!}=20\,\prod_{k=1}^{20}\binom{10k-1}{9}$$
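Both forms of the identity are easy to spot-check with exact integer arithmetic (a Python sketch using `math.comb`, my own addition):

```python
from math import comb, factorial

for m in range(1, 8):
    for n in range(1, 8):
        # (mn)! / (n!^m * m!) should be an integer equal to prod C(kn-1, n-1)
        assert factorial(m * n) % (factorial(n) ** m * factorial(m)) == 0
        lhs = factorial(m * n) // (factorial(n) ** m * factorial(m))
        rhs = 1
        for k in range(1, m + 1):
            rhs *= comb(k * n - 1, n - 1)
        assert lhs == rhs

# the case in the question: 200! / (10!^20 * 19!) is indeed an integer
assert factorial(200) % (factorial(10) ** 20 * factorial(19)) == 0
```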
• But this is my answer ... – user8277998 Aug 16 '17 at 8:49
• @123: I see... I cited an answer that I wrote and didn't see that you had parenthetically cited the question to which that was an answer. However, without the connection given by the citations, the answers do not look the same. If this bothers you, I will delete my answer. – robjohn Aug 16 '17 at 13:01
• @123: I have added another proof of the same identity to differentiate our answers. I will still delete this answer if you think they are too close. – robjohn Aug 16 '17 at 13:28
• No, I have no problem with your answer, you can keep it as it is. – user8277998 Aug 16 '17 at 14:36
A long version: $$\frac{200!}{10!^{20} \cdot 19!}=\frac{30\cdot31\cdot .. \cdot200}{10!^{19}}\cdot \frac{29!}{10!\cdot(29-10)!}=...$$ which is $$...=\frac{30\cdot31\cdot .. \cdot200}{10!^{19}}\cdot \binom{29}{10}=\\ \frac{\color{red}{30} ..\color{red}{40} ..\color{red}{50} ..\color{red}{60}..\color{red}{70}..\color{red}{80}..\color{red}{90}..\color{red}{10^2}..\color{red}{110}..\color{red}{120}..\color{red}{130}..\color{red}{140}..\color{red}{150}..\color{red}{160}..\color{red}{170}..\color{red}{180}..\color{red}{190}..\color{red}{2\cdot10^{2}}}{10!^{19}}\cdot \binom{29}{10}=...$$ $20$ numbers divisible by 10, or $$3\cdot4\cdot5\cdot..\cdot9\cdot11\cdot..\cdot19\cdot20\cdot\frac{31..39\cdot41..49\cdot51..59\cdot..\cdot191..199}{9!^{19}}\cdot \binom{29}{10}=\\ 10\cdot\frac{2..9\cdot11..19\cdot31..39\cdot41..49\cdot51..59\cdot..\cdot191..199}{9!^{19}}\cdot \binom{29}{10}=...$$ cardinality of $\{31,41,51,61,71,81,91,101,111,121,131,141,151,161,171,181,191\}$ is 17 $$...=10\cdot \frac{1..9}{9!}\cdot\frac{11..19}{9!}\cdot\frac{31..39}{9!}\cdot..\cdot\frac{191..199}{9!}\cdot \binom{29}{10}=\\ 10\cdot \binom{9}{9} \cdot \binom{19}{9} \cdot \binom{39}{9}\cdot .. \cdot \binom{199}{9} \cdot \binom{29}{10}$$
• I would appreciate the down-voters to at least comment ... – rtybase Aug 15 '17 at 8:29
Consider $V=(10k+1)*....*(10k+9)$.
By your reasoning, ${10k+9 \choose 9}=(10k+1)*....*(10k+9)/9!$ is an integer.
And the last factor of the block $(10k+1)*....*(10(k+1))$, namely $10(k+1)$, is divisible by $10\cdot(k+1)$.
So $(10k+1)*....*(10 (k+1))$ is divisible by $9!*10*(k+1)=10!*(k+1)$.
So $200!$ is divisible by $(10!*1)*(10!*2)*(10!*3)*.....*(10!*20)=(10!)^{20}*20!$, and hence by $(10!)^{20}*19!$
• Don't you mean it is divisible by $10!*(k+1)$ ? – Jaap Scherphuis Aug 15 '17 at 7:55
• Yeah, I guess I did. – fleablood Aug 15 '17 at 16:22
I computed the answer just for fun using Java, and it's indeed an integer!
41355508127520659545494261323391337886154686759988983912363570790033502473625361601944917427369977161391866491251801111884812210789772970682172860398969828337097889527312353089859289462934116034461288917394623420753412096000000
import java.math.BigDecimal;
public class JustForFun {
    public static void main(String[] args) {
        BigDecimal thFact = BigDecimal.ONE; // running factorial; ends as 200!
        BigDecimal tenFact = null, ntFact = null, tenFactPow20;
        for (int i = 1; i <= 200; i++) {
            thFact = thFact.multiply(new BigDecimal(i));
            /* save 10! and 19! along the way */
            if (i == 10)
                tenFact = thFact;
            if (i == 19)
                ntFact = thFact;
        }
        tenFactPow20 = tenFact.pow(20).multiply(ntFact); // (10!)^20 * 19!
        /* the quotient is exact, so divide() needs no rounding mode */
        System.out.println(thFact.divide(tenFactPow20));
    }
}
Since I am pushing 70 yrs old, it seems appropriate to dinosaur-excerpt "Elementary Number Theory" 1938 (Uspensky and Heaslett).
For real # $$r$$, let $$\lfloor r\rfloor \equiv$$ the floor of $$r$$.
Let $$p$$ be any prime #.
Let $$V_p(n) : n ~\in ~\mathbb{Z^+} ~\equiv~$$ the largest exponent $$\alpha$$ such that $$p^{\alpha} | n$$.
That is, if $$\alpha = V_p(n),$$ then $$p^{(\alpha + 1)} \not | ~n.$$
From Uspensky and Heaslett, $$V_p(n!) = \left\lfloor\frac{n}{p^1}\right\rfloor ~+~ \left\lfloor\frac{n}{p^2}\right\rfloor ~+~ \left\lfloor\frac{n}{p^3}\right\rfloor ~+~ \left\lfloor\frac{n}{p^4}\right\rfloor \cdots$$
Clearly, given two positive integers $$A,B$$, $$~\frac{A}{B}$$ will be an integer $$\iff$$
for every prime # $$p$$ that occurs in the prime factorization of $$B$$,
$$V_p(B) \leq V_p(A).$$
It is immediate that, given the OP's original question, the only prime #'s that need to be checked are those prime #'s that are $$\leq 19.$$ Further, you can see at a glance that the prime #'s 11, 13, 17, and 19 cannot pose a problem.
Therefore, the problem reduces to manually applying Uspensky and Heaslett's formula to the numerator and denominator of the OP's original query with respect to the prime #'s 2, 3, 5, and 7.
Empirically, they each check out okay. Therefore, the fraction is an integer.
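The "empirical" check is easy to automate: compute $V_p$ of the numerator and denominator with Uspensky and Heaslett's formula and compare (a Python sketch, my own addition):

```python
def v_p_factorial(n, p):
    """Largest e with p**e dividing n!, via the sum of floor(n / p**k)."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

for p in (2, 3, 5, 7, 11, 13, 17, 19):
    numerator = v_p_factorial(200, p)
    denominator = 20 * v_p_factorial(10, p) + v_p_factorial(19, p)
    assert numerator >= denominator  # p poses no obstruction
```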
In your face $$21^{\text{st}}$$ century!
Source: https://math.stackexchange.com/questions/2394083/is-frac2001020-cdot-19-an-integer-or-not?noredirect=1
# Coprimality Relation is not Antisymmetric
## Theorem
Consider the coprimality relation on the set of integers:
$\forall x, y \in \Z: x \perp y \iff \gcd \set {x, y} = 1$
where $\gcd \set {x, y}$ denotes the greatest common divisor of $x$ and $y$.
Then:
$\perp$ is not antisymmetric.
## Proof
We have:
$\gcd \set {3, 5} = 1 = \gcd \set {5, 3}$
and so:
$3 \perp 5$ and $5 \perp 3$
However, it is not the case that $3 = 5$.
The result follows by definition of antisymmetric relation.
$\blacksquare$
Source: https://proofwiki.org/wiki/Coprimality_Relation_is_not_Antisymmetric
# Limit of powers of $3\times3$ matrix
Consider the matrix
$$A = \begin{bmatrix} \frac{1}{2} &\frac{1}{2} & 0\\ 0& \frac{3}{4} & \frac{1}{4}\\ 0& \frac{1}{4} & \frac{3}{4} \end{bmatrix}$$
What is $$\lim_{n→\infty}A^n$$ ?
A)$$\begin{bmatrix} 0 & 0 & 0\\ 0& 0 & 0\\ 0 & 0 & 0 \end{bmatrix}$$ B)$$\begin{bmatrix} \frac{1}{4} &\frac{1}{2} & \frac{1}{2}\\ \frac{1}{4}& \frac{1}{2} & \frac{1}{2}\\ \frac{1}{4}& \frac{1}{2} & \frac{1}{2}\end{bmatrix}$$ C)$$\begin{bmatrix} \frac{1}{2} &\frac{1}{4} & \frac{1}{4}\\ \frac{1}{2}& \frac{1}{4} & \frac{1}{4}\\ \frac{1}{2}& \frac{1}{4} & \frac{1}{4}\end{bmatrix}$$ D)$$\begin{bmatrix} 0 &\frac{1}{2} & \frac{1}{2}\\ 0 & \frac{1}{2} & \frac{1}{2}\\ 0 & \frac{1}{2} & \frac{1}{2}\end{bmatrix}$$ E) The limit exists, but it is none of the above
The given answer is D). How does one arrive at this result?
• Did you try doing for small values of $n$. Take $n=2, 3,4$ and post your observations (if any) as well. – Vizag May 24 '19 at 11:12
By this question, we know that
$$$$A^n= \begin{pmatrix} 2^{-n} & n\cdot 2^{-n-1} - 2^{-n-1} + \frac12 & {1-\frac{n+1}{2^n}\over2}\\ 0 & {2^{-n}+1\over2} & {1-2^{-n}\over2} \\ 0 & {1-2^{-n}\over2} & {2^{-n}+1\over2} \end{pmatrix}.$$$$
It is thus clear that $$\lim_{n\to\infty} A^n = \begin{pmatrix} 0 &\frac{1}{2} & \frac{1}{2}\\ 0 & \frac{1}{2} & \frac{1}{2}\\ 0 & \frac{1}{2} & \frac{1}{2}\end{pmatrix}$$.
• $2^{-n}$ tends to 0, when $n->\infty$.....right?? – Srestha May 25 '19 at 19:27
• @Srestha that is correct – Maximilian Janisch May 25 '19 at 19:40
If you are in state $$1$$, you have the same probability of staying there or of passing to $$2$$, but there is no way to get back once you leave. Thus you eventually drift out of $$1$$.
States $$2$$ and $$3$$ are symmetric: in the long run they will tend to be equally populated, independently of the starting conditions.
Therefore, even starting from $$1$$, you will in the long run be split evenly between $$2$$ and $$3$$.
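This qualitative argument agrees with a direct numerical computation (a NumPy sketch, my own addition; by $n = 200$ the powers have numerically converged):

```python
import numpy as np

A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.75, 0.25],
              [0.0, 0.25, 0.75]])
limit = np.linalg.matrix_power(A, 200)
expected = np.array([[0.0, 0.5, 0.5]] * 3)
assert np.allclose(limit, expected)  # option D
```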
It’s often worth examining a matrix for obvious eigenvectors and eigenvalues, especially in artificial exercises, before plunging into computing and solving the characteristic equation. From the first column of $$A$$, we see that $$(1,0,0)^T$$ is an eigenvector with eigenvalue $$\frac12$$. The rows of $$A$$ all sum to $$1$$, so $$(1,1,1)$$ is an eigenvector with eigenvalue $$1$$. The remaining eigenvalue $$\frac12$$ can be found by examining the trace.
$$A$$ is therefore similar to a matrix of the form $$J=D+N$$, where $$D=\operatorname{diag}\left(1,\frac12,\frac12\right)$$ and $$N$$ is nilpotent of order no greater than 2. (If $$A$$ is diagonalizable, then $$N=0$$.) $$D$$ and $$N$$ commute, so expanding via the Binomial Theorem, $$(D+N)^n=D^n+nND^{n-1}$$. In the limit, $$D^n=\operatorname{diag}(1,0,0)$$ and the first column of $$N$$ is zero, so the second term vanishes. Thus, if $$A=PJP^{-1}$$, then $$\lim_{n\to\infty}A^n=P\operatorname{diag}(1,0,0)P^{-1}$$, but the right-hand side is just the projector onto the eigenspace of $$1$$. Informally, repeatedly multiplying a vector by $$A$$ leaves that vector’s component in the direction of $$(1,1,1)^T$$ fixed, while the remainder of the vector eventually dwindles away to nothing.
Since $$1$$ is a simple eigenvalue, there’s a shortcut for computing this projector that doesn’t require computing the change-of-basis matrix $$P$$: if $$\mathbf u^T$$ is a left eigenvector of $$1$$ and $$\mathbf v$$ a right eigenvector, then the projector onto the right eigenspace of $$1$$ is $${\mathbf v\mathbf u^T\over\mathbf u^T\mathbf v}.$$ (This formula is related to the fact that left and right eigenvectors with different eigenvalues are orthogonal.) We already have a right eigenvector, and a left eigenvector is easily found by inspection: the last two columns both sum to $$1$$, so $$(0,1,1)$$ is a left eigenvector of $$1$$. This gives us $$\lim_{n\to\infty}A^n = \frac12\begin{bmatrix}1\\1\\1\end{bmatrix}\begin{bmatrix}0&1&1\end{bmatrix} = \begin{bmatrix}0&\frac12&\frac12\\0&\frac12&\frac12\\0&\frac12&\frac12\end{bmatrix}.$$
Source: https://math.stackexchange.com/questions/3238042/limit-of-powers-of-3-times3-matrix/3239827
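The shortcut can be verified directly: $P = \mathbf v\mathbf u^T/(\mathbf u^T\mathbf v)$ satisfies $AP = PA = P$ and $P^2 = P$, and matches high powers of $A$ (a NumPy sketch, my own addition):

```python
import numpy as np

A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.75, 0.25],
              [0.0, 0.25, 0.75]])
u = np.array([0.0, 1.0, 1.0])   # left eigenvector for eigenvalue 1
v = np.array([1.0, 1.0, 1.0])   # right eigenvector for eigenvalue 1
P = np.outer(v, u) / (u @ v)    # projector onto the eigenspace of 1
assert np.allclose(P, [[0.0, 0.5, 0.5]] * 3)
assert np.allclose(A @ P, P) and np.allclose(P @ A, P)
assert np.allclose(P @ P, P)
assert np.allclose(np.linalg.matrix_power(A, 200), P)
```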
How to draw Fractal images of iteration functions on the Riemann sphere?
Prof. McClure, in the work "M. McClure, Newton's method for complex polynomials. A preprint version of a “Mathematical graphics” column from Mathematica in Education and Research, pp. 1–15 (2006)", discusses how Mathematica can be applied to iteration functions for obtaining the basins of attraction (or their fractal images). Below, I provide his code for the fractal image of the polynomial $p(z)=z^3-1$:
p[z_] := z^3 - 1;
theRoots = z /. NSolve[p[z] == 0, z]
cp = Compile[{{z, _Complex}}, Evaluate[p[z]]];
n = Compile[{{z, _Complex}}, Evaluate[Simplify[z - p[z]/p'[z]]]];
bail = 150;
orbitData = Table[
NestWhileList[n, x + I y, Abs[cp[#]] > 0.01 &, 1, bail],
{y, -1, 1, 0.01}, {x, -1, 1, 0.01}
];
numRoots = Length[Union[theRoots]];
sameRootFunc = Compile[{{z, _Complex}}, Evaluate[Abs[3 p[z]/p'[z]]]];
whichRoot[orbit_] :=
Module[{i, z},
z = Last[orbit]; i = 1;
Scan[If[Abs[z - #] < sameRootFunc[z], Return[i], i++] &, theRoots];
If[i <= numRoots, {i, Length[orbit]}, None]
];
rootData = Map[whichRoot, orbitData, {2}];
colorList = {{cc, 0, 0}, {cc, cc, 0}, {0, 0, cc}};
cols = rootData /. {
{k_Integer, l_Integer} :> (colorList[[k]] /. cc -> (1 - l/(bail + 1))^8),
None -> {0, 0, 0}
};
Graphics[{Raster[cols]}]
My main question is here. He nicely obtained the fractal images on the complex plane, while it would be an interesting challenge to obtain these images on the Riemann sphere, e.g.
It seems the complex plane in this case has been replaced by a sphere, but how? I will be thankful if someone could revise the code given above for obtaining such beautiful fractal images on the Riemann sphere. Any tips and tricks will be fully appreciated as well.
-
Would you care to share where your example spherical projection came from? – Mark McClure Dec 24 '12 at 13:04
Source: http://mathematica.stackexchange.com/questions/15047/how-to-draw-fractal-images-of-iteration-functions-on-the-riemann-sphere?answertab=votes
As the other answers have shown, it's fairly easy to map an image onto a parametrized surface using textures. It can be a bit tricky, though, getting the image to mesh well with the transformation. J.M. hit on the crucial issue, namely that we compute the image using points that map to the sphere with minimal distortion. This answer is largely an expansion on his, although there are some differences and other ideas as well.
First, the article that Fazlollah refers to is some years old now, and the code can be improved in light of the many changes since V5, so let's start by showing how to generate regular Newton iteration images for general polynomials. Given a polynomial function $f(z)$, the following code computes the corresponding Newton's method iteration function $n$. It then defines the command limitInfo that iterates $n$ up to $50$ times from a starting point $z_0$ terminating when $|f(z)|$ is small and returning the last iterate and the number of iterates required for $|f(z)|$ to get small. It's compiled, listable and set to run in parallel, so it should be pretty fast. In addition to the function f, there are two numeric parameters to set, bail and r.
f = Function[z, z^3 - 1]; (* A very standard example *)
n = Function[z, Evaluate[Simplify[z - f[z]/f'[z]]]];
bail = 50;
(* bail is the number of iterates before bailing. *)
(* Doesn't have to be particularly large, *)
(* if there are only simple roots. *)
r = 0.01;
(* We assume that if |f[z]| < r, then z has *)
(* converged to one of the roots. *)
limitInfo = With[{bail = bail, r = r, f = f, n = n},
Compile[{{z0, _Complex}},
Module[{z, cnt},
cnt = 0; z = z0;
While[Abs[f[z]] > r && cnt < bail,
z = n[z];
cnt = cnt + 1
];
{z, cnt}],
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
]];
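For readers who want to see the escape-time logic outside Mathematica, here is a minimal Python sketch of the same loop (not part of the original answer; the function and parameter names simply mirror limitInfo, bail, and r above):

```python
# Minimal Python sketch of the escape-time loop in limitInfo above:
# iterate the Newton map n(z) = z - f(z)/f'(z) from z0 until |f(z)| < r
# or the bail-out count is reached, returning the last iterate and the count.

def limit_info(z0, bail=50, r=0.01):
    f = lambda z: z**3 - 1                     # the same standard example
    n = lambda z: z - (z**3 - 1) / (3 * z**2)  # Newton map, simplified by hand
    z, cnt = z0, 0
    while abs(f(z)) > r and cnt < bail:
        z = n(z)
        cnt += 1
    return z, cnt

z, cnt = limit_info(1.0 + 0.5j)  # converges quickly to the root 1
```

The pair returned plays the same role as the `{z, cnt}` produced by the compiled function.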
Since the function is listable and runs in parallel, we can simply apply it to a table of data in one fell swoop.
step = 4/801; (* The denominator is essentially the resolution. *)
limitData = limitInfo[
Table[x + I*y, {y, 2, -2, -step}, {x, -2, 2, step}]]; // AbsoluteTiming
(* Out: {1.492716, Null} *)
Each element is a pair that indicates the limiting behavior and how long it took to get there.
limitData[[1, 1]]
(* Out: {-0.499835 + 0.866044 I, 5. + 0. I} *)
I guess we need a function that takes something like that and turns it into a color.
roots = z /. NSolve[f[z] == 0, z];
preColors = List @@@ Table[ColorData[61, k], {k, 1, Length[roots]}];
preColors = Append[preColors, {0.0, 0.0, 0.0}];
color = With[{bail = bail, roots = roots, preColors = preColors},
Compile[{{z, _Complex}, {cnt, _Complex}},
Module[{arg, time, i},
arg = Arg[z];
time = Abs[cnt];
i = 1;
Scan[If[Abs[z - #] < 0.1, Return[i], i++] &, roots];
Abs[preColors[[i]]*(cnt/bail)^(0.2)]
(* The exponent 0.2 adjusts the brightness of the image. *)]]
];
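The two ingredients of color are nearest-root matching and brightness scaling by the iteration count. As a hedged illustration (plain Python, function names of my own invention), the same rule can be written as:

```python
# Sketch of the coloring rule above: find the nearest root (falling back to
# a final "black" palette entry when no root is close), then dim the base
# color by (cnt/bail)^0.2 so slower convergence looks darker.
def shade(z, cnt, roots, palette, bail=50, tol=0.1):
    i = len(palette) - 1                # default: the appended black entry
    for k, root in enumerate(roots):
        if abs(z - root) < tol:
            i = k
            break
    s = (cnt / bail) ** 0.2
    return tuple(c * s for c in palette[i])

roots = [1, -0.5 + 0.866j, -0.5 - 0.866j]          # roots of z^3 - 1
palette = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)]
fast_red = shade(1.001 + 0j, 5, roots, palette)    # near root 1, fast
no_root  = shade(10 + 0j, 50, roots, palette)      # nothing close: black
```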
Now, we apply that function and generate the image.
colors = Apply[color, limitData, {2}];
Image[colors, ImageSize -> 2/step]
To map onto a sphere nicely, we'll discard the rectangular grid of points that we used above in favor of a collection of points that looks something like the following (although, we'll want higher resolution, of course):
step = Pi/12;
pts = Table[Cot[phi/2] Exp[I*theta],
{phi, step, Pi - step, step}, {theta, -Pi, Pi, step}];
ListPlot[{Re[#], Im[#]} & /@ Flatten[pts, 1],
AspectRatio -> Automatic, PlotRange -> All,
Epilog -> {Red, Circle[]}]
The expression $\cot(\phi/2) e^{i\theta}$ is the stereographic projection of a point expressed in spherical coordinates $(1,\phi,\theta)$ onto the plane. As a result, the corresponding points on the sphere are nicely distributed. Note, for example, that the number of points inside and outside of the unit circle are the same.
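That equal split is easy to check numerically. Here is a quick Python check (mine, not part of the answer) of the ring radii $\cot(\phi/2)$ for the step $\pi/12$ used above:

```python
# The rings have radii cot(phi/2) for phi = step, 2*step, ..., pi - step.
# Since cot(phi/2) > 1 exactly when phi < pi/2, the rings straddle the unit
# circle symmetrically: as many land outside as inside, and one lands on it.
import math

step = math.pi / 12
radii = [1 / math.tan(k * step / 2) for k in range(1, 12)]  # cot(phi/2)

outside = sum(r > 1 + 1e-9 for r in radii)  # phi < pi/2: northern hemisphere
inside = sum(r < 1 - 1e-9 for r in radii)   # phi > pi/2: southern hemisphere
```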
Graphics3D[{{Opacity[0.8], Sphere[]},
Point[Flatten[Table[{Cos[theta] Sin[phi], Sin[theta] Sin[phi], Cos[phi]},
{phi, step, Pi - step, step}, {theta, -Pi, Pi, step}], 1]]}]
Now, we increase the resolution and use the same limitInfo and color functions as before.
step = Pi/500;
limitData = limitInfo[Table[Cot[phi/2] Exp[I*theta],
{phi, step, Pi - step, step}, {theta, -Pi, Pi, step}]];
colors = Apply[color, limitData, {2}];
rect = Image[colors, ImageSize -> 4/step]
The image looks a bit different, but it's perfect for use as a spherical texture.
ParametricPlot3D[{Cos[theta] Sin[phi], Sin[theta] Sin[phi], Cos[phi]} ,
{theta, -Pi, Pi}, {phi, 0, Pi}, Mesh -> None, PlotPoints -> 100,
Boxed -> False, PlotStyle -> Texture[Show[rect]],
Lighting -> "Neutral", Axes -> False]
We can incorporate all of this into a Module.
newtonSphere[fIn_, var_, resolution_, bail_: 50, r_: 0.01] := Module[
{f, n, limitInfo, color, colors, roots, preColors, step, limitData, rect},
f = Function[var, fIn];
n = Function[var, Evaluate[Simplify[var - f[var]/f'[var]]]];
limitInfo = With[{bailLoc = bail, rLoc = r, fLoc = f, nLoc = n},
Compile[{{z0, _Complex}},
Module[{z, cnt},
cnt = 0; z = z0;
While[Abs[fLoc[z]] > rLoc && cnt < bailLoc,
z = nLoc[z];
cnt = cnt + 1
];
{z, cnt}],
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
]];
roots = z /. NSolve[f[z] == 0, z];
preColors = List @@@ Table[ColorData[61, k], {k, 1, Length[roots]}];
preColors = Append[preColors, {0.0, 0.0, 0.0}];
color = With[{bailLoc = bail, rootsLoc = roots, preColorsLoc = preColors},
Compile[{{z, _Complex}, {cnt, _Complex}},
Module[{arg, time, i},
arg = Arg[z];
time = Abs[cnt];
i = 1;
Scan[If[Abs[z - #] < 0.1, Return[i], i++] &, rootsLoc];
preColorsLoc[[i]]*(cnt/bailLoc)^(0.2)
]]
];
step = Pi/resolution;
limitData = limitInfo[
Table[Cot[phi/2] Exp[I*theta], {phi, step, Pi - step,
step}, {theta, -Pi, Pi, step}]];
colors = Apply[color, limitData, {2}];
rect = Image[colors, ImageSize -> 4/step];
ParametricPlot3D[{Cos[theta] Sin[phi], Sin[theta] Sin[phi],
Cos[phi]} ,
{theta, -Pi, Pi}, {phi, 0, Pi}, Mesh -> None, PlotPoints -> 100,
Boxed -> False, PlotStyle -> Texture[Show[rect]],
Lighting -> "Neutral", Axes -> False]
];
Now, if I had to guess, I'd say that the example image in the original post was generated by a small perturbation of $z^8-z^2$.
newtonSphere[(2 z/3)^8 - (2 z/3)^2 + 1/10, z, 500]
Here are a few more examples.
pic1 = newtonSphere[z^2 - 1, z, 401];
SeedRandom[1];
pic2 = newtonSphere[Sum[RandomInteger[{-3, 5}] z^k, {k, 0, 8}], z, 400];
pic3 = newtonSphere[z^10 - z^5 - 1, z, 400, 200];
pic4 = newtonSphere[z^5 - z - 0.99, z, 400];
GraphicsGrid[{
{pic1, pic2},
{pic3, pic4}
}]
In the top row, we see that the result for quadratic polynomials is typically rather boring, while that for a random degree 8 polynomial can be quite cool. On the bottom right, we see a black region. The color function is set up to default to black when none of the roots are detected. This can certainly happen; in fact, the Newton iteration function for this example has an attractive orbit of period 6, leading to the quadratic-like Julia set seen in the image. Sometimes black can occur simply because we didn't iterate enough, which is why I used the optional fourth argument for the image in the bottom left.
Now imagine those hanging from a fractal Christmas tree. (+1) – Jens Dec 24 '12 at 17:40
Here is my modest attempt, based on the formulae for stereographic projection in this Wikipedia entry (where the north pole corresponds to the point at infinity) and using a technique similar to the one in this answer:
newtonRaphson = Compile[{{n, _Integer}, {c, _Complex}},
Arg[FixedPoint[(# - (#^n - 1)/(n #^(n - 1))) &, c, 30]]]
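The trick here is that classifying the limit by Arg alone is enough for $z^n-1$, since the roots of unity are separated by their arguments. A hedged Python rendition of the same classifier (my own names, with the same fixed 30 iterations):

```python
# Iterate the Newton map for z^n - 1 a fixed number of times and return the
# argument of the result; for n = 3 this is approximately 0 or ±2*pi/3,
# depending on which cube root of unity the starting point converged to.
import cmath

def newton_arg(c, n=3, steps=30):
    for _ in range(steps):
        c = c - (c**n - 1) / (n * c**(n - 1))
    return cmath.phase(c)
```

A Which over these argument bands, as in the ColorFunction below, then paints each basin a solid color.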
tex = Image[DensityPlot[
newtonRaphson[3, Cot[ϕ/2] Exp[I θ]], {θ, -π, π}, {ϕ, 0, π},
AspectRatio -> Automatic,
ColorFunction -> (Which[# < .3, Red, # > .7, Yellow, True, Blue] &),
Frame -> False, ImagePadding -> None, PlotPoints -> 400,
PlotRange -> All, PlotRangePadding -> None],
ImageResolution -> 256];
(* yes, I know that I could have used SphericalPlot3D[]... *)
ParametricPlot3D[{Sin[ϕ] Cos[θ], Sin[ϕ] Sin[θ], Cos[ϕ]}, {θ, -π, π}, {ϕ, 0, π},
Axes -> None, Boxed -> False, Lighting -> "Neutral", Mesh -> None,
PlotStyle -> Texture[tex], TextureCoordinateFunction -> ({#4, #5} &)]
Thanks for your response. There are two problems. 1. How can one zoom in on a particular place on this sphere without lowering the quality, to observe the fractal behaviour of the method? 2. The "space size" of the output image? In fact, how can one save the output fractal image with a low "disk size" in EPS format without lowering the quality? For example, for $n=8$, its size is more than 2MB! – Fazlollah Soleymani Nov 22 '12 at 21:16
You'll have to play with PlotPoints and ImageResolution on your own, of course... – Guess who it is. Nov 22 '12 at 23:13
Thanks, the problem is that when we reduce PlotPoints or ImageResolution, the quality goes down dramatically. I am looking for a fast way to obtain high-quality pics with a small space size, just like the one given in the question. Rasterize@... is a good choice, but it disables the feature of rotating the 3D pic and also lowers the quality. – Fazlollah Soleymani Nov 23 '12 at 8:20
Well, I'd say this is one of those "no such thing as a free lunch" things. If you want to be able to zoom, you certainly need high resolution, which will demand more of your computer's resources... if you want small file sizes, you'll have to sacrifice quality somewhat. Scylla and Charybdis, you know... – Guess who it is. Nov 23 '12 at 9:49
img2 = ImageCrop[Image[Graphics[{Raster[cols]}, PlotRangePadding -> 0,
ImagePadding -> 0, ImageMargins -> 0]], {343, 343}];
SphericalPlot3D[1 , {u, 0, Pi}, {v, 0, 2 Pi}, Mesh -> None,
TextureCoordinateFunction -> ({#1, #2} &),
PlotStyle -> Directive[Specularity[White, 10], Texture[img2]],
Lighting -> "Neutral", Axes -> False, ImageSize -> 500]
where img2 is a cropped version of the 2D image in the OP's question.
Check Texture for more examples.
Thanks for your reply, but there are big white circles in the middle of each basin. They should not be there. Please rotate your image, and then you will see the incomplete fractal image. Can you solve this drawback? – Fazlollah Soleymani Nov 22 '12 at 20:59
img2 is the cropped version of your 2D image: img2=ImageCrop[ Image[Graphics[{Raster[cols]}, PlotRangePadding -> 0, ImagePadding -> 0, ImageMargins -> 0]], {343, 343}]. – kglr Nov 22 '12 at 21:50
Now that I think about it, one could have directly produced an image instead of going through the Raster[] route: Image[cols]. – Guess who it is. Nov 24 '12 at 14:12
I've decided to write a simplification+extension of Mark's routine as a separate answer. In particular, I wanted a routine that yields Riemann sphere fractals not only for Newton-Raphson, but also its higher-order generalizations (e.g. Halley's method).
I decided to use Kalantari's "basic iteration" family for the purpose. An $n$-th order member of the family looks like this:
$$x_{k+1}=x_k-f(x_k)\frac{\mathcal D_{n-1}(x_k)}{\mathcal D_n(x_k)}$$
where
$$\mathcal D_0(x_k)=1,\qquad\mathcal D_n(x_k)=\begin{vmatrix}f^\prime(x_k)&\tfrac{f^{\prime\prime}(x_k)}{2!}&\cdots&\tfrac{f^{(n-1)}(x_k)}{(n-1)!}&\tfrac{f^{(n)}(x_k)}{n!}\\f(x_k)&f^\prime(x_k)&\ddots&\vdots&\tfrac{f^{(n-1)}(x_k)}{(n-1)!}\\&f(x_k)&\ddots&\ddots&\vdots\\&&\ddots&\ddots&\tfrac{f^{\prime\prime}(x_k)}{2!}\\&&&f(x_k)&f^\prime(x_k)\end{vmatrix}$$
As noted in that paper, the basic family generalizes the Newton-Raphson iteration; $n=1$ corresponds to Newton-Raphson, while $n=2$ gives Halley's method. (Relatedly, see also Kalantari's work on polynomiography.)
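Before the Mathematica routine, it may help to sanity-check the first two members numerically. Here is a hedged Python sketch (my own function names, determinants written out by hand for $n\le 2$), using $f(x)=x^3-1$:

```python
# D_0 = 1, D_1 = f', and D_2 = f'^2 - f*f''/2 (the 2x2 Toeplitz determinant),
# so n = 1 reproduces Newton's step x - f/f' and n = 2 reproduces Halley's
# step x - 2*f*f' / (2*f'^2 - f*f'').
def basic_step(x, n):
    f, fp, fpp = x**3 - 1, 3 * x**2, 6 * x
    D = {0: 1.0, 1: fp, 2: fp * fp - f * fpp / 2}
    return x - f * D[n - 1] / D[n]

x = 2.0
newton = x - (x**3 - 1) / (3 * x**2)
halley = x - 2 * (x**3 - 1) * (3 * x**2) / (2 * (3 * x**2) ** 2 - (x**3 - 1) * (6 * x))
```

Both steps move the sample point toward the real root $1$, with the Halley step moving further, as expected of the higher-order method.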
Here's a routine for $\mathcal D_n(x)$:
iterdet[f_, x_, 0] := 1;
iterdet[f_, x_, n_Integer?Positive] := Det[ToeplitzMatrix[PadRight[{D[f, x], f}, n],
Table[SeriesCoefficient[Function[x, f]@\[FormalX], {\[FormalX], x, k}], {k, n}]]]