Given a real $n\times n$ matrix $A$ with non-negative entries, the question arises whether there are invertible positive diagonal matrices $D_1$ and $D_2$ such that $D_1AD_2$ is a doubly-stochastic matrix, and to what extent $D_1$ and $D_2$ are unique. Such theorems are known as $DAD$-theorems. They are of interest in telecommunications and statistics, [C], [F], [Kr]. A matrix $A$ is fully indecomposable if there do not exist permutation matrices $P$ and $Q$ such that $$PAQ = \left[ \begin{array}{cc} A_1 & 0 \\ B & A_2 \end{array} \right]$$ with $A_1$ and $A_2$ square. A $1 \times 1$ matrix is fully indecomposable if it is non-zero. For a non-negative square matrix $A$ there exist positive diagonal matrices $D_1$ and $D_2$ such that $D_1AD_2$ is doubly stochastic if and only if there exist permutation matrices $P$ and $Q$ such that $PAQ$ is a direct sum of fully indecomposable matrices [SK], [BPS].
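When such $D_1$ and $D_2$ exist, they can be computed in practice by alternately normalizing row and column sums, the iterative procedure associated with [SK] (Sinkhorn-Knopp). Below is a minimal sketch, assuming an all-positive (hence fully indecomposable) matrix; the function name, iteration count, and test matrix are illustrative choices, not part of the original article:

```python
import numpy as np

def sinkhorn(A, iters=5000):
    """Alternate row/column normalization (Sinkhorn-Knopp style iteration)."""
    A = np.asarray(A, dtype=float)
    d1 = np.ones(A.shape[0])
    d2 = np.ones(A.shape[1])
    for _ in range(iters):
        d1 /= (d1[:, None] * A * d2[None, :]).sum(axis=1)  # make row sums 1
        d2 /= (d1[:, None] * A * d2[None, :]).sum(axis=0)  # make column sums 1
    return d1, d2

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # all entries positive, so fully indecomposable
d1, d2 = sinkhorn(A)
S = np.diag(d1) @ A @ np.diag(d2)
print(S.sum(axis=1), S.sum(axis=0))  # both close to [1, 1]
```

For positive matrices the iteration converges linearly; for merely non-negative ones, convergence is exactly the full-indecomposability condition stated above.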
# Sum of the inverse of a geometric series? I'm trying to solve for this summation: $$\sum_{j=0}^{i} {\left(\frac 1 2\right)^j}$$ This looks a lot like a geometric series, but it appears to be inverted. Upon plugging the sum into Wolfram Alpha, I find the answer to be $$2-2^{-i}$$ but I don't understand how it gets there. Am I able to consider this a geometric series at all? It almost seems closer to the harmonic series. • Welcome to Math Stack Exchange. Note $\frac 1 {2^j}=\left(\frac1 2\right)^j$ – J. W. Tanner Mar 25 '19 at 3:36 • Is $i$ some finite number? Or are you trying to find the value of the series for any arbitrary $i$? – kkc Mar 25 '19 at 3:36 • The reciprocals of each term of a geometric series is also a geometric one – lab bhattacharjee Mar 25 '19 at 3:37 • i is finite but arbitrary. The sum does not approach infinity. – bpryan Mar 25 '19 at 3:39 • My point of noting $\frac 1 {2^j}=\left(\frac 1 2\right)^j$ was not that $\frac 1 {2^j}$ was incorrect but rather that this is a geometric series – J. W. Tanner Mar 25 '19 at 3:49 $$\sum_{j=0}^{i} \frac{1}{2^j}=\sum_{j=0}^{i}\left( \frac{1}{2}\right)^j=\frac{1-\left(\frac12\right)^{i+1}}{1-\frac12}=2\left(1-\left(\frac12\right)^{i+1}\right)=2-\left(\frac12\right)^{i}=2-2^{-i}$$ As $$\frac1{2^j}=\left(\frac12\right)^j$$, this is also a geometric sum with the common ratio of $$r=\frac12$$. So you apply the formula for geometric sums $$\sum_{j=0}^nr^j=\frac{1-r^{n+1}}{1-r}$$to obtain the answer you have written.
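Since $i$ is finite, the closed form $2 - 2^{-i}$ can also be checked directly against a term-by-term sum; a quick sketch:

```python
def partial_sum(i):
    """Compute sum_{j=0}^{i} (1/2)**j term by term."""
    return sum(0.5 ** j for j in range(i + 1))

# The term-by-term sum agrees with the closed form 2 - 2**(-i).
for i in range(20):
    assert abs(partial_sum(i) - (2 - 2.0 ** (-i))) < 1e-12

print(partial_sum(4))  # 1.9375, which equals 2 - 2**(-4)
```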
# Does the category of local rings with residue field $F$ have an initial object? Let $$F$$ be a field. Does the category $$C_F$$ of local rings with residue field isomorphic to $$F$$ have an initial object? This is, for instance, true if $$F=\mathbb{F}_{p}$$ for some prime $$p$$: If $$R$$ is a local ring with residue field $$\mathbb{F}_{p}$$, then any $$x\in\mathbb{Z}\setminus(p)$$ must map to something invertible under the morphism $$\mathbb{Z}\longrightarrow R$$. Hence that morphism factors as $$\mathbb{Z}\longrightarrow\mathbb{Z}_{(p)}\longrightarrow R$$; thus $$\mathbb{Z}_{(p)}$$ is the initial object. But what happens in the more general case? I guess it should be true at least if $$F$$ is of finite type over $$\mathbb{Z}$$, but I have no idea how to prove it. (EDIT - To avoid any confusion: I am talking about an initial object in the category of local rings $$R$$ with a fixed surjection $$R\longrightarrow F$$.) • That's a good question. It holds for $F=\mathbb Q$ as well – Maxime Ramzi May 15 '20 at 7:00 • @GeorgesElencwajg : I guess it depends on how you set up the category $C_F$ exactly : if it's just a full subcategory of rings, then you are right (composing $R\to F\to F$ gives two different morphisms $R\to F$ because $R\to F$ is surjective); but if you set it up as a subcategory of $CRing/F$ instead, it's not that clear. – Maxime Ramzi May 15 '20 at 7:27 • @Georges : I'm not saying $F$-algebras, on the other hand "the category of local rings with residue field $F$" could definitely mean "local rings $R$ with a fixed map $R\to F$ which exhibits $F$ as the residue field". So a subcategory of $CRing/F$, not $F/CRing$ – Maxime Ramzi May 15 '20 at 8:00 • @Georges: I meant it the way Maxime says it. – The Thin Whistler May 15 '20 at 8:10
Let $$\mathbb{F_4}=\{0,1,w,1+w\}$$ be the field of 4 elements. Suppose $$R$$ is the initial object in the category described in the question for the field $$\mathbb{F_4}$$. Then $$R$$ must contain some element $$x$$ which maps to $$w\in\mathbb{F_4}$$. Thus we have a map $$f\colon S\to R$$, where $$S=\mathbb{Z}[y]_M$$, sending $$y \mapsto x$$. Here $$M$$ is the maximal ideal of $$\mathbb{Z}[y]$$ containing $$2,1+y+y^2$$. The following composition must be the identity: $$R \to S \stackrel f \to R$$ Thus $$R=S/I$$ for some ideal $$I\subset M$$. Further we know $$I\neq 0$$ as $$S$$ cannot be the initial object: there are multiple distinct maps $$S\to S$$, such as the identity map and the map sending $$y\mapsto y+2$$. Under the composition $$S \stackrel f \to R\to S$$, we have $$y\mapsto p/q$$, for some $$p,q$$ integer polynomials in $$y$$. We know $$p/q$$ is not a rational number as $$p/q\mapsto w\in\mathbb{F_4}$$. Thus $$p/q$$ is a non-constant rational function in one variable, taking infinitely many values, which cannot all satisfy the same polynomial over the integers. On the other hand, as $$I\neq 0$$ there must be a polynomial over the integers satisfied by $$p/q$$. This gives us the desired contradiction. • This is so clever! How did you come up with this approach? – diracdeltafunk May 16 '20 at 2:26 • Thankyou. It felt like if there was an initial object it ought to be S, but at the same time it couldn't be. Playing those off against each other and using that S is a well behaved, concretely described ring led to the contradiction. – tkf May 16 '20 at 3:55 The category $$C_{F}$$ possesses a weak initial object $$I_{F}$$, i.e. an object that is unique up to not necessarily unique isomorphism. Let $$F$$ be a field and $$L$$ be its minimal subfield (the smallest subfield contained in $$F$$). Then either $$L=\mathbb{F}_{p}$$ for some prime $$p$$ or $$L=\mathbb{Q}$$.
Assume first that $$F$$ is of finite type over $$L$$. Let $$n\in\mathbb{N}$$ be the smallest natural number such that $$F=L[x_{1},...,x_{n}]/\mathfrak{m}$$ for some maximal ideal $$\mathfrak{m}\subseteq L[x_{1},...,x_{n}]$$. Let $$\overline{x}_{i}$$ be the image of $$x_{i}\in L[x_{1},...,x_{n}]$$ in $$F$$. Let $$\zeta:R\longrightarrow F$$ be a surjection where $$R$$ is a local ring. Since every $$\overline{x}_{i}$$ has a (not necessarily unique) preimage in $$R$$ under $$\zeta$$, there is a (not necessarily unique) morphism $$\kappa:\mathbb{Z}[x_{1},...,x_{n}]\longrightarrow R$$ that fits into a commutative diagram $$\require{AMScd}$$ $$\begin{CD} \mathbb{Z}[x_{1},...,x_{n}]@>{\kappa}>>R\\ @V{\pi}VV @VV{\zeta}V\\ L[x_{1},...,x_{n}] @>>{\chi}>F \end{CD}$$ Let $$\mathfrak{i}:=\pi^{-1}(\chi^{-1}(0))=\pi^{-1}(\mathfrak{m})$$. The ideal $$\mathfrak{i}$$ is always prime; it is maximal if and only if $$L=\mathbb{F}_{p}$$ for some prime $$p$$. Since $$R$$ is local, every element of $$\mathbb{Z}[x_{1},...,x_{n}]\setminus\mathfrak{i}$$ is mapped by $$\kappa$$ onto something invertible in $$R$$. Hence $$\kappa$$ factors as $$\begin{CD} \mathbb{Z}[x_{1},...,x_{n}] @>>>\mathbb{Z}[x_{1},...,x_{n}]_{(\mathfrak{i})} @>{\lambda}>> R \end{CD}$$ Thus $$I_{F}:=\mathbb{Z}[x_{1},...,x_{n}]_{(\mathfrak{i})}$$ is a weak initial object in the category $$C_{F}$$. Note that the assignment $$\kappa\longleftrightarrow\lambda$$ is unique in both directions: to each choice of $$\kappa$$ there is a unique $$\lambda$$ and vice-versa. Assume next that $$F$$ is of infinite type over $$L$$. Then $$F$$ is the direct limit of its subfields $$F'$$ of finite type over $$L$$. Since the construction of $$I_{-}$$ is functorial and compatible with direct limits, $$I_{F}$$ can be defined as $$I_{F}:=\lim_{F'\text{ of fin. t.}/L}I_{F'}$$. The initial object is strong, i.e. unique up to unique isomorphism, if and only if $$F=L$$.
Namely, if $$F=L$$, then $$n=0$$ and the unique morphism $$\kappa:\mathbb{Z}\longrightarrow R$$ induces a unique morphism $$\lambda:\mathbb{Z}_{(\mathfrak{i})}\longrightarrow R$$. Else, if $$F\neq L$$, then $$n\geq 1$$ and for any $$i\in\{1,...,n\}$$ and any $$s\in\mathfrak{i}\setminus\{0\}$$, the map $$\xi_{i,s}:x_{i}\mapsto x_{i}+s$$ yields a nontrivial automorphism $$I_{F}\longrightarrow I_{F}$$ that commutes with the surjection $$I_{F}\longrightarrow F$$. My guess is that the $$\xi_{i,s}$$ actually generate the whole group $$\operatorname{Aut}(I_{F})$$, but I have yet to figure out a proof for this...
# Find an equivalent sequence as $n\to +\infty$ of $u_1>0, u_{n+1} = \frac{u_n}{n} + \frac{1}{n^2}$ Let $$u_1>0$$ be a real number. Let us consider the sequence $$(u_n)_{n\geq 1}$$ such that: $$\forall n \geq 1, u_{n+1} = \frac{u_n}{n} + \frac{1}{n^2}\quad (\star)$$ Find an equivalent of $$u_n$$ as $$n\to +\infty$$. So I found a way to show that $$u_n \sim \frac{1}{n^2}$$, but I'm quite unhappy with this method because I feel like I found it by chance without understanding anything (I did a lot of trials and found this). My method: I showed by induction that $$u_n \leq u_1+1$$. Thus, $$u_n\to 0$$ considering $$(\star)$$. Then, $$nu_{n+1} = u_n + 1/n \to 0$$; since $$n+1 \sim n$$, it follows that $$nu_n \to 0$$. To end with, I have $$n^2u_{n+1} = nu_n + 1 \to 1$$; since $$(n+1)^2 \sim n^2$$, this gives $$n^2 u_n \to 1$$, i.e. $$u_n \sim \frac{1}{n^2}$$. How would you solve such a problem? Is there any more intuitive method that one may have done? • It is right actually; I think the elementary proof you have found is rather good. The intuition is that $u_n$ carries a weight of $n$ and $u_{n+1}$ a weight of $n^2$, which you can see by multiplying the whole equation by $n^2$. That weighting leads you to the equivalent. – EDX Jun 4 '20 at 15:39 1. There is a heuristic argument which is useful for guessing the behavior of $$u_n$$: Rewrite the recurrence relation as $$u_{n+1} - u_n = \frac{u_n}{n} - u_n + \frac{1}{n^2}.$$ Its continuum analogue is the following differential equation: $$y' = \frac{y}{x} - y + \frac{1}{x^2}.$$ Using the standard method, this equation can be solved as: $$y(x) = x e^{-x} \int \frac{e^x}{x^3} \, \mathrm{d}x.$$ L'Hospital's Rule then tells us that $$y(x) \sim x^{-2}$$ as $$x \to \infty$$. From this observation, we may as well expect that $$u_n \sim n^{-2}$$.
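The prediction $u_n \sim n^{-2}$ is easy to test by iterating the recurrence directly; a small sketch, with the arbitrary choice $u_1 = 1$ (any positive start behaves the same):

```python
def u_sequence(u1, N):
    """Return the list [_, u_1, ..., u_N] for u_{n+1} = u_n/n + 1/n**2."""
    u = [None, float(u1)]          # u[0] unused, so that u[n] is u_n
    for n in range(1, N):
        u.append(u[n] / n + 1.0 / n ** 2)
    return u

u = u_sequence(1.0, 10_000)
for n in (10, 100, 1000, 9999):
    print(n, n ** 2 * u[n])        # tends to 1 as n grows, so u_n ~ 1/n^2
```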
2. The above ansatz suggests that, in the recurrence relation for $$u_{n+1}$$, $$\frac{1}{n^2}$$ is the dominating term and $$\frac{u_n}{n}$$ is much smaller as $$n\to\infty$$. In particular, nesting this relation will produce an expansion with ever decreasing terms. This idea can be easily tested as follows: Let $$r_n = (n-1)u_n$$. Then for $$n \geq 2$$, $$r_n = (n-1)u_{n} = \frac{1}{n-1} + \frac{r_{n-1}}{n-1}.$$ From this, we get \begin{align*} r_n &= \frac{1}{n-1} + \frac{r_{n-1}}{n-1} \\ &= \frac{1}{n-1} + \frac{1}{(n-1)(n-2)} + \frac{r_{n-2}}{(n-1)(n-2)} \\ &= \frac{1}{n-1} + \frac{1}{(n-1)(n-2)} + \frac{1}{(n-1)(n-2)(n-3)} + \frac{r_{n-3}}{(n-1)(n-2)(n-3)} \\ &\qquad\vdots\\ &= \sum_{k=1}^{n-2} \frac{1}{(n-1)\cdots(n-k)} + \frac{r_2}{(n-1)!} \end{align*} Using this, it is not hard to conclude that $$(n-1)^2 u_n \to 1$$ as $$n\to\infty$$, and in fact, we can extract an asymptotic expansion of $$u_n$$ up to any prescribed order $$\mathcal{O}(n^{-M})$$. This is similar to Sangchul Lee's approach. If we multiply $$n!$$ both sides of the recurrence, we obtain $$n!u_{n+1}=(n-1)!u_n+\frac{n!}{n^2}.$$ Applying the above repeatedly, $$u_{n+1}=\frac1{n!} \sum_{k=1}^n \frac{k!}{k^2} + \frac{u_1}{n!}.$$ • That's a really good method! Thanks – MiKiDe Jun 7 '20 at 9:17
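Both the asymptotics and the factorial closed form at the end can be checked exactly with rational arithmetic; a sketch (the starting value $u_1 = 3$ is arbitrary):

```python
from fractions import Fraction
from math import factorial

def u_iter(u1, N):
    """u_N obtained by iterating u_{n+1} = u_n/n + 1/n**2 exactly."""
    u = Fraction(u1)
    for n in range(1, N):
        u = u / n + Fraction(1, n * n)
    return u

def u_closed(u1, N):
    """Closed form u_{n+1} = (1/n!) * (sum_{k=1}^{n} k!/k**2 + u_1), with n = N - 1."""
    n = N - 1
    s = sum(Fraction(factorial(k), k * k) for k in range(1, n + 1))
    return (s + Fraction(u1)) / factorial(n)

# The closed form matches the iteration exactly for every N.
for N in range(2, 20):
    assert u_iter(3, N) == u_closed(3, N)

print(float(u_iter(3, 100)) * 99 ** 2)   # (n-1)^2 u_n is already close to 1 at n = 100
```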
# Given $3$ types of coins, how many ways can one select $20$ coins so that no coin is selected more than $8$ times. So I was given this question. Given $3$ types of coins, how many ways can one select $20$ coins so that no coin is selected more than $8$ times. First I make $x_1 + x_2 + x_3 = 20$ with $0 \leq x_i \leq 8$. Then we use the inclusion-exclusion principle. Let $A_i$ be the set of non-negative integer solutions to $x_1 + x_2 + x_3 = 20$ with $x_i \geq 9$. Then use the inclusion-exclusion formula to find $N(A_1 \bigcap A_2 \bigcap A_3)$. What I don't get is how to apply the inclusion-exclusion formula. I know the process to get to it but not how to apply it. • Possible duplicate of Inclusion-exclusion principle: Number of integer solutions to equations – Cloverr Feb 2 '16 at 18:27 • You can also find the coefficient of $x^{20}$ in $(1+x+\ldots + x^8)^3$. Then you get the right answer as well. – Henno Brandsma Feb 2 '16 at 18:27 • @Nilanjan thats a different question – Zero Feb 2 '16 at 18:31 • Do you have to use inclusion-exclusion? The problem is small enough in scale to permit more straightforward approaches. Heck, even plain enumeration will do. – Brian Tung Feb 2 '16 at 18:56 • I've added inclusion-exclusion to my answer. Interestingly, the approach by generating function arrives at the same expression! – Brian Tung Feb 2 '16 at 19:03 Instead of trying to apply formulas take a look at the general theory. Suppose $a_j\le x_j\le b_j$ for $j=1,2,3$. $$(x^{a_1}+x^{a_1+1}+\cdots+x^{b_1})(x^{a_2}+x^{a_2+1}+\cdots+x^{b_2})(x^{a_3}+x^{a_3+1}+\cdots+x^{b_3})$$ Every solution of $x_1+x_2+x_3=n$ with the constraints $a_j\le x_j\le b_j$ contributes $1$ to the coefficient of $x^n$ in the above expression. So the number of solutions is the coefficient of $x^n$.
Substituting accordingly, we see that we require the coefficient of $x^{20}$ in $$(1+x+\cdots+x^8)^3={(1-x^9)^3\over (1-x)^3}$$ or equivalently in $$(1-3x^9+3x^{18})\sum_{k=0}^{20}\binom{2+k}{k}x^k$$ (the $x^{27}$ term of $(1-x^9)^3$ cannot contribute to the coefficient of $x^{20}$), which equals $$\binom{22}{20}-3\binom{13}{11}+3\binom{4}{2}$$ • The equation $x_1 + x_2 + x_3 = 20$ has only $\binom{22}{20}$ solutions in the non-negative integers. Check your calculations. – N. F. Taussig Feb 2 '16 at 18:42 • I replaced $3$ with $20$ erroneously. – Jack's wasted life Feb 2 '16 at 18:45 There are $15$ different ways to select $20$ coins of three different types, with no type having more than $8$ specimens. There are a number of different methods to obtain this number. One is to condition on the number of coins of Type $1$. If there are $4$ coins of Type $1$, then there is only one way to obtain $16$ coins of the other two types; if there are $5$ coins of Type $1$, then there are two ways to obtain $15$ coins of the other two types; and so on. $1+2+3+4+5 = 15$. Another way is to consider that the solutions are the intersection of the lattice cube $\{(x, y, z) \in \mathbb{N}_{\geq 0}^3 \mid 0 \leq x, y, z \leq 8\}$ with the plane $x+y+z = 20$. If you have any facility with this, you will notice that the intersection is a triangular lattice of side $5$, so the number of solutions is the $5$th triangular number, $15$. ETA: Finally, we can use inclusion-exclusion, although I find it less straightforward. The number of non-negative integer solutions to $x+y+z = 20$ is, by the usual stars-and-bars approach, $\binom{22}{2} = 231$. However, this counts solutions where at least one of $x, y, z > 8$. The number of solutions where $x > 8$ is, again by stars-and-bars, $\binom{13}{2} = 78$, and there are three coin types, so we must subtract $3 \times 78 = 234$ to obtain $-3$.
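The coefficient extraction can be mirrored by multiplying out $(1+x+\cdots+x^8)^3$ with integer polynomial arithmetic; a sketch:

```python
def poly_mul(p, q):
    """Multiply polynomials stored as coefficient lists (index = exponent)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

block = [1] * 9                                  # 1 + x + ... + x^8
cube = poly_mul(poly_mul(block, block), block)   # (1 + x + ... + x^8)^3

print(cube[20])   # 15, agreeing with C(22,20) - 3*C(13,11) + 3*C(4,2)
```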
Again, however, this double-counts three cases where at least two of $x, y, z > 8$. There are $\binom{4}{2} = 6$ ways in which both $x, y > 8$, but there are three different pairs of coin types, so we must add back $3 \times 6 = 18$ to get $15$. There are no ways to have $x, y, z > 8$ and still add up to $20$, so $15$ is the final result. Note, interestingly, that this is the same expression that Jack's wasted life arrived at, although that method was by way of generating functions. • One typographical error: In the last sentence of the first paragraph of the inclusion-exclusion argument, you meant $x > 8$ rather than $x > 0$. – N. F. Taussig Feb 2 '16 at 20:14 • @N.F.Taussig: Thanks for that. – Brian Tung Feb 3 '16 at 18:58 Let $8-x_i=:y_i$ $(1\leq i\leq3)$. Then we want all $y_i\geq0$ and $y_1+y_2+y_3=4$. There are $${4+3-1\choose 3-1}={6\choose 2}=15$$ ways to choose admissible $y_i$.
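Plain enumeration, inclusion-exclusion, and the complement substitution $y_i = 8 - x_i$ are all small enough to cross-check directly; a sketch:

```python
from itertools import product
from math import comb

# Direct enumeration of solutions of x1 + x2 + x3 = 20 with 0 <= xi <= 8.
brute = sum(1 for xs in product(range(9), repeat=3) if sum(xs) == 20)

# Inclusion-exclusion: all solutions, minus those with some xi >= 9, plus overlaps.
incl_excl = comb(22, 2) - 3 * comb(13, 2) + 3 * comb(4, 2)

# Complement yi = 8 - xi: count solutions of y1 + y2 + y3 = 4 by stars and bars.
complement = comb(6, 2)

print(brute, incl_excl, complement)  # all three give 15
```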
# Different parabola equation when finding area [Solved!]

### My question

In chapter 3, compare example 1 to example 3. Why isn't (3 - x^2) the correct parabolic equation for example 3?

### Relevant page

3. Areas Under Curves

### What I've done so far

When I went to solve example 3, I used the equation (3 - x^2) for the parabola. I found out that was not the correct parabolic equation upon looking at your solution. I am not sure why my equation cannot be used even though the vertex is not at (0,3), i.e. not lined up with the origin? The height is still three so I thought it would be okay.

## Re: Different parabola equation when finding area

@phinah: Your parabola will certainly have the correct height, but let's consider its width. Where does your parabola intersect with the x-axis? What width at the x-axis does this give you?

## Re: Different parabola equation when finding area

I realized after I inquired to you that this upside-down parabola is a shift one unit to the right on the x-axis. So I came up with -(x-1)^2 + 3.
But the parabola bottom would start at (0,2). Would that be an issue?

NOTE: The integral would be (-x^2 + 2x + 2) dx for a value of 16/3 square units.

## Re: Different parabola equation when finding area

@phinah: I edited your answer to include math formatting, which you are encouraged to do.

Once again, the width at the x-axis is a problem (both with your original attempt and with this revision.)

a. Where does your parabola intersect with the x-axis when we use y=3-x^2?

b. What width at the x-axis does this give you?

You may find some of the background in this article useful: How to find the equation of a quadratic function from its graph

## Re: Different parabola equation when finding area

3 - x^2 = 0

x = +/- sqrt 3

So my length is 3.46 when it should only be 2.

## Re: Different parabola equation when finding area
@Phinah: Yes, correct, so it's not possible to use y=3-x^2 for your function. But it is possible to use one that passes through (0, 3). Can you find it?

BTW, to achieve "plus or minus" in the math entry system, you enter +- So your answer would look like: x = +- sqrt(3). You can see all the syntax for mathematical symbols here: ASCIIMath syntax.

## Re: Different parabola equation when finding area

I cannot.

## Re: Different parabola equation when finding area

Well, since we know the width of the arch must be 2 m, then we know it must go through (-1, 0) and (1, 0). So using the general form of a quadratic, y = ax^2 + bx + c, you can substitute the 3 known points in and thus find the values of a, b and c. Have a go!

## Re: Different parabola equation when finding area

@murray: phinah hasn't responded. Would you like me to answer this?

## Re: Different parabola equation when finding area

@stephenB: Sure, please go ahead!

## Re: Different parabola equation when finding area

Okay, so we have the general formula y = ax^2 + bx + c, and the 3 points it needs to go through, (-1, 0), (1, 0) and (0, 3).
Plugging them in:

0 = a(1)^2 + b(1) + c = a + b + c ... [1]

0 = a(-1)^2 + b(-1) + c = a - b + c ... [2]

3 = a(0)^2 + b(0) + c = c ... [3]

Adding [1] and [2]: 0 = 2a + 2c, which means a + c = 0 ... [4]

From [3] we know c=3, so from [4], a=-3, and then plugging those into [1] gives b=0.

That is, the parabola's equation is: y = -3x^2 + 3

## Re: Different parabola equation when finding area

Thanks, stephenB. I think if phinah tries the equation you found, he will find the area of the parabolic glass pane will be the same as in my solution.

## Re: Different parabola equation when finding area

Phinah DID respond with the correct solution, Murray, BEFORE Stephen did. I do not know why it did not come through.

## Re: Different parabola equation when finding area

Hi Phinah. Sorry - it must have been a temporary database glitch or something.

## Re: Different parabola equation when finding area

Okay Murray. Thank you.
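The elimination above can be cross-checked by solving the same 3x3 linear system numerically; a sketch using numpy (an illustrative choice of tool, not part of the original thread):

```python
import numpy as np

# Rows are [x^2, x, 1] evaluated at the three known points (-1,0), (1,0), (0,3).
pts = [(-1, 0), (1, 0), (0, 3)]
M = np.array([[x ** 2, x, 1] for x, _ in pts], dtype=float)
rhs = np.array([v for _, v in pts], dtype=float)

a, b, c = np.linalg.solve(M, rhs)
print(a, b, c)   # approximately (-3, 0, 3), i.e. y = -3x^2 + 3
```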
# Partial sum of the harmonic series between two consecutive Fibonacci numbers I was playing around with some calculations and I noticed that the partial sum of the harmonic series $$s_n=\sum_{k=F_n}^{F_{n+1}}\frac 1 k,$$ where $F_n$ and $F_{n+1}$ are two consecutive Fibonacci numbers, has some interesting properties. It is close to $\frac 1 2$ for small values of $n$ and it seems to converge to a value less than $0.5$ for large $n$. This is what I've got so far: $$\lim_{n\to\infty} s_n\approx 0.481212$$ I googled a bit to see if there are theorems or resources for this, and found nothing. I suspect that the sequence might converge to a smaller number and that I may have reached some computational limitations which led to the conclusion that the limit is close to $\frac 1 2$. So my questions are: 1. Can we show that the sequence converges to a non-zero value? 2. In case the first answer is yes, can the limit be expressed in a closed form? In terms of the harmonic numbers $H_n$, your sequence is $$s_n = H_{F_{n+1}} - H_{F_n-1}$$ As $n \to \infty$ it's known that $H_n = \log n + \gamma + o(1)$, so \begin{align} s_n &= \log F_{n+1} + \gamma + o(1) - \log(F_n-1) - \gamma - o(1) \\ &= \log F_{n+1} - \log(F_n-1) + o(1). \end{align} Now $F_m \sim \varphi^m/\sqrt{5}$, where $\varphi$ is the golden ratio, so using the fact that $a \sim b \implies \log a = \log b + o(1)$ we have \begin{align} s_n &= \log(\varphi^{n+1}/\sqrt{5}) - \log(\varphi^{n}/\sqrt{5}) + o(1) \\ &= \log \varphi + o(1). \end{align} In other words, $$\lim_{n \to \infty} \sum_{k=F_n}^{F_{n+1}} \frac{1}{k} = \log \varphi.$$
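The claimed limit can be checked numerically before proving anything; a short sketch comparing $s_{30}$ with $\log\varphi$:

```python
from math import log, sqrt

phi = (1 + sqrt(5)) / 2

def fib_pair(n):
    """Return (F_n, F_{n+1}) with F_1 = F_2 = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a, b

F_n, F_n1 = fib_pair(30)
s = sum(1.0 / k for k in range(F_n, F_n1 + 1))   # s_30 = H_{F_31} - H_{F_30 - 1}
print(s, log(phi))                               # both close to 0.48121...
```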
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9796676472509722, "lm_q1q2_score": 0.8511614737636568, "lm_q2_score": 0.8688267660487572, "openwebmath_perplexity": 225.12238835619792, "openwebmath_score": 0.9867463707923889, "tags": null, "url": "https://math.stackexchange.com/questions/1991212/partial-sum-of-the-harmonic-series-between-two-consecutive-fibonacci-numbers" }
• It might be better to use $s_n\approx ...$ instead of $s_n=...$ – polfosol Oct 30 '16 at 8:14
• @polfosol, no I disagree. Everything in my answer is rigorous following the definitions of $\sim$ and little-o notation. – Antonio Vargas Oct 30 '16 at 8:14
• @polfosol, see, for example, here for the definitions. – Antonio Vargas Oct 30 '16 at 8:15
• I didn't notice that. Fair enough – polfosol Oct 30 '16 at 8:15
• I will add a comment that ought to have been done: thanks for your very precise and nice answer. – Jean Marie Oct 30 '16 at 8:35

The Fibonacci numbers increase as $\phi^n$ (where $\phi$ is the golden mean $\frac{1+\sqrt{5}}{2}$), and the harmonic numbers increase as $\log n$ (i.e., the natural log). Therefore, the difference between the harmonic numbers at successive Fibonacci numbers approaches $\log\phi \approx 0.481211825...$

To expand a bit, the Fibonacci numbers can be expressed as $\frac{\phi^n - (1-\phi)^n}{\sqrt{5}}$. (Try it! The fact that any solution of the recurrence $f(x+2) - f(x+1) - f(x) = 0$ is a combination of powers of $\phi$ and $1-\phi$ follows from these being the roots of $x^2 - x - 1 = 0$, and the coefficients come from $f(1) = f(2) = 1$.) The second term vanishes as $n$ grows, since $|1-\phi| < 1$, so large Fibonacci numbers can be approximated quite well as $\frac{\phi^n}{\sqrt{5}}$.

Since one definition of the natural logarithm is $\ln x = \int_1^x t^{-1}\,dt$, the harmonic numbers can be approximated by the natural logarithm; in fact the difference $H_n - \ln n$ approaches a constant (called $\gamma$, about $0.577$). If you're not familiar with integrals, the fact that the harmonic numbers increase as a logarithm is suggested by Oresme's proof that the harmonic series diverges...
$$1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \frac{1}{9} + \cdots > 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{16} + \cdots$$

...and it just so happens that the relevant logarithm is the natural logarithm. So if you accept that for very large $n$ the harmonic numbers approach $\log n$, and that the Fibonacci numbers approach $\frac{\phi^n}{\sqrt{5}}$, then for two successive Fibonacci numbers you get $$\log\left(\frac{\phi^{n+1}}{\sqrt{5}}\right) - \log\left(\frac{\phi^n}{\sqrt{5}}\right) = \log\left(\frac{\phi^{n+1}}{\phi^n}\right) = \log\phi.$$ (Here $\log x - \log y = \log \frac{x}{y}$ is the logarithmic counterpart of $\frac{e^x}{e^y} = e^{x-y}$.)

• I will mark this as accepted if you add some more details ;) – polfosol Oct 30 '16 at 8:10
• ...har-r-r-rumph. – user361424 Oct 30 '16 at 8:36
• @user361424 very nice answer, a compliment that ought to have been done by the proposer before asking for "more details"! – Jean Marie Oct 30 '16 at 8:42
• @JeanMarie It seems I am stuck with this – polfosol Oct 30 '16 at 8:44
• I think this might be the inverse fastest-gun-in-the-west problem... (this answer was originally just the first paragraph, and without the parentheticals explaining the golden mean and clarifying the natural log). – user361424 Oct 30 '16 at 8:45
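Both approximations used above can be spot-checked in a few lines of Python (a sketch; the function name `fib` is mine):

```python
import math

phi = (1 + math.sqrt(5)) / 2

def fib(n):
    # F_0 = 0, F_1 = 1, F_2 = 1, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Binet: F_n = (phi**n - (1 - phi)**n) / sqrt(5); since |1 - phi| < 1,
# the second term dies out and F_n is the nearest integer to phi**n / sqrt(5).
for n in (10, 20, 30):
    print(n, fib(n), phi**n / math.sqrt(5))
```

Each printed approximation lands within a tiny fraction of the exact Fibonacci number.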
# Matrix Multiplication
{ "domain": "auit.pw", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9879462219033657, "lm_q1q2_score": 0.8511534225450522, "lm_q2_score": 0.8615382129861583, "openwebmath_perplexity": 562.5575512182271, "openwebmath_score": 0.7270029187202454, "tags": null, "url": "http://vuut.auit.pw/matrix-multiplication.html" }
I have therefore written a matrix-vector multiplication example that needs 13 seconds to run (5 seconds with CUDA matrix multiplication using CUBLAS and Thrust).

Important: we can only multiply matrices if the number of columns in the first matrix is the same as the number of rows in the second matrix. (In NumPy, if both `a` and `b` are 1-D arrays, `dot` gives the inner product of the vectors, without complex conjugation.)

In mathematics, matrix multiplication (the matrix product) is a binary operation that produces a matrix from two matrices with entries in a field. In this article, we will see how to perform matrix multiplication in Python. The manual method of multiplication involves a large number of calculations, especially for higher orders of matrices, whereas a program in C can carry out the operations with short, simple and understandable code. In other words, to multiply an $m\times n$ matrix by an $n\times p$ matrix, the $n$s must be the same, and the result is an $m\times p$ matrix. (In my last Thank You post, I suggested that matrix multiplication is not Excel's forte.)

Matrix multiplication is a row-by-column operation: each entry of the product is the dot product of a row of the first matrix with a column of the second. Although it may look confusing at first, the process of matrix-vector multiplication is actually quite simple, and the matrix product is designed precisely to represent the composition of the linear maps that the matrices represent. To multiply one matrix with another you need to do a dot product of rows and columns. (Here, $\operatorname{diag}$ is an operator that creates a column vector
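The row-by-column rule can be sketched in plain Python (a minimal illustration under the stated dimension rule, not a library implementation; the name `matmul` is mine):

```python
def matmul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B (lists of rows)."""
    m, n = len(A), len(A[0])
    if len(B) != n:
        raise ValueError("columns of A must equal rows of B")
    p = len(B[0])
    # C[i][j] is the dot product of row i of A with column j of B.
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]           # 3 x 2
print(matmul(A, B))      # [[58, 64], [139, 154]], a 2 x 2 result
```

Note how the $2\times 3$ by $3\times 2$ product yields a $2\times 2$ matrix, exactly as the dimension rule predicts.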
out of a matrix's main diagonal.) The first matrix must have the same number of columns as the second matrix has rows. That's all about multiplying two matrices in Java. To multiply matrices, you multiply the elements in each row of the first matrix by the corresponding elements in each column of the second. One question: why is the column of the first (left-hand) matrix colored red, along with the row of the second (right-hand) matrix?

Your text probably gave you a complex formula for the process, and that formula probably didn't make any sense to you. Good question! The main reason why matrix multiplication is defined in a somewhat tricky way is to make matrices represent linear transformations in a natural way. Matrices can also be multiplied by scalar constants, in a similar manner to multiplying any variable by a scalar constant. Note that the order matters: multiplying trans1 by trans2 is not the same as multiplying trans2 by trans1.

When working with matrices, we can perform a number of matrix operations including matrix multiplication; as an example, you'll be able to solve a series of simultaneous linear equations using Mathcad. This example contains a high-performance implementation of the fundamental matrix multiplication operation and demonstrates optimizations that can be described in Open Computing Language (OpenCL) to achieve significantly improved performance.

Yes, it will give you a $2\times 1$ matrix! When you consider the orders of the matrices involved in a multiplication, you look at the dimensions at the extremes to "see" the order of the result. The expressions to the right of the equals sign show how the new $x$, $y$ and $z$ values are calculated after the vector has been transformed. By Rob Hochberg, Shodor, Durham, North
Carolina. This module teaches matrix multiplication in the context of enumerating paths in a graph. So we have to be very careful about multiplying matrices. (See also: OpenGL 101: Matrices - projection, view, model, posted on May 22, 2013 by Paul; and Optimization Techniques for Small Matrix Multiplication, Charles-Eric Drevet, Ancien élève, École polytechnique, Palaiseau, France.)

Scalar multiplication: the product of a scalar and a matrix. There are two categories that matrix multiplication usually falls under. The first one is called scalar multiplication, also known as the "easy type", where you simply multiply a number into each and every entry of a given matrix. Before you can even attempt to perform matrix multiplication proper, you must be sure that the last dimension of the first matrix is the same as the first dimension of the second matrix — so this matrix right over here, with two rows and three columns, can only multiply a matrix with three rows.

The initial attempt to evaluate $f(A)$ would be to replace every $x$ with an $A$ to get $f(A) = A^2 - 4A + 3$. For this I tried to follow xapp1170, which includes a tutorial for the ZC702 board using PlanAhead. (I tried MulT() with $w$ as 0 — the FBX SDK forced $w$ to 1 and then multiplied — then I gave up and just manually zeroed out the matrix's translation.) We've seen so far some divide-and-conquer algorithms, like merge sort and Karatsuba's algorithm.

Here is a list of all of the skills that cover multiplication! These skills are organized by grade, and you can
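The "easy type" above amounts to one line of Python (a sketch; the helper name is mine):

```python
def scalar_mul(c, A):
    # Multiply every entry of the matrix A (given as a list of rows) by c.
    return [[c * entry for entry in row] for row in A]

print(scalar_mul(3, [[1, 2], [3, 4]]))  # [[3, 6], [9, 12]]
```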
move your mouse over any skill name to preview the skill. (Chandler Burfield, APSP with Matrix Multiplication, March 15, 2013.) You must know which of the two matrices will be to the right of your multiplication and which one will be to the left; in other words, we have to know which of the two products we are asked to perform. Multiplying matrices is a little more complex than the operations you've seen so far. Let's see it with an example where you are trying to multiply a $3\times 3$ matrix by a $3\times 2$ matrix.

First let's make some data in R:

```r
# Make some data
a = c(1, 2, 3)
b = c(2, 4, 6)
c = cbind(a, b)
x = c(2, 2, 2)
```

If we look at the output (`c` and `x`), we can see that `c` is a 3x2…. If you know how to multiply two matrices together, you're well on your way to "dividing" one matrix by another. The algorithm follows directly from the definition of matrix multiplication. This scalar-multiplication calculator can help you calculate the product of a scalar and a matrix no matter its type (having from 1 to 4 columns and/or rows).

In mathematics, a matrix (plural: matrices) is a rectangle of numbers, arranged in rows and columns; the first matrix in a product must have the same number of columns as the second matrix has rows. There is nothing fundamentally different between the matrix multiplies that we need to compute at this level relative to our original problem. 2 Linear Substitutions and Matrix Multiplication Or
we can use equations (1) and (4) to go by way of $x$: $u = Ax = A(Ep) = (AE)p$. We therefore conclude, without doing any real work, that $AE = DC$. The way that we humans have defined matrix multiplication, it only works when the shapes of the two matrices are compatible. (Use the SVD to ensure this property.) Compare addition: a matrix with 3 rows and 5 columns can be added to another matrix with 3 rows and 5 columns.

In general, a matrix is just a rectangular array or table of numbers, and multiplying matrices by hand is one of the tedious things that we have done in school; an interactive matrix multiplication calculator, or a visualization of the matrix multiplication algorithm, can help for educational purposes. (See also "The Mailman algorithm: a note on matrix-vector multiplication", Edo Liberty and Steven W. Zucker, Computer Science, Yale University, New Haven, CT.) Understanding the efficiency of GPU algorithms for matrix-matrix multiplication matters because such applications must run efficiently if GPUs are to become a mainstream platform.

Vectors are commonly used in matrix multiplication to find the new point resulting from an applied transformation; in a spreadsheet, an array function returns the product of two matrices entered in a worksheet. What does matrix multiplication mean? Here's a common intuition: matrix multiplication scales, rotates, and skews a geometric plane. As demonstrated above, in general $AB \neq BA$ — which is easy to check with matrix arithmetic under NumPy and Python.
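The non-commutativity claim ($AB \neq BA$ in general) is easy to verify with two small matrices; here is a self-contained sketch using a naive triple-loop product (the helper name and example matrices are mine):

```python
def matmul(A, B):
    # Naive row-by-column product of two square matrices (lists of rows).
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1], [0, 1]]   # a shear along one axis
B = [[1, 0], [1, 1]]   # a shear along the other axis
print(matmul(A, B))    # [[2, 1], [1, 1]]
print(matmul(B, A))    # [[1, 1], [1, 2]]
```

The two products differ, so the order of the factors matters.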
Matrix multiplication in C: on this page you can see many examples of matrix multiplication. (Posts about matrix multiplication written by Sean; by Marco Taboga, PhD.) In the matrix-chain problem, the task is not actually to perform the multiplications, but merely to decide in which order to perform them. The multiplication of a matrix $A$ by a matrix $B$ to yield a matrix $C$ is defined only when the number of columns of the first matrix $A$ equals the number of rows of the second matrix $B$. This is a matrix multiplication utility I developed as part of my project work at college.

Matrix addition and scalar multiplication: according to the labeling convention, the entries of the matrix $A$ above are $$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix},$$ and in general the $m \times n$ matrix $A$ has its entries labeled $a_{ij}$. Following normal matrix multiplication rules, an $n \times 1$ vector is expected, but I simply cannot find any. Two matrices are equal when each corresponding element is equal.

We provide a new hybrid parallel algorithm for shared-memory fast matrix multiplication. Device memories and matrix multiplication: global, constant, and shared memories; CUDA variable type qualifiers; and matrix multiplication as an application of tiling. Matrix Multiply, Power Calculator: solve matrix multiply
and power operations step-by-step. JAMA is a basic linear algebra package for Java. (A C program example of matrix chain multiplication; this article comprises a matrix multiplication program written in Python with sample input and sample output.)

Abstract: We implement a promising algorithm for sparse-matrix sparse-vector multiplication (SpMSpV) on the GPU. Matrix multiplication is the most expensive operation involved; the number of computations to be performed for the multiplication with a Givens matrix for the $i$th column is $6(n - i + 1)$.

Lecture 2, Matrix Operations: transpose, sum and difference, scalar multiplication; matrix multiplication and the matrix-vector product; the matrix inverse. In OpenGL, the current matrix is determined by the current matrix mode (see glMatrixMode). Moreover, the calculator computes the power of a square matrix, with applications to Markov chain computations. Exercise: write a C program to read the elements of a matrix and perform scalar multiplication of the matrix.

Matrix multiplication has significant applications in the areas of graph theory, numerical algorithms, and signal processing. It is important to realize that you can use "dot" for both left- and right-multiplication of vectors by matrices. Before you can even attempt to perform matrix multiplication, you must be sure that the last dimension of the first matrix is the same as the first dimension of
the second matrix. If we grab a matrix from a previous section, this can be easily explained. Graphing calculators such as the TI-83 and TI-84 are able to do many different operations with matrices, including multiplication. In NumPy, if either argument is N-D, $N > 2$, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.

Banded matrix-vector multiplication. Matrix multiplication in C: we can add, subtract, multiply and divide two matrices. To multiply a row vector by a column vector, the row vector must have as many columns as the column vector has rows. (The compiler in the MinGW suite would use the C11 standard by default, which I realised after I read the documentation.) We can multiply a matrix by a number (also called a scalar). For some reason, the following brute-force approach is faster by about 10% — this exercise surprised me a little bit. Use commas or spaces to separate values in one matrix row, and a semicolon or new line to separate different matrix rows.
Non-square matrices do not have inverses. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. The authors [9, 10] designed a hybrid matrix format, HYB (a hybrid of ELL and COO). Unlike the other two kinds of multiplication, the cross product is only defined for three-dimensional vectors. The transpose of a matrix is a new matrix whose rows are the columns of the original.

Is it true, and under what conditions? (Added: trying to recreate the answer in R.) You can write either multiply(a, b) or a * b. For some matrices $A$ and $B$ we have $AB = BA$. Suppose you have two groups of vectors, $\{a_1, \dots, a_m\}$ and $\{b_1, \dots, b_n\}$. One implementation applies Strassen's method until a predetermined cutoff size of the seven sub-matrices, after which Winograd's algorithm takes over. In Fortran, MATRIX_A is an array of INTEGER, REAL, COMPLEX, or LOGICAL type, with a rank of one or two. You can use fractions, for example 1/3. There is a condition on $u$ and $v$, namely that they are linearly independent. With no parentheses, the order of operations is left to right, so A*B is calculated first,
which forms a 500-by-500 matrix. If your data is in column-major order, you can tell MPSMatrixMultiplication to transpose the matrix before doing the multiplication. More generally, matrix multiplication can be defined for entries in a ring or even a semiring. There is one slight problem, however. One way to look at it is that the result of matrix multiplication is a table of dot products for the pairs of vectors making up the rows of one matrix and the columns of the other. Now, matrix multiplication is a human-defined operation that just happens — in fact all operations are — to have neat properties.

Below is a program on matrix multiplication. (Stormy Attaway, in Matlab (Second Edition), 2012.) (Douglas, April 23, 2001.) Abstract: routines callable from FORTRAN and C are described which implement matrix-matrix multiplication. Abstract: This paper presents a method to analyze the powers of a given trilinear form (a special kind of algebraic construction, also called a tensor) and obtain upper bounds on the asymptotic complexity of matrix multiplication.

Matrix multiplication in Java. (Grey Ballard, Aydin Buluc, James Demmel, Laura Grigori, Benjamin Lipshitz, Oded Schwartz, Sivan Toledo.) October 12, 2002, MULTIPLICATION
MATRIX: the history of this matrix goes back to the '70s, when my wife and I operated an individual learning…. We're considering element-wise multiplication versus matrix multiplication. Clusters are used in many scientific applications. Matrix chain multiplication (the Matrix Chain Ordering Problem, MCOP) is an optimization problem: find the most efficient way to multiply a given sequence of matrices. In the MapReduce formulation, we need to tag the map() function output with the position so the reduce() function can identify the components of the different vectors.

This forum may not be the best place for a discussion of the many issues involved in performance number-crunching, but I'd very much appreciate comments, suggestions, etc. Parallel matrix multiplication: assume $p$ is a perfect square; each processor gets an $n/\sqrt{p} \times n/\sqrt{p}$ chunk of data; organize the processors into rows and columns. For example, X = [[1, 2], [4, 5], [3, 6]] would represent a 3x2 matrix.

Matrix multiplication — the row and column picture. The Wolfram Language's matrix operations handle both numeric and symbolic matrices, automatically accessing large numbers of highly efficient algorithms.
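The MCOP has a classic $O(n^3)$ dynamic-programming solution; here is a sketch (the function name and the example dimensions are mine):

```python
def matrix_chain_cost(dims):
    # The i-th matrix in the chain has shape dims[i-1] x dims[i]; return
    # the minimum number of scalar multiplications for the whole chain.
    n = len(dims) - 1                      # number of matrices
    cost = [[0] * n for _ in range(n)]     # cost[i][j]: chain A_i .. A_j
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return cost[0][n - 1]

# For shapes 10x30, 30x5, 5x60: (A1 A2) A3 costs 4500 scalar
# multiplications, while A1 (A2 A3) would cost 27000.
print(matrix_chain_cost([10, 30, 5, 60]))  # 4500
```

The ordering alone changes the work by a factor of six in this example, which is why the problem is worth solving before doing any multiplying.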
(Shruti Kaushik.) Multiplication without tiling. In NumPy's matmul, after matrix multiplication the prepended 1 is removed. You can multiply matrices in Excel thanks to the MMULT function. The relevant properties include the associative property, the distributive property, the zero and identity matrix properties, and the dimension property. This subprogram takes two matrices as parameters and returns their matrix product.

Matrices are frequently used in programming: to represent graph data structures, in solving systems of linear equations, and in many other applications. This lecture introduces matrix multiplication, one of the basic algebraic operations that can be performed on matrices. Hi, I want to create a VI that performs matrix multiplication for two input matrices A and B. Given some matrices to multiply, determine the best order to multiply them so you minimize the number of single-element multiplications.

A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and multiplication are defined. We need to check the compatibility condition in code rather than ignore it. Matrices to be multiplied do not need to be of the same order; by definition, the number of columns of the first matrix must equal the number of rows of the second matrix, since otherwise the row elements of the first matrix could not be paired with the column elements of the second. Sparse Matrix Multiplication on a Field-Programmable Gate Array: A Major Qualifying
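The associative and identity properties listed above can be spot-checked numerically; a self-contained sketch (the helper names and test matrices are mine):

```python
def matmul(A, B):
    # Naive product of an m x n and an n x p matrix (lists of rows).
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]

# Associativity: (AB)C == A(BC); identity: AI == IA == A.
print(matmul(matmul(A, B), C) == matmul(A, matmul(B, C)))  # True
print(matmul(A, identity(2)) == A)                         # True
```

A few random examples are not a proof, of course, but they catch most implementation bugs.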
Project Report submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE. CiteSeerX - Document Details (Isaac Councill, Lee Giles, Pradeep Teregowda): In this paper we present a SIMD algorithm for n n matrix multiplication on a hypercube of p processors, with time complexity of O( p =3 ), and 2 < 3. Many states have gone to a group of consistent key standards for each grade level. Scalar multiplication, in which a single number is multiplied with every entry of a matrix; multiplication of an entire matrix by another entire matrix. For the rest of the page, matrix multiplication will refer to this second category. The program follows a basic encryption algorithm that relies on mathematical properties of matrices, such as row operations, matrix multiplication, and invertible matrices. Given a sequence of matrices, find the most efficient way to multiply these matrices together. When multiplying matrices, we first need to ensure that the dimensions are compatible: the number of columns of the first matrix must equal the number of rows of the second. In this paper, we show that novel fast matrix multiplication algorithms can significantly outperform vendor implementations of the classical algorithm and Strassen's fast algorithm on modest problem sizes and shapes. October 12, 2002 MULTIPLICATION MATRIX The history of this matrix goes back to the ‘70’s when my wife and I operated an individual learning. Matrix Multiplication. First let's make some data: # Make some data a = c(1,2,3) b = c(2,4,6) c = cbind(a,b) x = c(2,2,2) If we look at the output (c and x), we can see that c is a 3x2…. Study guide and practice problems on 'Matrix multiplication'. We want to define addition of matrices of the same size, and multiplication of certain "compatible" matrices. In this notebook, we'll
be using Julia to investigate the efficiency of matrix multiplication algorithms. How to Multiply Matrices Faster. In the interests of understanding the underlying properties of the images I’m using as stimuli, I’ve been trying to learn more about the matrix transformations commonly used for image compression and image manipulation. Although it may look confusing at first, the process of matrix-vector multiplication is actually quite simple. The size of the matrix, as a block, is defined by the number of Rows and the number of Columns. The applications of matrices often involve the multiplication of two matrices, which requires rules for combining the elements of the matrices. Stormy Attaway, in Matlab (Second Edition), 2012. To multiply two matrices in C++ programming, first ask the user to enter the two matrices, then start multiplying the two matrices and store the multiplication result inside a variable, say sum, and finally store the value of sum in the third matrix, say mat3. Before this, you can check the different types of arrays in Java, get to know how to declare and define arrays, and also get practice with adding two matrices. In other words, to multiply an m×n matrix by an n×p matrix, the ns must be the same, and the result is an m×p matrix. (The split happened at only one matrix, which requires zero multiplications.) Parallel Sparse Matrix-Vector and Matrix-Transpose-Vector Multiplication Using Compressed Sparse Blocks, Aydın Buluç, [email protected] You probably know what a matrix is already if you are interested in
matrix multiplication. I have therefore written a matrix-vector multiplication example that needs 13 seconds to run (5 seconds with. The previous seating chart example uses a 1 (or yes) if the seat is occupied and a 0 (or no) if the seat is unoccupied. Here we discuss the properties in detail. A and B must either be the same size or have sizes that are compatible (for example, A is an M-by-N matrix and B is a scalar or 1-by-N row vector). Multiplication of a matrix by a scalar. Before we go much farther, if you don’t know how matrix multiplication works, then check out Khan Academy: spend the 7 minutes, then work through an example or two, and make sure you have the intuition of how it works. As demonstrated above, in general AB ≠ BA. (n^2.376: Coppersmith-Winograd, 1990.) The efficiency of matrix multiplication is a popular research topic given that matrices comprise large data in computer applications and other fields of study. Multiplying matrices - examples. We will illustrate matrix multiplication or matrix product by the following example. Take free online matrix multiplication classes to improve your skills and boost your performance in school. I did not expect that gcc (GCC 6. The definition of matrix multiplication indicates a row-by-column multiplication, where the entries in the i-th row of A are multiplied by the corresponding entries in the j-th column of B and the results are then added. Springer LNCS, 1984. Python Matrix Multiplication Program - here you will learn how to multiply one matrix by another matrix and print the multiplication result of the third matrix in python. It is built deeply into the R language. Furthermore, you can apply matrix operations
such as addition, subtraction, multiplication and division. Matrix multiplication means multiplication of two different arrays; in Excel we have an inbuilt function for matrix multiplication, the MMULT function: it takes two arrays as arguments and returns the product of the two arrays, given that the number of columns of the first array equals the number of rows of the second. One question: Why is the column of the first (left-hand) matrix colored red, along with the row of the second (right-hand) matrix? MATMUL can do this for a variety of matrix sizes, and for different arithmetics (real, complex, double precision, integer, even logical!). Multiplication of a matrix by another matrix. Important: We can only multiply matrices if the number of columns in the first matrix is the same as the number of rows in the second matrix. Here you can perform matrix multiplication with complex numbers online for free. Just like addition works only for matrices of the same size, there are conditions for when two matrices can be multiplied, but in this case it is a little bit more complicated. With no parentheses, the order of operations is left to right, so A*B is calculated first, which forms a 500-by-500 matrix. To do so, we are taking input from the user for the row number, column number, first matrix elements and second matrix elements. If you want to try to multiply two matrices (x and y) by each other, you'll need to make sure that the number of columns in x is equal to the number of
rows in y, otherwise the equation won't work properly. As an example you'll be able to solve a series of simultaneous linear equations using Mathcad’s. In recommendation systems, the user/item rating matrix, which shows the extent of how much a user likes an item (e. Matrix algebra for beginners, Part I: matrices, determinants, inverses. Jeremy Gunawardena, Department of Systems Biology, Harvard Medical School, 200 Longwood Avenue, Cambridge, MA 02115, USA. Matrix multiplication is the "messy type" because you will need to follow a certain set of procedures in order to get it right. Matrix multiplication in C. Producing a single matrix by multiplying a pair of matrices (may be 2D / 3D) is called matrix multiplication; it is a binary operation in mathematics. Here is how it works: 1) for 2-D arrays, it returns the normal product; 2) for dimensions > 2, the product is trea. The manual method of multiplication involves a large number of calculations, especially when it comes to higher orders of matrices, whereas a program in C can carry out the operations with short, simple and understandable code. Strassen’s Matrix Multiplication on GPUs, Junjie Li, Sanjay Ranka, Sartaj Sahni, fjl3, ranka, [email protected] The MMULT function returns the matrix product of two arrays. Just like numbers and equations, you are expected to be able to manipulate matrices and perform arithmetic on multiple matrices. This computation requires $3n^2$ operations, while the operation count for the full matrix product is $4n^3$. 2 in the most recent edition (6e) of Finite Mathematics and Section 4. Hi everyone, I am new to this site and also new to the programming world; can anybody help me write C code for matrix multiplication (without using pointers)? Below are the common core standards dealing with basic multiplication. If both are vectors it will return the inner product. A matrix is just a
two-dimensional group of numbers. He told me about the work of Jacques Philippe Marie Binet (born February 2, 1786 in Rennes and died May 12, 1856 in Paris), who seemed to be recognized as the first to derive the rule for multiplying matrices, in 1812. Pupils use the interactive to create a matrix multiplication problem using vectors. Each element in the result matrix C is the sum of element-wise multiplication of a row from A and a column from B. One way to look at it is that the result of matrix multiplication is a table of dot products for pairs of vectors making up the entries of each matrix. Operation with Matrices in Linear Algebra. For example, weather forecasting has to be done in. An efficient k-way merge lies at the heart of finding a fast parallel SpMSpV algorithm. Matrix addition, multiplication, inversion, determinant and rank calculation, transposing, bringing to diagonal or triangular form, exponentiation, and solving of systems of linear equations with solution steps. Asking why matrix multiplication isn't just componentwise multiplication is an excellent question: in fact, componentwise multiplication is in some sense the most "natural" generalization of real multiplication to matrices: it satisfies all of the axioms you would expect (associativity, commutativity, existence of identity and inverses (for matrices with no 0 entries), distributivity over addition). Matrix multiplication is likely to be a source of a headache when you fail to grasp conditions and motives behind them. Determine the way the matrices are fully parenthesized. Matrix Chain multiplication and algorithmic solution. If both arguments are 2-D they are multiplied like conventional
matrices. MATRIX_A: An array of INTEGER, REAL, COMPLEX, or LOGICAL type, with a rank of one or two. Compton LA, Johnson WC Jr. plain old numbers like 3, or -5. This VHDL project aims to develop and implement a synthesizable matrix multiplier core, which is able to perform matrix calculations for matrices of size 32x32. This algebra lesson explains how to do scalar multiplication - and explains what a scalar is. Matrix Formulas. It doesn't just give you the answer the way your calculator would, but will actually show you the "long hand" way to multiply two numbers. Learn: In this article, we will see how to perform matrix multiplication in python. The Wolfram Language's matrix operations handle both numeric and symbolic matrices, automatically accessing large numbers of highly efficient algorithms. But to multiply a matrix by another matrix we need to do the "dot product" of rows and columns; what does that mean? Lecture Notes CMSC 251, Lecture 26: Chain Matrix Multiplication (Thursday, April 30, 1998). Read: Section 16. Using this online calculator, you will receive a detailed step-by-step solution to your problem, which will help you understand the algorithm of matrix multiplication. Linear Algebra. Hoare (quoted by Donald Knuth). 3x3 Matrix Multiplication Calculator. However, even when matrix multiplication is possible in both directions, results may be different. In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra (or
more generally, a module in abstract algebra). This matrix multiplication calculator helps you understand how to do matrix multiplication. Here’s a fact that has been rediscovered many times in many different contexts: the way you parenthesize matrix products can greatly change the time it takes to compute the product. Matrix multiplication using arrays is a very basic exercise for beginners, to understand the concept of a multidimensional matrix. The matrix A is an n x m matrix and matrix B is an m x p matrix. The behavior depends on the arguments in the following way.
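Two recurring themes in the excerpts above, the row-by-column rule with its dimension condition and the problem of choosing the cheapest parenthesization of a matrix chain, can both be sketched in a few lines of plain Python. This is an illustrative sketch with made-up function names, not any particular library's API:

```python
def matmul(A, B):
    """Row-by-column product of two matrices stored as lists of lists.

    A is m x n, so B must be n x p; the result is m x p."""
    if len(A[0]) != len(B):
        raise ValueError("columns of A must equal rows of B")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def matrix_chain_cost(dims):
    """Minimum number of scalar multiplications to compute A1*...*An,
    where Ai has shape dims[i-1] x dims[i] (classic O(n^3) DP)."""
    n = len(dims) - 1
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):                  # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(cost[i][k] + cost[k + 1][j]
                             + dims[i - 1] * dims[k] * dims[j]
                             for k in range(i, j))  # try every split point
    return cost[1][n]

A = [[1, 2, 3],
     [4, 5, 6]]                      # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]                       # 3 x 2
assert matmul(A, B) == [[58, 64], [139, 154]]

# (10x30)(30x5)(5x60): the order (A1*A2)*A3 costs 1500 + 3000 = 4500
assert matrix_chain_cost([10, 30, 5, 60]) == 4500
```

Note how the chain example makes the parenthesization point concrete: the other order, A1*(A2*A3), would cost 9000 + 18000 = 27000 scalar multiplications for the same product.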
# Uniqueness of unitarily upper triangular matrix

Suppose A is a matrix in $\mathbb{C}^{n\times n}$ with n distinct eigenvalues $\lambda_1,\dots,\lambda_n$. Then by Schur's theorem, for any fixed order of $\lambda_1,\dots,\lambda_n$, we know there exists a unitary matrix $U$ s.t. $U^*AU$ is an upper triangular matrix with $\lambda_1,\dots,\lambda_n$ in the required order on the diagonal. The question is: is $U$ unique? If not, what freedom do we have in choosing $U$?

I know how to handle the case where $A$ is unitarily diagonalizable (diagonal rather than upper triangular): then $U^*AU=\,\text{diag}(\lambda_i)\iff AU=U\,\text{diag}(\lambda_i)=[\lambda_1U_1,\dots,\lambda_nU_n]$. So the ith column of $U$ must be an eigenvector of $\lambda_i$ with $|U_i|=1$. Therefore we can choose $U$ up to right-multiplication by a diagonal matrix whose diagonal entries have norm 1. But this method does not seem to fit the upper triangular case.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9879462179994595, "lm_q1q2_score": 0.8511534209378229, "lm_q2_score": 0.8615382147637196, "openwebmath_perplexity": 100.74582408207894, "openwebmath_score": 0.9731680750846863, "tags": null, "url": "https://math.stackexchange.com/questions/1952939/uniqueness-of-unitarily-upper-triangular-matrix" }
Key fact: if $BT=TB$ and $T$ is upper triangular with distinct diagonal entries, then $B$ is upper triangular (proof below).

Now, if $$UTU^*=VTV^*,$$ then $V^*UT=TV^*U$. So, by the key fact, $V^*U$ is an upper triangular unitary. As such, it is diagonal. Thus, $V$ is of the form $UD$ with $D$ diagonal and $|D_{kk}|=1$. In other words, the situation you observed for diagonal $A$ still occurs in general (it is essential that the diagonal entries are distinct).

Proof that $TB=BT$ implies $B$ upper triangular. Consider the $n,1$ entry: $$(TB)_{n1}=\sum_kT_{nk}B_{k1}=T_{nn}B_{n1},$$ while $$(BT)_{n1}=\sum_kB_{nk}T_{k1}=B_{n1}T_{11}.$$ As $T_{11}\ne T_{nn}$, we deduce that $B_{n1}=0$. Now consider the $n,2$ entry: $$(TB)_{n2}=\sum_kT_{nk}B_{k2}=T_{nn}B_{n2},$$ while $$(BT)_{n2}=\sum_kB_{nk}T_{k2}=B_{n1}T_{12}+B_{n2}T_{22}=B_{n2}T_{22}.$$ As $T_{22}\ne T_{nn}$, we get that $B_{n2}=0$. Continuing inductively, after showing that $B_{n1},\ldots,B_{nr}=0$, we have $$(TB)_{n,r+1}=\sum_kT_{nk}B_{k,r+1}=T_{nn}B_{n,r+1},$$ while $$(BT)_{n,r+1}=\sum_kB_{nk}T_{k,r+1}=\sum_{k=1}^{r+1}B_{nk}T_{k,r+1}=B_{n,r+1}T_{r+1,r+1}.$$ As $T_{r+1,r+1}\ne T_{nn}$, we get that $B_{n,r+1}=0$. Now start doing the same with the $n-1,1$ entry, then $n-1,2$, etc.
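The residual freedom identified here, replacing $U$ by $UD$ with $D$ a diagonal unitary, is easy to probe numerically. The following NumPy sketch (purely illustrative, not part of the proof) builds a Schur pair and checks that $UD$ still triangularizes $A$ with the eigenvalues in the same order:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A Schur pair: U unitary (via QR of a random complex matrix),
# T upper triangular with distinct diagonal entries.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
T = np.triu(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
T[np.diag_indices(n)] = np.arange(1, n + 1)    # distinct eigenvalues 1, ..., n
A = U @ T @ U.conj().T

# The claimed freedom: V = U D with D a diagonal unitary (random phases).
D = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, size=n)))
V = U @ D

S = V.conj().T @ A @ V        # equals D* T D
assert np.allclose(np.tril(S, -1), 0)          # still upper triangular
assert np.allclose(np.diag(S), np.diag(T))     # same eigenvalue order
```

Note that the off-diagonal entries of $S = D^*TD$ do change, which is why the uniqueness statement concerns $U$ for a fixed $T$, not the triangular factor itself.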
# Show that $\sum^\infty_{k=0} \frac{2(-1)^k}{(2k+1)\pi\cosh[(2k+1)\pi/2]}=1/4$

Title says it all. Well, maybe some backstory. Flipping through my past notebooks, I found this: $$\vdots$$ $$= \sum^\infty_{k=0} \frac{2(-1)^k}{(2k+1)\pi\cosh[(2k+1)\pi/2]}\\=\frac14\quad(?)$$ [end of page] Ah, yes. My engineering numerical analysis professor gave me one homework problem: Make a contour plot of the steady-state temperature distribution of a square plate, with the temperature of one of its sides maintained at $1$ unit, the other three sides maintained at $0$ units. The steady-state temperature distribution $T$ follows $\nabla^2T=0$. I was calculating the temperature of the center of the plate analytically to sanity-check my program output, when I encountered this weird-looking sum. I remember I made a spreadsheet to numerically evaluate the sum, and Excel said 0.2500000... So I jotted down $\frac14$ with a question mark beside it. The next page is filled with my fruitless attempts to find the exact sum, to convince myself that the sum is exactly $1/4$. I posted a bounty on my social media accounts: whoever can show the proof (or disproof) gets free beer. After a few days, the bounty was still unclaimed, but I realized, "duuuuuhhhh! I could just exploit the symmetry. If I rotate the plate $90^\circ$ three times and superpose all the temperature distributions with the original one, the temperature is $1$ unit everywhere. By linearity and symmetry, the temperature of the center point is therefore $1/4$." I treated myself to a few rounds of beer, and moved on. Case closed. Looking again at this, I now realize that I might be missing out on some summation tactics I never knew. Can anybody show me how this is equal to $1/4$ just by algebraic manipulation?
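The spreadsheet experiment mentioned in the question is easy to reproduce; here is a quick numerical sanity check in plain Python (truncating at 20 terms is safe, since the terms decay like $e^{-k\pi}$):

```python
import math

# Partial sum of 2(-1)^k / ((2k+1) * pi * cosh((2k+1) * pi / 2))
s = sum(2 * (-1)**k / ((2*k + 1) * math.pi * math.cosh((2*k + 1) * math.pi / 2))
        for k in range(20))

assert abs(s - 0.25) < 1e-12
```

The k = 0 term alone is already about 0.2537; the alternating, exponentially shrinking corrections pull the total onto 1/4.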
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9879462226131666, "lm_q1q2_score": 0.8511534196443029, "lm_q2_score": 0.8615382094310357, "openwebmath_perplexity": 457.59766902816517, "openwebmath_score": 0.884986937046051, "tags": null, "url": "https://math.stackexchange.com/questions/2095489/show-that-sum-infty-k-0-frac2-1k2k1-pi-cosh2k1-pi-2-1-4/2096064" }
• +1 for the story, +1 more for the solution, and +1 even more for trying to learn greater things. – Simply Beautiful Art Jan 12 '17 at 23:22
• The residue theorem might come in useful. – Michael M Jan 13 '17 at 0:23
• Thanks for the suggestion and the upvote. It will probably take me a while to work that out, as I never took a course in analysis. But now I realize that math is wonderful; there are always more beautiful things to learn. – f1garo Jan 13 '17 at 1:25
• – Math-fun Jan 13 '17 at 9:49

I am busy as hell today but this is such a nice problem (and amusingly posed question) I can't resist... :) The key idea here is simple: look for a function in the complex plane $f(z)$ which has poles at the correct values $z_n$ and integrate it over an appropriate contour $C$. It turns out that the function (which is also the simplest guess I can imagine) $$f(z)=\frac{1}{z \cos(z)\cosh(z)}$$ is the correct choice. Its poles are given by $z_0=0$, $z_k=\frac{(2 k-1)\pi}{2}$ and $\tilde{z}_k=i \frac{(2 k-1)\pi}{2}$ with $k \in \mathbb{Z}\setminus\{0\}$. We furthermore have $$\text{res}(z_0)=1 \\ \text{res}(z_k)=\text{res}(\tilde{z}_k)=\frac{2}{\pi}\frac{(-1)^{k}}{(2k-1)\cosh\left(\pi\frac{2k-1}{2}\right)}\\ \text{res}(z_{-k})=\text{res}(z_k)$$ Since $z\cos(z)\cosh(z)\sim |z| e^{a |z|}$ with $\Re(a)>0$ as $|z|\rightarrow\infty$, we can choose as the integration contour $C$ a big circle traversed anticlockwise, with radius chosen such that we hit no pole of $f(z)$ (thanks to @Dr. MV for rigorizing this point). Then, applying the residue theorem, we get in the limit of infinite radius $$\oint_Cf(z)dz=2\pi i\text{res}(z_0)+4\pi i \sum_{k\geq1}\text{res}(z_k)+4\pi i\sum_{k\geq1}\text{res}(\tilde{z}_k)=0$$ or $$8\pi i \sum_{k\geq1}\text{res}(z_k)=-2\pi i$$ which can be rewritten as (after shifting $k\rightarrow k+1$)
$$\sum_{k\geq0}\frac{2}{\pi}\frac{(-1)^{k}}{(2k+1)\cosh\left(\pi\frac{2k+1}{2}\right)}=\frac{1}{4}\\ \textbf{Q.E.D.}$$

To make things clearer, here is a sketch of the integration contour $\color{blue}{C}$ and the singularities $\color{red}{z_n}$

• (+1) This is extremely close to Ramanujan's original argument; I'd say it is perfectly fine! – Jack D'Aurizio Jan 13 '17 at 12:51
• @JackD'Aurizio thanks! This is a relief... still wondering how this complicated sum can be derived with nearly zero effort – tired Jan 13 '17 at 12:53
• A big (+1). I was working on this last night, had identified $f(z)$ and then floundered to find a suitable contour. Now, I feel stupid. Well done my friend! -Mark – Mark Viola Jan 13 '17 at 14:42
• I was trying to isolate the poles on the real axis by taking a rectangular contour in hopes that the top and bottom halves would cancel. They didn't. I even recognized the symmetry between the residues on the real and imaginary axes. For some reason (it was getting late and I was tiring) I just couldn't see that the obvious "big circle" contour was the way forward. One note: we must take the limit with discrete radii to ensure the contour doesn't hit a pole. – Mark Viola Jan 13 '17 at 14:55
• In general, the function $\pi \sec(\pi z) f(z)$ can be used to evaluate infinite sums of the form $\sum_{n \in \mathbb{Z}} (-1)^n f(n+1/2)$, while the function $\pi \tan(\pi z) f(z)$ can be used to evaluate infinite sums of the form $\sum_{n \in \mathbb{Z}} f(n+1/2)$. In some cases, of course, the contour integral won't vanish in the limit. – Random Variable Jan 13 '17 at 19:56
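The residue bookkeeping in this answer admits a quick numerical sanity check: with $\operatorname{res}(z_0)=1$ and the fourfold symmetry of the remaining poles, the combination $1+4\sum_{k\ge1}\operatorname{res}(z_k)$ should vanish. A short plain-Python check (illustrative only):

```python
import math

def res(k):
    """Residue of 1/(z cos z cosh z) at z_k = (2k-1)*pi/2, per the formula above."""
    return (2 / math.pi) * (-1)**k / ((2*k - 1) * math.cosh((2*k - 1) * math.pi / 2))

# res(z0) + 2*sum res(z_k) + 2*sum res(z~_k) = 0; using res(z~_k) = res(z_k)
# and res(z_{-k}) = res(z_k), this collapses to 1 + 4*sum over k >= 1.
total = 1 + 4 * sum(res(k) for k in range(1, 25))
assert abs(total) < 1e-12
```

The sum of the real-axis residues comes out to -1/4, exactly the value needed to cancel the residue at the origin.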
As mentioned by Zucker in The summation of series of hyperbolic functions: in Question 358, J. Indian Math. Soc., 4 (1912), p. 78, Ramanujan proves (through the residue theorem) that $$\sum_{n\geq 1}(-1)^{n+1}(2n-1)^{4m-1}\,\text{sech}\left[(2n-1)\frac{\pi}{2}\right] = 0$$ holds for every $m\geq 1$. Your identity is the $m=0$ member of the same family: multiplied by $\pi/2$ it states $$\sum_{n\geq 1}(-1)^{n+1}(2n-1)^{-1}\,\text{sech}\left[(2n-1)\frac{\pi}{2}\right] = \frac{\pi}{8}.$$ It is truly remarkable to know that such an identity can be proved through Dirichlet's problem about the eigenvalues of the Laplacian operator. Can you be more specific about the relation between such a series and the physical problem?$^{(0)}$ A brilliant piece of math might arise from there.

$^{(0)}$Update: I found the connection - such series are related to the Green's function of a square.

I will show an interesting technique for deriving a similar identity.
From the Weierstrass product $$\cosh(\pi z/2) = \prod_{m\geq 0}\left(1+\frac{z^2}{(2m+1)^2}\right)\tag{1}$$ by applying $\frac{d^2}{dz^2}\log(\cdot)$ to both sides we get: $$\frac{\pi^2}{8\cosh^2(\pi z/2)}=\sum_{m\geq 0}\frac{(2m+1)^2-z^2}{((2m+1)^2+z^2)^2}\tag{2}$$ If we replace $z$ with $(2n+1)$ and sum over $n\geq 0$, $$\sum_{n\geq 0}\frac{\pi^2}{8\cosh^2(\pi(2n+1)/2)} = \sum_{n\geq 0}\sum_{m\geq 0}\frac{(2m+1)^2-(2n+1)^2}{((2m+1)^2+(2n+1)^2)^2} \tag{3}$$ where the RHS of $(3)$ can also be written as $$\sum_{n\geq 0}\sum_{m\geq 0}\int_{0}^{+\infty}\cos((2n+1)x)x e^{-(2m+1)x}\,dx = \sum_{n\geq 0}\int_{0}^{+\infty}\frac{x\cos((2n+1)x)}{2\sinh(x)}\,dx\tag{4}$$ or, by exploiting integration by parts, $$\sum_{n\geq 0}\int_{0}^{+\infty}\frac{x\cosh(x)-\sinh(x)}{2\sinh^2(x)}\cdot\frac{\sin((2n+1)x)}{2n+1}\,dx\tag{5}$$ On the other hand, $\sum_{n\geq 0}\frac{\sin((2n+1)x)}{2n+1}$ is the Fourier series of a $2\pi$-periodic rectangle wave that equals $\frac{\pi}{4}$ over $(0,\pi)$ and $-\frac{\pi}{4}$ over $(\pi,2\pi)$. That implies, by massive cancellation: $$\sum_{n\geq 0}\frac{1}{\cosh^2(\pi(2n+1)/2)}=\frac{1}{2\pi}.\tag{6}$$ On the other hand, the Fourier transform of $\frac{1}{\cosh^2(\pi x)}$ is given by $\frac{s\sqrt{8\pi}}{\sinh(\pi s)}$. By Poisson's summation formula, $$\sum_{n\geq 1}\frac{(-1)^{n+1}n}{\sinh(\pi n)}=\frac{1}{4\pi}.\tag{7}$$
• Hey Jack, since I can't read the source you mentioned, could you please check whether their derivation fits my approach? I'm quite unsure whether I made a big mistake here (it seems too easy to be true). Thanks! – tired Jan 13 '17 at 12:38
• For the physical background you might be interested in this presentation: cedricthieulot.net/diva/05.pdf. The symmetry considerations mentioned by the OP are essentially saying that, by superposition, adding four plates, each with one side at a different temperature than the other three (and center temperature given by the complicated sum $S$), gives one plate at constant temperature, whose core temperature is trivially one. From this $S=1/4$ follows directly – tired Jan 13 '17 at 13:03
• It looks like my predictions were true: sciencedirect.com/science/article/pii/S0955799706000683 – Jack D'Aurizio Jan 13 '17 at 13:12
• @tired: thank you. I always find it very pleasing to work on some problem with you. – Jack D'Aurizio Jan 13 '17 at 13:13
• @JackD'Aurizio Yeah, this is a nice proof of concept that physicists and mathematicians CAN work together ;-) – tired Jan 13 '17 at 13:17
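The closed forms $(6)$ and $(7)$ from the answer above can be confirmed numerically in a few lines (plain Python; the truncation at 20 terms is arbitrary, since the tails decay exponentially):

```python
import math

# Identity (6): sum of sech^2 over odd half-integer multiples of pi
s6 = sum(1 / math.cosh(math.pi * (2*n + 1) / 2)**2 for n in range(20))
assert abs(s6 - 1 / (2 * math.pi)) < 1e-12

# Identity (7): alternating sum of n / sinh(pi * n)
s7 = sum((-1)**(n + 1) * n / math.sinh(math.pi * n) for n in range(1, 20))
assert abs(s7 - 1 / (4 * math.pi)) < 1e-12
```

Both partial sums lock onto their limits almost immediately: the first term of each already carries nearly all of the value, consistent with the "massive cancellation" noted in the derivation.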
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9879462226131666, "lm_q1q2_score": 0.8511534196443029, "lm_q2_score": 0.8615382094310357, "openwebmath_perplexity": 457.59766902816517, "openwebmath_score": 0.884986937046051, "tags": null, "url": "https://math.stackexchange.com/questions/2095489/show-that-sum-infty-k-0-frac2-1k2k1-pi-cosh2k1-pi-2-1-4/2096064" }
The graph of the function y = ax² + bx + c has a turning point at (-3,2) and passes through the point (0,5). Determine the values of a, b and c. I know this is only a basic question, but I really would appreciate your help. Thanks 2. $y = ax^2 + bx + c$, so $\frac{y}{a} = x^2 + \frac{b}{a}x + \frac{c}{a}$ $= x^2 + \frac{b}{a}x + \frac{b^2}{4a^2} + \frac{4ac- b^2}{4a^2} = \left({x +\frac{b}{2a}}\right)^2 + \frac{4ac- b^2}{4a^2}$ so, the turning point is at $\left({-\frac{b}{2a}, \frac{4ac-b^2}{4a}}\right)$ plug in and you will have 2 equations.. for the third equation: note: a point $(x_0,y_0)$ is on the curve $y = ax^2 + bx + c$ if and only if $y_0 = ax_0^2 + bx_0 + c$.. 3. Originally Posted by tim_mannire The graph of the function y = ax² + bx + c has a turning point at (-3,2) and passes through the point (0,5). Determine the values of a, b and c. I know this is only a basic question, but I really would appreciate your help. Thanks This approach should give you the same answer, but you will only have to solve simultaneous equations in two variables. You want the curve to pass through (0,5), so substitute x=0, y=5 into the equation to get y = 5 = c You want the curve to pass through (-3,2), so substitute x=-3, y=2 into the equation to get $2 = 9a-3b+5$ Then, knowing that at the turning point the derivative of y with respect to x is zero, you can write: $y'=0=2ax + b$ at $x=-3$ You now have two equations in two variables, which can be solved as simultaneous equations. 4. ## Here it is y=ax^2+bx+c Since it passes through (0,5), this point must satisfy the equation. Putting the values into the equation we get 5=a(0)^2+b(0)+c, which gives us c=5. Now, at the turning point, x=-b/2a (if you want to know why that is so, you may ask in a new thread). Since the turning point is (-3,2), therefore x=-b/2a=-3, or b=6a. Substituting this value into the equation we get y=ax^2+6ax+5. Now substituting (-3,2) we get 2=9a-18a+5, or a=1/3. Since b=6a, therefore b=2. So finally a=1/3, b=2, c=5
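The steps of the last answer translate directly into a quick check. This Python snippet (my addition, not from the thread) verifies that $a=1/3$, $b=2$, $c=5$ satisfy all three conditions:

```python
c = 5.0        # from substituting (0, 5):  5 = a*0^2 + b*0 + c
a = 1.0 / 3.0  # from 2 = 9a - 18a + 5 after using b = 6a
b = 6 * a      # from the turning-point condition x = -b/(2a) = -3

y = lambda x: a * x**2 + b * x + c
dy = lambda x: 2 * a * x + b      # derivative, zero at the turning point

print(a, b, c)
assert abs(y(0) - 5) < 1e-12   # passes through (0, 5)
assert abs(y(-3) - 2) < 1e-12  # passes through (-3, 2)
assert abs(dy(-3)) < 1e-12     # turning point at x = -3
```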
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9879462226131666, "lm_q1q2_score": 0.8511534196443029, "lm_q2_score": 0.8615382094310357, "openwebmath_perplexity": 588.4145077057931, "openwebmath_score": 0.6726120710372925, "tags": null, "url": "http://mathhelpforum.com/pre-calculus/42374-please-help-me-problem-urgent-thanks.html" }
5. Originally Posted by tim_mannire The graph of the function y = ax² + bx + c has a turning point at (-3,2) and passes through the point (0,5). Determine the values of a, b and c. I would really appreciate any help on this problem, thanks. First use the turning point form y = a(x - h)^2 + k. Here the turning point gives h = -3, k = 2. Substitute (0, 5) to get a. Now expand the turning point form to get the standard form. Edit: Don't double post! It wastes people's time! http://www.mathhelpforum.com/math-he...nt-thanks.html 6. Originally Posted by tim_mannire The graph of the function y = ax² + bx + c has a turning point at (-3,2) and passes through the point (0,5). Determine the values of a, b and c. I would really appreciate any help on this problem, thanks. Use the following formula: $y = a(x - p)^2 + q$ $y = a(x + 3)^2 + 2$ Passes through the point $(0;5)$ $5 = a(0 + 3)^2 + 2$ $3 = 9a$ $a = \frac{1}{3}$ $y = \frac{1}{3} (x+3)^2 + 2$ EDIT: Oops Mr F beat me to it... by 2 mins 7. Originally Posted by nikhil y=ax^2+bx+c. Since it passes through (0,5), this point must satisfy the equation, giving 5=a(0)^2+b(0)+c, so c=5. At the turning point x=-b/2a, and since the turning point is (-3,2), x=-b/2a=-3, or b=6a. Substituting into the equation gives y=ax^2+6ax+5; substituting (-3,2) gives 2=9a-18a+5, or a=1/3. Since b=6a, b=2. So finally a=1/3, b=2, c=5 Thank you very much. Very well answered. Much appreciated!
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9879462226131666, "lm_q1q2_score": 0.8511534196443029, "lm_q2_score": 0.8615382094310357, "openwebmath_perplexity": 588.4145077057931, "openwebmath_score": 0.6726120710372925, "tags": null, "url": "http://mathhelpforum.com/pre-calculus/42374-please-help-me-problem-urgent-thanks.html" }
# College Board Problem: Conservation of Momentum -> Conservation of Energy in a Spring In my AP Physics C class today, we ran into a problem written by College Board whose answer we disputed. The problem is as follows: A block of mass $$M=5.0 \ \mathrm{kg}$$ is hanging at equilibrium from an ideal spring with spring constant $$k=250 \ \mathrm{N/m}$$. An identical block is launched up into the first block. The new block is moving with a speed of $$v=5.0 \ \mathrm{m/s}$$ when it collides with and sticks to the original block. Calculate the maximum compression of the spring after the collision of the two blocks. According to the College Board answer key, the answer is $$0.5 \ \mathrm{m}$$ : $$p_1=p_2$$ $$Mv_0=(M+M)v_2$$ $$v_2=\frac{1}{2}v_0= \left (\frac{1}{2} \right)\left (5.0 \frac{m}{s}\right)$$ $$v_2=2.5 \frac{m}{s}$$ $$K_1 + U_1=K_2+U_2$$ $$\frac{1}{2}mv_2^2 +0=0+\frac{1}{2}kx_2^2$$ $$x_2=\sqrt{\frac{m}{k}}\,v_2= \sqrt{\frac{10 \ \mathrm{kg}}{250 \ \frac{N}{m}}} \left(2.5 \frac{m}{s}\right)$$ $$x_2=0.50 \ \mathrm{m}=50 \ \mathrm{cm}$$ However, half of us disputed this during class. We argued that, yes, $$U_2$$ includes $$\frac{1}{2}kx^2$$, but it also includes gravitational potential energy at the maximum compression (that is, when it compresses $$x$$ meters from equilibrium, the mass $$M$$ is $$x$$ meters higher above ground). Thus $$K_1+U_1=K_2+U_2$$ is $$\frac{1}{2}mv^2+0=0+\frac{1}{2}kx^2+mgx$$. When $$mgx$$ is included, $$x$$ is $$0.24 \ \mathrm{m}$$, not $$0.5 \ \mathrm{m}$$. My physics teacher reluctantly agreed with College Board but could not give a solid explanation why. He said he would e-mail College Board, but in the meantime, I would very much appreciate any input from people who know the answer.
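To make the disagreement concrete, here is a short Python sketch (my addition) that computes both numbers: College Board's gravity-free result and the class's value with the $mgx$ term, using $m = 10\ \mathrm{kg}$ for the combined blocks and $g = 9.8\ \mathrm{m/s^2}$:

```python
import math

m, k, v0, g = 5.0, 250.0, 5.0, 9.8
v = v0 / 2          # speed just after the perfectly inelastic collision
M = 2 * m           # combined mass of the stuck-together blocks

# College Board: (1/2) M v^2 = (1/2) k x^2, gravity ignored
x_cb = math.sqrt(M / k) * v
print(x_cb)                  # 0.5

# Class's version: (1/2) M v^2 = (1/2) k x^2 + M g x,
# i.e. (k/2) x^2 + M g x - (M/2) v^2 = 0; take the positive root
x_grav = (-M * g + math.sqrt((M * g) ** 2 + k * M * v**2)) / k
print(round(x_grav, 2))      # 0.24
```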
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9559813463747182, "lm_q1q2_score": 0.8511046750447819, "lm_q2_score": 0.8902942283051332, "openwebmath_perplexity": 513.1419705121123, "openwebmath_score": 0.7946702241897583, "tags": null, "url": "https://physics.stackexchange.com/questions/680196/college-board-problem-conservation-of-momentum-conservation-of-energy-in-a-s" }
• I agree with your approach. It should be 2mgh since the mass rising is two blocks of 5 kg each. – Dan Dec 1, 2021 at 19:56 • Sorry I should have written that lol. When I plugged the numbers in and solved, I did make sure to plug in 10 kg (times 9.81 m/s/s times x). Dec 1, 2021 at 19:57 • Someone in class pointed out that perhaps because the question asks for MAXIMUM compression, they assume gravity does not exist. However, this still doesn't make sense. Assuming gravity does not exist (perhaps it is laid horizontally), the spring would have no elongation. When it is laid vertically, it elongates until the tension of the coil matches Mg. Thus, its equilibrium when horizontal (or simply without gravity) will be the length of the fully compressed spring. If the second block struck it elastically, the spring could not compress anymore. A compression of .5 meters is improbable. Dec 1, 2021 at 20:08 • Hello! It is preferable to type out screenshots or images of text; for formulae, one can use MathJax. Thanks! Dec 1, 2021 at 20:24 • Don't forget to include the initial elongation of the spring. Dec 1, 2021 at 20:36 I guess this is a homework/check-my-work problem, so by the letter of the law I should not answer, but I would argue there is broad interest in solving it correctly given that a supposedly reputable source is presenting an incorrect solution.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9559813463747182, "lm_q1q2_score": 0.8511046750447819, "lm_q2_score": 0.8902942283051332, "openwebmath_perplexity": 513.1419705121123, "openwebmath_score": 0.7946702241897583, "tags": null, "url": "https://physics.stackexchange.com/questions/680196/college-board-problem-conservation-of-momentum-conservation-of-energy-in-a-s" }
Here is how I would do this. Initially, the spring is stretched a distance $$d=mg/k$$ below its unstretched position. Choose the position of the hanging block as $$y=0$$, so the gravitational potential energy immediately after the collision is zero. In terms of the speed $$v=v_0/2$$ of the blocks after the collision and the mass $$m$$ of one block, the total energy immediately after the collision is \begin{align} E_i &= \frac{1}{2}(2m) v^2 + \frac{1}{2}kd^2\\ &= \frac{1}{4}mv_0^2 + \frac{1}{2}\frac{m^2g^2}{k}. \end{align} Let $$h$$ be the distance of the blocks above the unstretched position of the spring when the blocks are at their maximum height. At this point, the blocks are at rest, so their total energy is \begin{align} E_f &= \frac{1}{2}kh^2 + 2mg(h + d)\\ &= \frac{1}{2}kh^2 + 2mgh + \frac{2m^2g^2}{k}. \end{align} Using conservation of energy, \begin{align} &\frac{1}{4}mv_0^2 + \frac{1}{2}\frac{m^2g^2}{k} = \frac{1}{2}kh^2 + 2mgh + \frac{2m^2g^2}{k}\\ \rightarrow &\frac{1}{2}kh^2 + 2mgh + \frac{3}{2}\frac{m^2g^2}{k} - \frac{1}{4}mv_0^2 = 0. \end{align} We can solve this quadratic equation for $$h$$ to obtain \begin{align} h = -\frac{2mg}{k} + \sqrt{\frac{m^2g^2}{k^2} + \frac{mv_0^2}{2k}}. \end{align} In terms of the given numbers, $$v_0 = 5.0\,\text{m}/\text{s}$$, $$m=5.0\,\text{kg}$$, and $$k=250\,\text{N}/\text{m}$$, we get $$\boxed{h=15\,\text{cm}.}$$ Note that if we set $$g=0$$ so that there is no gravity, we get \begin{align} h = \sqrt{\frac{mv_0^2}{2k}} = \boxed{50\,\text{cm}.} \end{align} We are left to conclude that the author of the solution was likely in free-fall at the time of its writing.
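A numerical check of this closed form (my own addition, using $g = 9.8\ \mathrm{m/s^2}$) confirms both the energy balance and the no-gravity limit:

```python
import math

m, k, v0, g = 5.0, 250.0, 5.0, 9.8

# closed form for the height above the spring's unstretched position
h = -2 * m * g / k + math.sqrt((m * g / k) ** 2 + m * v0**2 / (2 * k))
print(round(h, 3))           # 0.145, i.e. ~15 cm

# verify the energy balance used in the derivation
d = m * g / k                               # initial stretch with one block
E_i = 0.25 * m * v0**2 + 0.5 * k * d**2     # KE after collision + spring PE
E_f = 0.5 * k * h**2 + 2 * m * g * (h + d)  # spring PE + gravitational PE
assert abs(E_i - E_f) < 1e-9

# setting g = 0 recovers the 50 cm answer
print(math.sqrt(m * v0**2 / (2 * k)))       # 0.5
```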
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9559813463747182, "lm_q1q2_score": 0.8511046750447819, "lm_q2_score": 0.8902942283051332, "openwebmath_perplexity": 513.1419705121123, "openwebmath_score": 0.7946702241897583, "tags": null, "url": "https://physics.stackexchange.com/questions/680196/college-board-problem-conservation-of-momentum-conservation-of-energy-in-a-s" }
• h=15 cm? I got 0.1447. Is this the same thing? Also, how does your equation differ from the one my teacher and I made, which outputs 24 centimeters? Dec 2, 2021 at 17:23 • @SebastianPojman-Malo When I plugged the numbers into Mathematica, it spit out 14.5 something, and I rounded to two sig figs. In my answer, I'm taking into account the fact that the spring does not start at its equilibrium length. – d_b Dec 2, 2021 at 17:38 • That's interesting. College Board (and I) assume that it absolutely does hit the original block when the spring is at its equilibrium. Dec 2, 2021 at 17:45 • No, that is not correct. The hanging block is in equilibrium, which means the spring must be exerting an upward force on the block to keep it from falling under the influence of gravity. So the spring cannot be at its equilibrium length. – d_b Dec 2, 2021 at 18:54 • I agree with your h = .145 m. Note that that (h) is measured up from the un-stretched position of the spring and not from the point where the collision occurs. Dec 6, 2021 at 17:26 There is already a very good and complete answer from d_b, let me just add a comment to elucidate the possible (erroneous in this case) reasoning by which one could try to forget about gravity. First, consider one block hanging from the spring in equilibrium. Assume that we provide it with the vertical speed $$v$$ without any other block sticking to it, so it moves up alone. What height above the initial position will it reach in such a case? Writing energy conservation (for detailed explanation look at d_b's answer, it is quite the same here - only I choose to measure height relative to the initial position of the block for reasons which will become clear shortly), we get: $$\frac{m v^2}{2} + \frac{m^2 g^2}{2 k} = m g h + \frac{k (h-\frac{mg}{k})^2}{2}$$ Let us play a little with this equation. Open the square in the RHS and observe that $$mgh$$ cancels:
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9559813463747182, "lm_q1q2_score": 0.8511046750447819, "lm_q2_score": 0.8902942283051332, "openwebmath_perplexity": 513.1419705121123, "openwebmath_score": 0.7946702241897583, "tags": null, "url": "https://physics.stackexchange.com/questions/680196/college-board-problem-conservation-of-momentum-conservation-of-energy-in-a-s" }
$$m g h + \frac{k (h-\frac{mg}{k})^2}{2} = mgh + \frac{k(h^2 + \frac{m^2g^2}{k^2} - \frac{2mgh}{k})}{2} = \frac{kh^2}{2} + \frac{m^2 g^2}{2k}$$ Plugging it back into the energy conservation, we observe that $\frac{m^2 g^2}{2k}$ cancels as well, and we are left with $$\frac{m v^2}{2} = \frac{kh^2}{2}$$ In other words, it looks exactly like we could forget about gravity and the initial elongation of the spring altogether: the answer is the same as it would have been for a horizontal spring. This is not a coincidence: what we have derived means that the total (gravitational+elastic) potential energy of this system is quadratic in the deviation from equilibrium, so it really behaves like a horizontal spring. One could also understand it graphically: the potential energy of the spring is a parabola with the equilibrium position at the lowest point. If we also add gravity, a linear function is added to this parabola. The result is another parabola with the same shape but another minimum position - at the equilibrium point of the hanging block (equilibrium is always the minimum of total potential energy). If we measure all elongations compared to the new equilibrium, we could forget about both gravity and the initial elongation: they compensate each other. I think this might have been the logic of the authors of the solution. However, this logic only works if we measure elongations compared to the position with minimal potential energy. If another block sticks to the first one, the equilibrium position is now shifted below, so the blocks start moving from a non-equilibrium position, and one needs to account for the potential energy of the initial position as well - which was not done in the provided College Board solution. As an exercise, I would suggest that an interested reader reproduce d_b's answer with this total-potential method, taking the above consideration into account.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9559813463747182, "lm_q1q2_score": 0.8511046750447819, "lm_q2_score": 0.8902942283051332, "openwebmath_perplexity": 513.1419705121123, "openwebmath_score": 0.7946702241897583, "tags": null, "url": "https://physics.stackexchange.com/questions/680196/college-board-problem-conservation-of-momentum-conservation-of-energy-in-a-s" }
• "let me just add a comment to elucidate the possible (erroneous in this case) reasoning by which one could try to forget about gravity." Just for clarity, are you saying that College Board's ignoring gravity is erroneous? Or are you agreeing with College Board? Dec 2, 2021 at 17:31 • I disagree with the College Board answer and solution. In my answer I describe a method which they presumably used when solving the problem, but I point out that they forgot to account for the fact that equilibrium position of two blocks on a spring differs from the equilibrium position of one block. Dec 2, 2021 at 17:56 • Walk me through this. (Remember, I'm still in high school...) Does that mean that once the elastic collision happens and the identical block "sticks," the equilibrium position changes in that moment? Dec 2, 2021 at 18:00 • Firstly, if they stick, it is called an inelastic collision. Changing equilibrium position means that if one block of mass $m$ stretches the spring in equilibrium by $d$, two such blocks would stretch it by $2d$. So if in this problem we attached the second block to the first one without any velocity, they would move down to reach the new equilibrium position. To prevent confusion, let me stress that the word equilibrium here is used to describe a situation when all forces are balanced and nothing moves (not to be confused with the equilibrium length of the spring - when there are no forces). Dec 2, 2021 at 19:06
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9559813463747182, "lm_q1q2_score": 0.8511046750447819, "lm_q2_score": 0.8902942283051332, "openwebmath_perplexity": 513.1419705121123, "openwebmath_score": 0.7946702241897583, "tags": null, "url": "https://physics.stackexchange.com/questions/680196/college-board-problem-conservation-of-momentum-conservation-of-energy-in-a-s" }
• How does conservation of momentum play into this? As I see it, (5kg)(5m/s)=(5+5kg)(v); v=2.5m/s. So according to the conservation of momentum, the two blocks stuck together will move at 2.5m/s immediately following the collision. That being said, they will decelerate because the force of gravity (2mg) will overpower the tension of the spring (mg) at that moment. Because of this, the equilibrium is disturbed, and the amount of compression is reduced. CB's answer assumes equilibrium is NOT disturbed and thus the blocks do not decelerate, right? Dec 2, 2021 at 20:27
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9559813463747182, "lm_q1q2_score": 0.8511046750447819, "lm_q2_score": 0.8902942283051332, "openwebmath_perplexity": 513.1419705121123, "openwebmath_score": 0.7946702241897583, "tags": null, "url": "https://physics.stackexchange.com/questions/680196/college-board-problem-conservation-of-momentum-conservation-of-energy-in-a-s" }
With a 5 kg mass hanging at rest from the spring of constant k = 250 N/m, the stretch of the spring will be X = mg/k = 5(9.8)/250 = 0.196 m. With $x$ measured positive down from the un-stretched position, and gravitational potential energy chosen to be zero when x = 0, the energy equation for the 10 kg mass is $(1/2)(10)(2.5^2) + (1/2)(250)(0.196^2) - 10(9.8)(0.196) = (1/2)(250)x^2 - 10(9.8)x$, with $x$ taken when $v = 0$. Or: $125x^2 - 98x - (31.25 + 4.802 - 19.208) = 0$. Solving gives x = -0.145 m. (The other solution is for initial velocity downward.) Since positive x was measured down, this negative x represents a compression of the spring above the unstretched position. Then the total rise is 0.196 + 0.145 = 0.341 m. For the record, if the moving mass is the same as the hanging mass: then mg = kX and $(1/2)mv^2 + (1/2)kX^2 - (kX)X = (1/2)kx^2 - (kX)x$. Or rearranged: $(1/2)mv^2 = (1/2)k(X^2 - 2Xx + x^2) = (1/2)k(X-x)^2$. Let's try ignoring gravity, with x measured from a 10 kg hanging position (which is an additional 0.196 m further down): $(1/2)(10)(2.5^2) + (1/2)(250)(0.196^2) = (1/2)(250)x^2$. Rearranging: $125x^2 = 31.25 + 4.802$, giving x = 0.537. With the starting position 0.196 m above the new equilibrium, this gives a rise of 0.341 m. The bottom line: a mass hanging from a spring defines a new equilibrium position. The kx measured from that position includes the force of gravity (but don't change the mass). Here is an alternative approach: with the stretch of the spring $x$ measured positive down from the unstretched position, the net force on the hanging mass is $F = mg - kx$. Then $dF = -k\,dx$. Integrating both sides gives $F(x) - F_o = -k(x - x_o)$. If starting from the new equilibrium then $F_o = 0$. • Please consider using MathJax; in its current form this answer is mostly illegible. – rob Jan 23, 2022 at 13:55
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9559813463747182, "lm_q1q2_score": 0.8511046750447819, "lm_q2_score": 0.8902942283051332, "openwebmath_perplexity": 513.1419705121123, "openwebmath_score": 0.7946702241897583, "tags": null, "url": "https://physics.stackexchange.com/questions/680196/college-board-problem-conservation-of-momentum-conservation-of-energy-in-a-s" }
# When to simplify a quadratic equation? I had the following quadratic equation: $$38x^2 - 140x - 250 = 0$$ And before starting to solve it, I simplified it by dividing all terms by $2$: $$19x^2 -70x - 125 = 0$$ But when I solved it I got: $$x= \frac{35\pm5\sqrt{163}}{19}$$ which is not the exact solution for the original equation. So, I returned to that first equation and solved it. The roots are: $x=-\frac{25}{19}, 5$ Now my question is: How would I know when to simplify a quadratic equation before solving it? Is there something that shows me that I'm right to simplify or not? Thank you for your help. • You must have made an error, because both those quadratics have the same roots; multiplying or dividing through by the same number doesn't change the roots – Triatticus Jun 15 '16 at 13:46 • @Triatticus: I checked with Wolfram Alpha – Rafiq Jun 15 '16 at 13:47 • It should be $x=\frac{35\pm 60}{19}$, not $x=\frac{35\pm 5\sqrt{163}}{19}$. – user236182 Jun 15 '16 at 13:47 • Your square root is wrong... $\sqrt{70^2+4\cdot 19\cdot 125}$; calculate again – Kushal Bhuyan Jun 15 '16 at 13:47 • I did too and got the same answers – Triatticus Jun 15 '16 at 13:48 You can always simplify a quadratic equation (by dividing out common numeric factors) before solving it. Simplifying like this is optional and is always a good idea because it generally gives you easier numbers to work with. The error is not because you simplified, it's because there is a mistake somewhere in your work after that. I can't tell you where exactly without seeing your work, but here's how it should go: \begin{align*} x &= \frac{70 \pm \sqrt{(-70)^2 - 4(19)(-125)}}{2(19)}\\[0.3cm] &= \frac{70 \pm \sqrt{4900 + 9500}}{38}\\[0.3cm] &= \frac{70 \pm \sqrt{14400}}{38} \\[0.3cm] &= \frac{70 \pm 120}{38} \end{align*} When simplified you get $x = 5$ and $x = -25/19$.
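The claim that scaling an equation leaves its roots unchanged is easy to confirm numerically. This Python sketch (my addition) solves both forms with the quadratic formula:

```python
import math

def roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, sorted (assumes b^2 - 4ac >= 0)."""
    disc = math.sqrt(b * b - 4 * a * c)
    return sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

r_orig = roots(38, -140, -250)   # original equation
r_red = roots(19, -70, -125)     # after dividing through by 2

print(r_orig)                    # [-25/19, 5] = [-1.3157..., 5.0]
assert all(abs(p - q) < 1e-12 for p, q in zip(r_orig, r_red))
```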
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9559813463747181, "lm_q1q2_score": 0.8511046632184105, "lm_q2_score": 0.8902942159342104, "openwebmath_perplexity": 235.06836431886057, "openwebmath_score": 0.9799789786338806, "tags": null, "url": "https://math.stackexchange.com/questions/1827223/when-to-simplify-a-quadratic-equation" }
When to simplify a quadratic is up to you. Remember, multiplying both sides by a number retains the equality; the same goes for adding, subtracting and dividing. You could've left it the way it is, without simplifying, and you would have gotten the same solution, since scaling doesn't affect the roots. For the reduced equation: $$x=\frac{35\pm \sqrt{35^{2}+125\cdot 19}}{19} =\frac{35\pm \sqrt{3600}}{19} =\frac{35\pm 60}{19}$$ And using $x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$ on the original equation gives the same roots.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9559813463747181, "lm_q1q2_score": 0.8511046632184105, "lm_q2_score": 0.8902942159342104, "openwebmath_perplexity": 235.06836431886057, "openwebmath_score": 0.9799789786338806, "tags": null, "url": "https://math.stackexchange.com/questions/1827223/when-to-simplify-a-quadratic-equation" }
# Prove the sequence $f_{n} = \frac{1}{n^2+1}$ is a Cauchy sequence. Prove the sequence $$f_{n} = \frac{1}{n^2+1}$$ is a Cauchy sequence. I'm just making sure my logic and reasoning are sound for the above proof: Definition of Cauchy sequence: $$f_n$$ is Cauchy if for all $$Ɛ>0$$ there is an $$M \in \mathbb{N}$$ such that for all $$n,m > M, |f_n-f_m|<Ɛ$$ $$|f_n-f_m|$$=$$|\frac{1}{n^2+1}-\frac{1}{m^2+1}|$$ by the triangle inequality: $$\le|\frac{1}{n^2+1}|+|\frac{1}{m^2+1}|$$ $$<\frac{1}{n}+\frac{1}{m}$$ To ensure $$\frac{1}{n}+\frac{1}{m}<Ɛ$$ it suffices to have: $$\max\left(\frac{1}{n},\frac{1}{m}\right) < \frac{Ɛ}{2}$$, so we will take $$M(Ɛ) = \lceil\frac{2}{Ɛ}\rceil$$ Now for a formal proof: Let $$Ɛ>0$$ Define $$M(Ɛ) = \lceil\frac{2}{Ɛ}\rceil$$ Let $$n,m > M(Ɛ)$$ $$n,m > \frac{2}{Ɛ}$$ $$\frac{1}{n}<\frac{Ɛ}{2}$$ and $$\frac{1}{m}<\frac{Ɛ}{2}$$ $$|f_n - f_m| \le \frac{1}{n}+\frac{1}{m}$$ $$|f_n - f_m| < \frac{Ɛ}{2}+\frac{Ɛ}{2} = Ɛ$$ Therefore $$f_n$$ is Cauchy. Solution to the proof • Yes, your proof is fine. – Mark May 4 at 8:53 • Okay great, I was just checking, as if you see in my question above, the solution was different. I thought my answer was too simple. – user11015000 May 4 at 8:55 • Please do not use pictures for critical portions of your post. Pictures may not be legible, cannot be searched and are not viewable to some, such as those who use screen readers. – GNUSupporter 8964民主女神 地下教會 May 4 at 15:31 • The proof before the red edit is fine. – DanielWainfleet May 4 at 18:52
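The choice $M(Ɛ) = \lceil 2/Ɛ \rceil$ can be sanity-checked numerically. The following Python sketch is my addition; spot-checking a finite window of indices is of course evidence, not a proof:

```python
import math

f = lambda n: 1 / (n**2 + 1)

for eps in (0.1, 0.01, 0.001):
    M = math.ceil(2 / eps)
    # spot-check the Cauchy condition for a window of n, m > M
    assert all(abs(f(n) - f(m)) < eps
               for n in range(M + 1, M + 40)
               for m in range(M + 1, M + 40))
print("bound holds for the sampled indices")
```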
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9857180643966651, "lm_q1q2_score": 0.8510607083321147, "lm_q2_score": 0.863391611731321, "openwebmath_perplexity": 742.2232511189287, "openwebmath_score": 0.766474723815918, "tags": null, "url": "https://math.stackexchange.com/questions/3213122/prove-the-sequence-f-n-frac1n21-is-a-cauchy-sequence" }
# Approximating continuous functions with polynomials Given $\epsilon \gt 0$ and $f \in C^{0}[0,1]$, Weierstrass says that I can find at least one (how many? probably a lot?) polynomial $P$ which approximates f uniformly: $$\sup_{x \in [0,1]} |f(x) - P(x)| \lt \epsilon$$ This means that, under the sup norm $||.||_{\infty}$, the polynomials are dense in $C^{0}[0,1]$. So, in analogy to approximating irrationals with the rationals, I would like to know: • What can we say about the order of $P$? Or, turning this around, given that $P$ is of order $n$, how small can $\epsilon$ get? I'm betting this should depend in some way on the properties of $f$: the intuition is that smoother functions should be somehow "better" approximated by lower-order polynomials, and less-well-behaved functions should require higher-order polynomials. But I am not sure how to formalize this. This is probably all well-understood, but I'm not well-read on approximation theory. Any guidance would be wonderful. - All of the answers have been very helpful, thank you to everyone! –  AndrewG Jan 5 '13 at 19:58 According to Theorem 1.2 of this paper by Sukkrasanti and Lerdkasem, we have the following result (with impressively great generality, I might add): Let $f: [0,1]^p \rightarrow \mathbb{R}^q$ be bounded. Then there exists $C$ such that $$\| f - B_n(f) \|_{\infty} \le C \omega(1/\sqrt{n})$$ where $\omega(\delta) := \sup_{\|t_1 - t_2\| \le \delta} \|f(t_1) - f(t_2)\|$ and $B_n(f)$ is the $n$th Bernstein polynomial of $f$. I do not claim to have read the paper myself. Notice that if $f$ is indeed continuous, then by uniform continuity, as $n \rightarrow \infty$, $\omega(1/\sqrt{n})$ must go to zero. So the convergence rate is related to how rapidly $f$ can vary, as intuition would already suggest. - Define $B_i^n(x)=\binom{n}{i}x^i(1-x)^{n-i}$. A nice fact: • Partition of unity: $\sum_{i=0}^{n}B_i^n(x)=1$. This is easy to see:
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9857180664944448, "lm_q1q2_score": 0.8510606962838053, "lm_q2_score": 0.8633915976709976, "openwebmath_perplexity": 395.76866527482014, "openwebmath_score": 0.94944828748703, "tags": null, "url": "http://math.stackexchange.com/questions/271013/approximating-continuous-functions-with-polynomials" }
Since $1= x+(1-x)$, we have $$1=(x+(1-x))^n=\sum_{i=0}^{n}\binom{n}{i}x^i(1-x)^{n-i}=\sum_{i=0}^{n}B_i^n(x).$$ Without loss of generality suppose $a=0$ and $b=1$. We need the following estimate: \begin{eqnarray*} \sum_{i=0}^{n}\left(x-\frac in\right)^{2}B_i^n(x)&=&x^2\sum_{i=0}^{n}B_i^n(x)-2x\sum_{i=0}^{n}\dfrac{i}{n}B_i^n(x)+\sum_{i=0}^{n}\left(\dfrac{i}{n}\right)^2 B_i^n(x)\\&=& x^2-2x^2+\dfrac{1}{n^2}\sum_{i=0}^{n}n(n-1)\dfrac{i^2-i +i}{n(n-1)} B_i^n(x)\\&=& -x^2+ \dfrac{1}{n}\sum_{i=0}^{n}\dfrac{i}{n}B_i^n(x)+\dfrac{n(n-1)}{n^2}\sum_{i=0}^{n}\dfrac{i(i-1)}{n(n-1)}B_i^n(x)\\&=& -x^2+\dfrac{x}{n}+\dfrac{n-1}{n}x^2\\&=& \dfrac{x-x^2}{n}\leqslant\dfrac{1}{4n}. \end{eqnarray*} Now, since $[0,1]$ is compact and $f$ is continuous, $f$ is uniformly continuous. So for each $\epsilon>0$ there exists $\delta>0$ such that for all $x,y\in [0,1]$ with $|x-y|<\delta$ we have $|f(x)-f(y)|<\epsilon$. Since $f$ attains its extreme values on $[0,1]$, there exists a constant $M$ such that $|f(x)|\leqslant M$ for all $x\in[0,1].$ Consider the Bernstein polynomial of $f$, $$P_n(x)=\sum_{i=0}^{n}f\left(\dfrac{i}{n}\right)B_i^n(x);$$ since $\displaystyle\sum_{i=0}^{n}B_i^n(x)=1$, we have $f(x)=\displaystyle\sum_{i=0}^{n}f(x)B_i^n(x)$, and then \begin{eqnarray*} \left|f(x)-P_n(x)\right|&\leqslant& \sum_{i=0}^{n}\left|f(x)-f\left(\dfrac{i}{n}\right)\right|B_i^n(x)\\&=& \sum_{S_1}\left|f(x)-f\left(\dfrac{i}{n}\right)\right|B_i^n(x)+\sum_{S_2}\left|f(x)-f\left(\dfrac{i}{n}\right)\right|B_i^n(x) \end{eqnarray*} where $S_1=\{0\leqslant i\leqslant n ~;~ |x-\frac{i}{n}|<\delta\}$ and $S_2=\{0\leqslant i\leqslant n~;~ |x-\frac{i}{n}|\geqslant \delta\}.$ Analyzing each sum separately, $$\sum_{S_1}\left|f(x)-f\left(\dfrac{i}{n}\right)\right|B_i^n(x)\leqslant \sum_{S_1}\epsilon B_i^n(x)\leqslant \epsilon\sum_{i=0}^{n}B_i^n(x)=\epsilon$$
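The two facts doing the work in this estimate, the partition of unity and the second-moment bound $\sum_i (x - i/n)^2 B_i^n(x) = \frac{x(1-x)}{n} \leqslant \frac{1}{4n}$, are easy to spot-check numerically (a Python sketch of my own):

```python
import math

def B(i, n, x):
    return math.comb(n, i) * x**i * (1 - x) ** (n - i)

n = 25
for x in (0.0, 0.17, 0.5, 0.83, 1.0):
    total = sum(B(i, n, x) for i in range(n + 1))
    second_moment = sum((x - i / n) ** 2 * B(i, n, x) for i in range(n + 1))
    assert abs(total - 1) < 1e-12                        # partition of unity
    assert abs(second_moment - x * (1 - x) / n) < 1e-12  # equals x(1-x)/n
    assert second_moment <= 1 / (4 * n) + 1e-12          # <= 1/(4n)
print("identities verified")
```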
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9857180664944448, "lm_q1q2_score": 0.8510606962838053, "lm_q2_score": 0.8633915976709976, "openwebmath_perplexity": 395.76866527482014, "openwebmath_score": 0.94944828748703, "tags": null, "url": "http://math.stackexchange.com/questions/271013/approximating-continuous-functions-with-polynomials" }
\begin{eqnarray*} \sum_{S_2}\left|f(x)-f\left(\dfrac{i}{n}\right)\right|B_i^n(x)&\leqslant& 2M\sum_{S_2}B_i^n(x)\leqslant \frac{2M}{\delta^2}\sum_{S_2}\left(x-\frac{i}{n}\right)^2 B_i^n(x)\\&\leqslant&\dfrac{2M}{\delta^2}\sum_{i=0}^{n}\left(x-\frac{i}{n}\right)^2 B_i^n(x)\leqslant \dfrac{M}{2\delta^2 n} \end{eqnarray*} Since $\dfrac{M}{2\delta^2 n}<\epsilon$ for $n$ large, this proves the desired result. - Nice answer. By the way, what does "daí" mean? –  Antonio Vargas Jan 5 '13 at 16:50 @AntonioVargas: I'm sorry, it's a habit of writing in Portuguese; daí = so, or then –  user27456 Jan 5 '13 at 16:58 @EduardoSiva: Everything looks fine, except that you have not defined $B_{i}$. I suppose that it denotes a Bernstein polynomial, am I right? –  Haskell Curry Jan 5 '13 at 17:34 @HaskellCurry: I'll try to fix it –  user27456 Jan 5 '13 at 17:43 A near minimax approximation to a continuous function, say, $f:[-1,1]\to\mathbb{R}$, can be obtained with Chebyshev polynomials. You may look up the details in this book chapter, of which theorem 3.10 says that if $p$ is a Chebyshev interpolation polynomial for such continuous $f:[-1,1]\to\mathbb{R}$ at $n+1$ points, then $\|f-p\|\sim C\,\omega(\frac1n)\log n$ as $n\to\infty$, where the norm is the maximum norm, $\omega(\cdot)$ denotes the modulus of continuity and $C$ is a constant. - Since you tagged your question "reference-request", I will give only references (instead of copying a lot): the Jackson inequality ("direct theorem") and the Bernstein theorem ("converse theorem"). See also the references therein. (In my humble opinion they are good-quality books. Typing these keywords into Google will turn up many links, including many freely downloadable lecture notes and perhaps books.) -
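Putting the thread together, one can watch a Bernstein polynomial converge to a continuous but non-smooth function and see an $\omega(1/\sqrt{n})$-type rate in action. This Python sketch is my own illustration; the test function and sample grid are arbitrary choices:

```python
import math

def bernstein(f, n, x):
    """Evaluate the n-th Bernstein polynomial of f at x in [0, 1]."""
    return sum(f(i / n) * math.comb(n, i) * x**i * (1 - x) ** (n - i)
               for i in range(n + 1))

f = lambda x: abs(x - 0.5)          # continuous, not differentiable at 1/2
xs = [j / 200 for j in range(201)]  # sample grid on [0, 1]
for n in (10, 40, 160):
    err = max(abs(f(x) - bernstein(f, n, x)) for x in xs)
    print(n, err)   # error shrinks roughly like 1/sqrt(n) for this f
```

Note that Bernstein polynomials reproduce affine functions exactly, so the slow $1/\sqrt{n}$ rate here really comes from the kink at $x = 1/2$.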
# Calculating work 1. Jul 10, 2009 ### jeff1evesque 1. The problem statement, all variables and given/known data Given the attached picture, calculate the work required to go around the contour shown if the force is $$\vec{F} = y\hat{x} - x^{2} \hat{y}$$ First by Contour integration: Work = $$\oint \vec{F} \bullet \vec{dl} = \int_{a}^{b} (\vec{F} \bullet \hat{x})dx + \int_{b}^{c} (\vec{F} \bullet \hat{y})dy - \int_{c}^{d} (\vec{F} \bullet \hat{x})dx - \int_{d}^{a} (\vec{F} \bullet \hat{y})dy$$ $$= \int_{1}^{2} (2) dx + \int_{2}^{3} (4)dy - \int_{2}^{1} (3)dx - \int_{3}^{2} (1)dy = 0$$ I think the answer above is supposed to be -4, not 0. Now by Stokes Theorem: When this was done with "Stokes theorem", we get the following: Work = $$\oint \vec{F} \bullet \vec{dl} \equiv \int \int (\nabla \times \vec{F})\bullet \vec{ds} = - \int \int (2x + 1)( \hat{z} \bullet \hat{z})ds = -\int_{2}^{3} \int_{1}^{2} (2x + 1)dxdy = -4$$ Note: $$( \hat{z} \bullet \hat{z}) = 1$$ since $$( \hat{z} \bullet \vec{ds}) = ds$$ Question: When I look at this, the Stokes method appears to be correct, but the method before it seems wrong. Can someone help me find my error? Thanks, JL #### Attached Files: StokePic.bmp Last edited: Jul 10, 2009 2. Jul 11, 2009 ### E_M_C Your attachment is still pending approval, but if we're speaking of the work done around a closed path, it's certainly not zero. This is due to the fact that the force field is not conservative. While the force is position dependent (as is required by a conservative force), its curl, $$\nabla \times \vec{F}$$, is not the zero vector. 3. Jul 11, 2009 ### HallsofIvy Staff Emeritus The contour is the square with corners at (1, 2), (2, 2), (2, 3), and (1, 3).
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9899864287859482, "lm_q1q2_score": 0.8510554516631595, "lm_q2_score": 0.8596637559030338, "openwebmath_perplexity": 859.161829378275, "openwebmath_score": 0.9222193360328674, "tags": null, "url": "https://www.physicsforums.com/threads/calculating-work.324424/" }
Why do you have minus signs on the last two integrals? Assuming that your contour goes from point a to point b, then to point c, then to point d, then back to point a, these should all be "+". (Although the integrals themselves might be 0.) On the first leg, with x= t, y= 2, the integral is $\int_1^2 2 dx$ as you say. On the second leg, with x= 2, y= t, the integral is $\int_2^3 (-4) dt$. Did you forget the "-" on "$-x^2\vec{j}$"? On the third leg, with x= t, y= 3, the integral is $\int_2^1 3dt= -\int_1^2 3dt$. You have the wrong sign. On the fourth leg, with x= 1, y= t, the integral is $\int_3^2(-1) dt= \int_2^3 dt$. That is what you have, but, I think, because of two canceling sign errors! Evaluating, the integral is 2 - 4 + (-3) + 1 = -4. 4. Jul 11, 2009 ### jeff1evesque Thanks Halls.
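The corrected leg-by-leg result is easy to sanity-check numerically; the sketch below is my own addition (not from the thread) and uses a simple midpoint rule to integrate $\vec F\cdot d\vec l$ along each straight leg of the square:

```python
def work_leg(F, p0, p1, n=2000):
    """Midpoint-rule estimate of the work of F along the segment p0 -> p1."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / n, (y1 - y0) / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        Fx, Fy = F(x, y)
        total += Fx * dx + Fy * dy
    return total

F = lambda x, y: (y, -x**2)
corners = [(1, 2), (2, 2), (2, 3), (1, 3), (1, 2)]  # the square, traversed a->b->c->d->a
W = sum(work_leg(F, corners[i], corners[i + 1]) for i in range(4))
print(W)   # about -4.0, matching both the contour sum and Stokes' theorem
```

On each leg the relevant component of $\vec F$ happens to be constant, so the midpoint rule reproduces $2 - 4 - 3 + 1 = -4$ essentially exactly.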
# Picking random numbers as long as they keep decreasing. Expected number of numbers you pick? Pick a random number (uniformly distributed) between $0$ and $1$. Continue picking random numbers as long as they keep decreasing; stop picking when you obtain a number that is greater than the previous one you picked. What is the expected number of numbers you pick? - One of my friends sent me this question, & he sent me the answer.. – UnknownController Nov 24 '12 at 6:25 Please, try to make the title of your question more informative. E.g., Why does $a<b$ imply $a+c<b+c$? is much more useful for other users than A question about inequality. From How can I ask a good question?: Make your title as descriptive as possible. In many cases one can actually phrase the title as the question, at least in such a way so as to be comprehensible to an expert reader. – Julian Kuelshammer Nov 24 '12 at 6:41 @JulianKuelshammer thank you... next time I'll give a more informative title.. – UnknownController Nov 24 '12 at 6:45 There are $n!$ equally likely ways of arranging $n$ numbers. Stopping after exactly $n$ picks means the first $(n-1)$ picks are in descending order and the $n$-th pick breaks the descent. Of the $n!$ arrangements, exactly $n-1$ have this form: the smallest value must sit in position $n-1$, and there are $n-1$ choices for which of the remaining values comes last (the rest are then forced into descending order). Thus the probability of picking exactly $n$ numbers is $\cfrac {n-1}{n!}$ and the expected value is $$E=\sum^{\infty}_{n=2} n\cdot\cfrac {n-1}{n!}= \sum^{\infty}_{n=2} \cfrac {1}{(n-2)!}=\sum^{\infty}_{n=0} \cfrac {1}{n!}=e=2.718281828459...$$ - thank you my friend for trying to help, I just posted the answer in a comment, see that :) – UnknownController Nov 24 '12 at 6:27 The following solution uses indicator random variables. Fix an $n$, and imagine doing the experiment only $n$ times. For $j\le n$, let $X_j=1$ if the first $j$ numbers picked are strictly decreasing, and let $X_j=0$ otherwise. Then $\Pr(X_j=1)=\frac{1}{j!}$ and therefore $E(X_j)=\frac{1}{j!}$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9899864271610316, "lm_q1q2_score": 0.8510554467068692, "lm_q2_score": 0.8596637523076225, "openwebmath_perplexity": 332.0418966266846, "openwebmath_score": 0.8999835848808289, "tags": null, "url": "http://math.stackexchange.com/questions/243559/picking-random-numbers-as-long-as-they-keep-decreasing-expected-number-of-numbe/243567" }
If $Y_n$ is the length of the longest monotone decreasing run that starts with the first number chosen (among the first $n$ picks), then $Y_n=X_1 +X_2+\cdots +X_n$. Thus, by the linearity of expectation, $$E(Y_n)=E(X_1)+E(X_2)+E(X_3)+\cdots+E(X_n)= 1+\frac{1}{2!}+\frac{1}{3!}+\cdots +\frac{1}{n!}.$$ As $n\to \infty$, this approaches $e-1$. But your random variable counts all the picks up to and including the first pick that breaks the monotonicity. So your random variable has expectation $(e-1)+1=e$. -
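A quick Monte Carlo simulation (my own addition, not part of either answer; the seed and trial count are arbitrary) agrees with the value $e$:

```python
import random

def run_length():
    """Pick uniforms until one exceeds its predecessor; return the number of picks."""
    count, prev = 1, random.random()
    while True:
        x = random.random()
        count += 1
        if x > prev:
            return count
        prev = x

random.seed(0)
trials = 200_000
avg = sum(run_length() for _ in range(trials)) / trials
print(avg)   # close to e, i.e. about 2.718
```

With 200,000 trials the standard error is only a few thousandths, so the estimate reliably lands near 2.718.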
# Evaluating $\int\frac{x^2+1}{(x^2-2x+2)^2}dx$ - Section 7.3, #28 in Stewart Calculus 8th Ed. I'd like to evaluate $$I=\int\frac{x^2+1}{(x^2-2x+2)^2}dx.$$ According to WolframAlpha, the answer is $$\frac{1}{2}\bigg(\frac{x-3}{x^2-2x+2}-3\tan^{-1}(1-x)\bigg)+C.$$ To ensure that my answer is correct, I'd like to match it with this answer. First I complete the square by writing the denominator as $((x-1)^2+1)^2$, then use a trig-sub $x-1=\tan\theta$ so $dx=\sec^2\theta d\theta.$ Then $$I=\int\frac{(\tan\theta +1)^2+1}{\sec^4\theta}\sec^2\theta d\theta\\=\int\frac{\tan^2\theta+2\tan\theta+2}{\sec^2\theta}d\theta\\=\int\sin^2\theta +2\sin\theta\cos\theta+2\cos^2\theta d\theta\\=\int\bigg(\frac{1}{2}-\frac{\cos(2\theta)}{2}\bigg)+\sin(2\theta)+(1+\cos(2\theta))d\theta\\=\int\frac{3}{2}+\frac{\cos(2\theta)}{2}+\sin(2\theta)d\theta\\=\frac{3\theta}{2}+\frac{\sin(2\theta)}{4}-\frac{\cos(2\theta)}{2}+C\\=\frac{3\theta}{2}+\frac{\sin\theta \cos\theta}{2}-\frac{1-2\sin^2\theta}{2}+C.$$ Here's a picture corresponding to my trig-sub: a right triangle with angle $\theta$, opposite side $x-1$, adjacent side $1$, and hypotenuse $\sqrt{x^2-2x+2}$. From the picture, I conclude $$I=\frac{3\tan^{-1}(x-1)}{2}+\frac{1}{2}\frac{x-1}{\sqrt{x^2-2x+2}}\frac{1}{\sqrt{x^2-2x+2}}- \frac{1-2\frac{(x-1)^2}{x^2-2x+2}}{2}+C\\=\frac{-3\tan^{-1}(1-x)}{2}+\frac{x-1}{2(x^2-2x+2)}-\frac{1}{2}+\frac{(x-1)^2}{x^2-2x+2}+C.$$ Although the $\arctan$ portion of the answer matches Wolfram, the remaining portion does not (since there is a square in the numerator after finding a common denominator). Where am I making the mistake? I've also tried using the identity $\cos(2\theta)= 2\cos^2\theta-1$ instead of $\cos(2\theta)= 1-2\sin^2\theta$, but the same issue occurs.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9899864278996301, "lm_q1q2_score": 0.8510554420027029, "lm_q2_score": 0.8596637469145054, "openwebmath_perplexity": 142.09263149288228, "openwebmath_score": 0.9633340239524841, "tags": null, "url": "https://math.stackexchange.com/questions/2585274/evaluating-int-fracx21x2-2x22dx-section-7-3-28-in-stewart-calc" }
Your work is correct; the only difference between your antiderivative and the one calculated by WolframAlpha is a constant of integration, $1/2$. You may verify that $$\frac{(x-1)^2}{x^2-2 x+2}+\frac{x-1}{2 \left(x^2-2 x+2\right)}-\frac{1}{2} = \frac{x^2-x-1}{2 \left(x^2-2 x+2\right)},$$ and it follows that $$\frac{x^2-x-1}{2 \left(x^2-2 x+2\right)}-\frac{x-3}{2 \left(x^2-2 x+2\right)} = \frac{1}{2}.$$
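This is also easy to confirm numerically (a quick check of my own, not part of the original answer): the two antiderivatives differ by exactly $1/2$ at every point.

```python
from math import atan

def F_mine(x):
    """The OP's antiderivative."""
    d = x * x - 2 * x + 2
    return -1.5 * atan(1 - x) + (x - 1) / (2 * d) - 0.5 + (x - 1) ** 2 / d

def F_wolfram(x):
    """The WolframAlpha antiderivative."""
    d = x * x - 2 * x + 2
    return 0.5 * ((x - 3) / d - 3 * atan(1 - x))

for x in (-2.0, 0.3, 1.7, 5.0):
    print(x, F_mine(x) - F_wolfram(x))   # 0.5 (up to rounding) each time
```

Since the difference is constant, both functions have the same derivative, namely the original integrand.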
# Combinatorial explanation for the following recurrence relation $a_n = 2 a_{n-1} + a_{n-2}$ The question was the following: $a_n$ is the number of ternary strings (strings of 0,1,2) which contain no consecutive zeros and no consecutive ones. Find a formula for $a_n$. By brute force, I found a recurrence relation for $a_n$ as follows: $a_n = 2 a_{n-1} + a_{n-2}$ Now I am wondering if there is a good combinatorial explanation/proof of my recurrence relation. Can you see something? Best Regards - Terminological note: those are ternary strings, not turning strings. –  Brian M. Scott Apr 8 '13 at 20:00 @BrianM.Scott thank you for the correction –  Xentius Apr 8 '13 at 20:02 Let $b_{i,n}$ be the number of such strings of length $n$ starting with $i\in\{0,1,2\}$. Then by your rules $$a_n = b_{0,n} + b_{1,n} + b_{2,n} = (b_{1,n-1} + b_{2,n-1}) + (b_{0,n-1} + b_{2,n-1}) + a_{n-1} = a_{n-1} + b_{2,n-1} + a_{n-1} = 2a_{n-1} + a_{n-2},$$ using $b_{2,n-1} = a_{n-2}$ (a leading $2$ may be followed by any valid string). ADDITION: This transformation can be translated into a proof by bijection. Let $X_n$ be the set of all valid sequences of length $n$ (no consecutive zeros or ones). We define a map $$(X_{n-1} \times \{A,B\}) \cup X_{n-2} \quad\to\quad X_{n}$$ by ("$\cdot$" denotes concatenation; $\mu$ denotes a sequence in $X_{n-2}$ and $\lambda$ a sequence in $X_{n-1}$): $(\lambda,A) \mapsto 2\cdot\lambda$ $(\lambda,B)\mapsto\begin{cases} 1\cdot \lambda & \text{if }\lambda\text{ starts with }0\text{ or }2\\ 0\cdot \lambda & \text{if }\lambda\text{ starts with }1 \end{cases}$ $\mu \mapsto 0\cdot 2\cdot \mu$ This is a bijection, essentially because $(\lambda,A)$ gives all strings in $X_n$ starting with $2$, $(\lambda,B)$ gives all strings in $X_n$ starting with $10$, $01$ or $12$, and $\mu$ gives all strings starting with $02$. So $$\left|(X_{n-1} \times \{A,B\}) \cup X_{n-2}\right| = \left|X_n\right|,$$ which is $$2a_{n-1} + a_{n-2} = a_n.$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9899864302631448, "lm_q1q2_score": 0.8510554404751222, "lm_q2_score": 0.8596637433190939, "openwebmath_perplexity": 206.9099557598512, "openwebmath_score": 0.6358327269554138, "tags": null, "url": "http://math.stackexchange.com/questions/355151/combinatorial-explanation-to-following-recurrence-relation-a-n-2-a-n-1-a/355159" }
- I guess what @Xentius meant by a combinatorial proof is not this. He seems to be looking for an answer like in this question he asked: math.stackexchange.com/questions/340905/… –  Amadeus Bachmann Apr 8 '13 at 19:42 @AmadeusBachmann exactly! –  Xentius Apr 8 '13 at 19:45 @AmadeusBachmann: I think that azimut's equality has pure (and nice!) combinatorial sense. –  xen Apr 8 '13 at 19:55 I'm a bit baffled why my argumentation shouldn't be combinatorial. @Xentius could you please specify what kind of proof you are looking for? –  azimut Apr 8 '13 at 20:02 @xen I did not say azimut's proof is not combinatorial. I just wanted to say it may not be what Xentius looking for according to his previous questions. But of course only one to confirm this is Xentius. –  Amadeus Bachmann Apr 8 '13 at 20:06 Call a ternary string with no consecutive zeroes and no consecutive ones a good string. For $k=0,1,2$ let $a_n^k$ be the number of good strings of length $n$ ending in $k$. Then it’s clear that \begin{align*} &a_n^0=a_{n-1}^1+a_{n-1}^2\\ &a_n^1=a_{n-1}^0+a_{n-1}^2\\ &a_n^2=a_{n-1}^0+a_{n-1}^1+a_{n-1}^2=a_{n-1}\;. \end{align*}\tag{1} And now we can make the following computation: \begin{align*} a_n&=a_n^0+a_n^1+a_n^2\\ &=a_n^0+a_n^1+a_{n-1}\\ &=a_{n-1}^1+a_{n-1}^0+2a_{n-1}^2+a_{n-1}\\ &=(a_{n-1}^0+a_{n-1}^1+a_{n-1}^2)+a_{n-1}^2+a_{n-1}\\ &=a_{n-1}+a_{n-1}^2+a_{n-1}\\ &=2a_{n-1}+a_{n-1}^2\\ &=2a_{n-1}+a_{n-2} \end{align*} by the third line of $(1)$.
In more purely combinatorial terms the term $a_{n-1}$ in $a_n=a_n^0+a_n^1+a_{n-1}$ comes from the fact that each good string of length $n$ ending in $2$ is obtained by appending a $2$ to one of the $a_{n-1}$ good strings of length $n-1$. The rest is a little more complicated. If a good string of length $n-1$ ends in $0$ or $2$, I can append a $1$ to it to get a good string of length $n$ ending in $1$; that covers all good strings of length $n$ ending in $01$ or $21$, which is all good strings of length $n$ ending in $1$. Similarly, if a good string of length $n-1$ ends in $1$ or $2$, I can append a $0$ to get a good string of length $n$ ending in $0$, and that accounts for all good strings of length $n$ ending in $0$. Thus, to get the good strings of length $n$ ending in $0$ or $1$ I've used each good string of length $n-1$ ending in $0$ once, each good string of length $n-1$ ending in $1$ once, and each good string of length $n-1$ ending in $2$ twice. In other words, I've used each good string of length $n-1$ at least once, and I've used those ending in $2$ twice. Ignoring for a moment the double use of those ending in $2$, that accounts for another $a_{n-1}$ strings of length $n$. Finally, we already know that each good string of length $n-1$ ending in $2$ is just the extension by a $2$ of a good string of length $n-2$, so the double use of good strings of length $n-1$ ending in $2$ gives us another $a_{n-2}$ good strings of length $n$. Put the pieces together, and we see that $a_n=2a_{n-1}+a_{n-2}$. -
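Neither answer includes code, but the recurrence is easy to verify by brute force; the sketch below is my own addition:

```python
from itertools import product

def count_good(n):
    """Count ternary strings of length n with no '00' and no '11' substring."""
    total = 0
    for tup in product('012', repeat=n):
        s = ''.join(tup)
        if '00' not in s and '11' not in s:
            total += 1
    return total

a = [count_good(n) for n in range(1, 9)]
print(a)   # [3, 7, 17, 41, 99, 239, 577, 1393]
assert all(a[n] == 2 * a[n - 1] + a[n - 2] for n in range(2, len(a)))
```

The sequence $3, 7, 17, 41, \dots$ satisfies $a_n = 2a_{n-1} + a_{n-2}$ exactly as both arguments predict.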
# Formal way to evaluate this limit I've some doubt about the formal way to evaluate the following limit: $$\lim_{n\to\ +\infty}{\left(1-\frac{1}{n}\right)^{n^2}}$$ I know that $$\lim_{n\to\ +\infty}{\left(1-\frac{1}{n}\right)^{n}}=e^{-1}$$ my calculus teacher says that we can't evaluate the limit by pieces, so we can't (in general) say that $$\lim_{n\to\ +\infty}{\left(1-\frac{1}{n}\right)^{n^2}}=\lim_{n\to\ +\infty}{\left(e^{-1}\right)^{n}}=0$$ even if in this case it is the right answer. So I'm asking for a formal way to solve this; my approach is: using the fact that the limit of a product is the product of the limits, I can say that $$\lim_{n\to\ +\infty}{\left(1-\frac{1}{n}\right)^{n^2}}=\lim_{n\to\ +\infty}{e^{n^2\ln\left(1-\frac{1}{n}\right)}}=e^{\lim_{n\to\ +\infty}{n^2\ln\left(1-\frac{1}{n}\right)}}=e^{\lim_{n\to\ +\infty}{n^2}{\lim_{n\to\ +\infty}\ln\left(1-\frac{1}{n}\right)}}=e^{\lim_{n\to\ +\infty}{-n}}=0$$ Can I do this (I've used the continuity of the exponential function) or am I still doing the limit by pieces when I've substituted $\ln\left(1-\frac{1}{n}\right)$ with $-\frac{1}{n}$? Furthermore, can I use the product rule for limits even if there is the indeterminate form $0\cdot(+\infty$)? • Implicitly, you used an equivalent near $0$: we know that $\ln(1+u)\sim_0 u$. All usual functions have equivalents near $0$. This way saves many useless computations unrelated to the indetermination. Sep 21, 2017 at 10:58 • +1 for your question as well as your teacher. Unfortunately most book authors / instructors manage with a lot of hand-waving in teaching calculus. Sep 21, 2017 at 12:12
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9755769078156284, "lm_q1q2_score": 0.8510419656614477, "lm_q2_score": 0.8723473862936942, "openwebmath_perplexity": 247.78401755109877, "openwebmath_score": 0.9486010670661926, "tags": null, "url": "https://math.stackexchange.com/questions/2438737/formal-way-to-evaluate-this-limit" }
No, you can't. However, you may note that, as $n\to +\infty$, $$n^2\ln\left(1-\frac{1}{n}\right)=n\ln\left(\left(1-\frac{1}{n}\right)^n\right)\to +\infty \cdot \ln(e^{-1})=-\infty.$$ Hence $$\lim_{n\to\ +\infty}{\left(1-\frac{1}{n}\right)^{n^2}}=\lim_{n\to\ +\infty}{e^{n^2\ln\left(1-\frac{1}{n}\right)}}=e^{-\infty}=0.$$ Another way. Since $\lim_{n\to\ +\infty}{\left(1-\frac{1}{n}\right)^{n}}=e^{-1}\in (0,1/2)$ then, by definition of limit, there is $N$ such that for all $n\geq N$, $\left(1-\frac{1}{n}\right)^{n}<1/2$. Hence, for $n\geq N$, $$0<{\left(1-\frac{1}{n}\right)^{n^2}}=\left(\left(1-\frac{1}{n}\right)^{n}\right)^n<\frac{1}{2^n}.$$ Then use the Squeeze theorem. • @Mephlip Any further doubt? Sep 21, 2017 at 11:48 One way of doing it is by comparison. For any fixed $k\in \Bbb N$, we have $$0\leq \lim_{n\to \infty}\left(1-\frac 1n\right)^{n^2}\leq\lim_{n\to \infty}\left(1-\frac1n\right)^{nk} = e^{-k}$$ The trick to doing what you're trying to do is, after making your substitution, to also factor in a correction so that the formula is left unchanged: $$\lim_{n \to \infty} \left(1 - \frac{1}{n} \right)^{n^2} = \lim_{n \to \infty} \left( e^{-1} \cdot \frac{\left(1 - \frac{1}{n}\right)^{n}}{e^{-1}} \right)^n = \lim_{n \to \infty} e^{-n} \cdot \left( \frac{\left(1 - \frac{1}{n}\right)^{n}}{e^{-1}} \right)^n \\= \lim_{n \to \infty}\left( e^{-n} \right) \cdot \lim_{n \to \infty} \left( \frac{\left(1 - \frac{1}{n}\right)^{n}}{e^{-1}} \right)^n$$ assuming, of course, the limits exist and the product is defined. If you could show that second limit was $1$, then you'd compute the limit as $0 \cdot 1 = 0$ and be done. The problem, however, is that the second limit is a $1^{\infty}$ form! And it does this in a way that makes it so you can't immediately determine what the value should be: the value of the limit is a race between how fast the base approaches $1$ versus how fast the exponent reaches $\infty$.
So, there isn't any immediate shortcut here; you have to do more work. In fact, that limit equals $e^{-1/2}$, so the 'obvious' guesses are, in fact, wrong! You need a more sophisticated approximation to make something like this work. The thing to do can more easily be demonstrated in your alternate approach: the Taylor series for the logarithm implies $$\ln(1 + x) = x + O(x^2)$$ So, in particular, $$\lim_{n \to \infty} n^2 \ln\left(1 - \frac{1}{n} \right) = \lim_{n \to \infty} n^2 \left( -\frac{1}{n} + O(n^{-2}) \right) = \lim_{n \to \infty} -n + O(1) = -\infty$$ If you're not comfortable with $O$ notation, you could just use the next term of the Taylor series. If you put your mind to it, you should be able to obtain the exact value of the limit $$\lim_{n \to \infty} n^2 \left(\ln\left(1 - \frac{1}{n} \right) + \frac{1}{n} \right)$$ so you could alternately compute the limit as $$\lim_{n \to \infty} n^2 \ln\left(1 - \frac{1}{n} \right) = \left( \lim_{n \to \infty} n^2 \left(-\frac{1}{n}\right)\right) + \left( \lim_{n \to \infty} n^2 \left( \ln\left(1 - \frac{1}{n} \right) + \frac{1}{n} \right) \right)$$ Late answer since it was tagged as duplicate. An elementary way without any $$e$$ or logarithm could be: $$\begin{eqnarray*}\left(1-\frac{1}{n}\right)^{n^2} & = & \frac 1{\left(1+\frac{1}{n-1}\right)^{n^2}}\\ & \stackrel{\text{binomial formula}}{\leq} & \frac 1{\frac{n^2}{n-1}}\\ & < & \frac 1 n \stackrel{n\to \infty}{\longrightarrow} 0 \end{eqnarray*}$$
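A quick numerical look (my own addition, not from the thread) illustrates both claims at once: $(1-1/n)^{n^2}\to 0$, while the $1^\infty$ correction factor from the answer above tends to $e^{-1/2}$ rather than $1$:

```python
import math

for n in (10, 100, 1000):
    v = (1 - 1 / n) ** (n * n)
    ratio = ((1 - 1 / n) ** n * math.e) ** n   # the "1^inf" factor raised to n
    print(n, v, ratio)
# v heads to 0 (underflowing to 0.0 for large n); ratio approaches e^(-1/2), about 0.6065
```

This makes the "race" concrete: the base of the correction factor tends to $1$ slowly enough that raising it to the $n$-th power leaves a residual factor of $e^{-1/2}$.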
# Is it correct to say all extrema happen at critical points but not all critical points are extrema? Is it correct to say all extrema happen at critical points but not all critical points are extrema? This question is for single variable calculus. But it would be great if someone could also provide insight into how well this statement generalizes to more complex functions (multi-variable functions, vector-valued functions, etc.) • Extrema need not be critical points. They can also be the "end-points" in a given domain. This is what is called "absolute extrema". – Airdish Mar 31 '16 at 10:56 • All interior extrema are critical points. Of course, critical points need not be extrema. – Sangchul Lee Mar 31 '16 at 10:58 Every local extremum in the interior of the domain of a differentiable function is necessarily a critical point, i.e. $f'(x)=0$ is a necessary condition for $x$ to be a local extremum. There are critical points which are not local extrema. Note that I'm stressing local extremum in the interior here. To account for global extrema, one needs to consider the behavior on the boundary of the domain as well. Consider the function $$f:[-1,1]\rightarrow \mathbb R, x\mapsto x^2.$$ $x_0=0$ is a critical point and indeed a local extremum (a minimum). For the global extrema note that $x^2 \leq 1$ for $x\in [-1,1]$ so that $f(1)=f(-1)=1$ and $x_1=1, x_2=-1$ are points where $f$ attains a (global) maximum, but they are not critical points. Similarly, for $$f:[-1,1]\rightarrow \mathbb R, x\mapsto x^3.$$ $x_0=0$ is a critical point, but there's no extremum. The maximum and minimum are attained at $x_1=1$ and $x_2=-1$, neither of which are critical points.
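The two examples above are easy to probe numerically. This is a sketch of my own (the grid resolution is an arbitrary choice): sample each function on a grid over $[-1,1]$ and compare where the extrema land with where the derivative vanishes.

```python
xs = [k / 1000 for k in range(-1000, 1001)]        # grid on [-1, 1]

examples = [("x^2", lambda x: x * x, lambda x: 2 * x),
            ("x^3", lambda x: x ** 3, lambda x: 3 * x * x)]
for name, f, df in examples:
    xmax = max(xs, key=f)
    xmin = min(xs, key=f)
    print(name, "max at", xmax, "f' there:", df(xmax),
          "| min at", xmin, "f' there:", df(xmin))
# x^2: the minimum sits at the critical point 0; the maximum sits on the
#      boundary, where f' is nonzero.
# x^3: both extrema sit on the boundary; the critical point 0 is no extremum.
```

The output matches the discussion: interior extrema coincide with zeros of $f'$, while boundary extrema need not.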
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9755769106559808, "lm_q1q2_score": 0.8510419584262514, "lm_q2_score": 0.8723473763375644, "openwebmath_perplexity": 292.5421390087865, "openwebmath_score": 0.8134347200393677, "tags": null, "url": "https://math.stackexchange.com/questions/1721620/is-it-correct-to-say-all-extrema-happen-at-critical-points-but-not-all-critical" }
• When you say boundary points, you just mean the boundary of the domain (built-in or contrived), right? What I mean is $2$ would be an inherent boundary point for $x/(x-2)$ and contrived ones would be $[-1,1]$. Sorry if this is not correct terminology. Or am I just thinking too hard? And the inherent ones are already covered by the $f'(c)=0$ or DNE clause? – AlanSTACK Apr 1 '16 at 1:13 • @Alan: I mean boundary points of the domain where the function is defined. – Roland Apr 1 '16 at 5:23 If you assume $f$ is a continuous, real-valued function on a closed, bounded interval $[a, b]$, then: • $f$ has at least one absolute maximum and at least one absolute minimum. (The Extreme Value Theorem.) • If $f$ has a local (relative) extremum at an interior point $x_{0}$, then either $f'(x_{0}) = 0$ or $f'(x_{0})$ does not exist. (I.e., $x_{0}$ is a critical point of $f$, modulo your definition of "critical point".) • If your definition of "critical point" includes endpoints of $[a, b]$, then "yes", every local extremum of $f$ is a critical point of $f$. (This convention is reasonable, since strictly speaking $f'$ does not exist at an endpoint of an interval. On the other hand, this convention is not universal.) For the other direction: • An interior critical point can fail to be a local extremum. Examples (with $x_{0} = 0$) include $f(x) = x^{3}$ and $$g(x) = \begin{cases} x & x < 0, \\ 2x & x \geq 0, \end{cases}$$ both viewed as functions on some interval containing $0$ in the interior. (In the first example, $f'(0) = 0$; in the second, $g'(0)$ does not exist.) • An endpoint can fail to be a local extremum, e.g. $$f(x) = \begin{cases} x^{2} \sin(1/x) & 0 < x, \\ 0 & x = 0. \end{cases}$$
There are analogous results for real-valued functions of more than one variable, but they're a little more work to state, because not every "connected subset" of the plane (say) is a formal analog of a closed, bounded interval. Loosely, though, a continuous function $f$ on a closed, bounded subset $D$ of $\mathbf{R}^{n}$ has at least one absolute maximum and at least one absolute minimum in $D$, and these must occur either at (i) an interior point $x_{0}$ of $D$ where the total derivative $Df(x_{0})$ is zero; (ii) an interior point $x_{0}$ where $Df(x_{0})$ does not exist; or (iii) a boundary point of $D$. There is no analog for vector-valued functions because the concepts of "maximum" and "minimum" make no sense. (If $n > 1$, the set $\mathbf{R}^{n}$ is not ordered in any way compatible with vector addition and scalar multiplication.)
• For the left endpoint $a$, the differential quotient $\lim_{h \rightarrow 0}\frac{f(a+h)-f(a)}{h}$ is a limit which may (or may not) exist, and should be regarded as a one-sided limit (i.e. only for $h>0$). If $f'(a)$ exists or not is dependent on the existence of this one-sided limit. – Roland Mar 31 '16 at 11:38 • @Roland: Agreed, but in my experience that criterion of differentiability at an endpoint is (also) a convention, and (with some justification) not a universal one. (If it matters, I'm not advocating either convention of endpoint differentiability, just noting that as stated, the OP's question can only be answered with qualifications.) – Andrew D. Hwang Mar 31 '16 at 11:55 • yet you write 'strictly speaking $f′$ does not exist at an endpoint of an interval', which is wrong for functions which are differentiable in a larger open domain which includes the interval $[a,b]$. – Roland Mar 31 '16 at 12:02 • Strictly speaking, "$f$ is differentiable at $x_{0}$" involves a two-sided limit at $x_{0}$. If you extend the definition at an endpoint by requiring only a one-sided limit, that's fine, but you're no longer invoking the usual definition of differentiability from one-variable calculus. (If you're saying the extended definition is universal, I have no mathematical grounds to argue otherwise. I'm merely saying that this extension is not tacit in elementary calculus. For instance, $f(x) = |x|$ is not differentiable at $0$, but is differentiable on $[0, \infty)$ by your definition.) – Andrew D. Hwang Mar 31 '16 at 12:36
• @Roland It is explicitly stated in some real analysis textbooks (ex. Apostol) that for a derivative $f'(c)$ to exist, $f$ must be defined in an open interval containing $c$. Other authors, like Rudin, do not explicitly require this yet mention the issue of differentiability at endpoints in passing. The motivation for this requirement is far beyond the scope of introductory calculus. See the answer by Willie Wong here: math.stackexchange.com/questions/126176/… – MathematicsStudent1122 Apr 2 '16 at 5:07
Yes. You can analyse this multi-variable example $z = x^2 - y^2$. This has a critical point which is not an extremum. Such points are called saddle points. • What's your function? What's your domain? – Roland Mar 31 '16 at 11:40 • @Roland $f: \mathbb R^2 \rightarrow \mathbb R$ with $z = f(x,y) = x^2 - y^2$. – crbah Mar 31 '16 at 11:44 • What if the domain is $\{(x,y): x^2 +y^2 \leq 1\}$? Is the answer still yes? – Roland Mar 31 '16 at 11:49 • @Roland Yes. You can see its graph on en.wikipedia.org/wiki/Saddle_point – crbah Mar 31 '16 at 11:53 • My comment for the restricted domain is not about the critical point which is not an extremum. It's about the extremal points which are not critical points ($(x,y)=(1,0)$ and $(x,y)=(0,1)$). – Roland Mar 31 '16 at 12:00
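The saddle behaviour of $z = x^2 - y^2$ is visible immediately from nearby values (a small check of my own): the gradient vanishes at the origin, yet $z$ takes both signs arbitrarily close to it, so the origin is no extremum.

```python
f = lambda x, y: x * x - y * y
grad = lambda x, y: (2 * x, -2 * y)

print(grad(0.0, 0.0))             # (0.0, -0.0): the origin is a critical point
eps = 1e-3
print(f(eps, 0.0), f(0.0, eps))   # one positive, one negative: not an extremum
```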
# Homework Help: Finding required resistor to reduce voltage 1. Sep 25, 2011 ### deus_ex_86 1. The problem statement, all variables and given/known data Your task is to design a "dummy" indicator light for your car. The cheapest bulb available is a #47 incandescent lamp rated at 6.3V for 150 mA. The problem is, your car produces 13.5 V when it's on. Choose the nearest 10% resistor that will reduce the voltage across the bulb to 6.3V. 2. Relevant equations V = IR 3. The attempt at a solution I modelled the bulb as a resistor: 6.3 V = .150 A * R Which gives R = 42 $\Omega$. From there, I plugged this value into a new circuit with a 13.5 V source instead of a 6.3 V source, and added the unknown resistor to the equation: 13.5 V = .150 A * R + 6.3 V 7.2 V = .150 A * R R = 48 $\Omega$. The nearest 10% resistor value is 47 $\Omega$, but this wouldn't reduce the voltage across the bulb to 6.3 V. The next closest 10% resistor is 56 $\Omega$. It just seems that this value is too high ... Did I follow this process correctly? What would you choose? 2. Sep 25, 2011 ### lewando 48-ohms look right. What are you getting for the new bulb voltage (with the 47-ohm R)? 3. Sep 25, 2011 ### deus_ex_86 48 ohms is the theoretical value, but I have to choose a 10% resistor to use. When I use the 47 ohm resistor, I end up with 6.37 V across the bulb. 4. Sep 25, 2011 ### lewando Close enough! [edit: To the letter of the question the solution does not exist. To the intent of the question, 47-ohms is a good answer, if you stipulate your voltage deviation, you should be fine.] 5. Sep 27, 2011 ### zgozvrm With a 47Ω resistor, the bulb will have approximately 6.37 volts across it. Light bulbs are not very picky, so this will work fine. With the 56Ω resistor, the bulb will only have about 5.79 volts across it. Still, a workable voltage, but it's the current that we're really concerned with...
The 47Ω resistor will limit the current through the bulb to about 151.69 mA, which is just over the design rating. The bulb will burn slightly brighter (although probably not noticeably). The 56Ω resistor will limit the current down to about 137.76 mA and the bulb will burn slightly dimmer. You're only looking at about an 8% loss of current though, so either will work. I'd use the 47Ω resistor.

6. Sep 27, 2011

### deus_ex_86

Thanks, so would my TA, as I got a 100% on the assignment. :)

7. Sep 27, 2011

### 2milehi

With ±10% resistors there could be a resistor that is 42.3Ω, and car battery voltage can spike into the 14.0 volt range. That would cause 166 mA to flow through the bulb, or about 10.7% more. Since power is I²R, there can be an extra 23% power dissipated through the bulb, which will shorten its life. IMO the design would have had some breathing room built in if you went with the 56Ω resistor.
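The arithmetic in this thread is easy to script. Below is a small sketch — the component values come from the thread, and the bulb is modelled as a fixed 42 Ω resistance, which is the thread's simplification, not how a real incandescent filament actually behaves:

```python
V_SUPPLY = 13.5   # volts produced by the car
V_BULB = 6.3      # rated bulb voltage
I_BULB = 0.150    # rated bulb current, amps

# Model the bulb as a fixed resistance at its operating point.
r_bulb = V_BULB / I_BULB                  # 42 ohms
r_series = (V_SUPPLY - V_BULB) / I_BULB   # ideal dropping resistor, 48 ohms

def bulb_current(r):
    # Current with a candidate series resistor r, treating the bulb
    # as the fixed 42-ohm resistance above.
    return V_SUPPLY / (r + r_bulb)

i_47 = bulb_current(47.0)  # ~151.7 mA, slightly over the rating
i_56 = bulb_current(56.0)  # ~137.8 mA, slightly under the rating
```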
### leetcode Question 102: Sqrt(x)

Implement int sqrt(int x). Compute and return the square root of x.

Analysis:

According to Newton's Method (http://en.wikipedia.org/wiki/Newton's_method), we can use the iteration x1 = (x0 + x/x0)/2 to get the sqrt(x).

Code:

class Solution {
public:
    int sqrt(int x) {
        // Start typing your C/C++ solution below
        // DO NOT write int main() function
        if (x==0) {return 0;}
        if (x==1) {return 1;}
        double x0 = 1;
        double x1;
        while (true){
            x1 = (x0+ x/x0)/2;
            if (abs(x1-x0)<1){return x1;}
            x0=x1;
        }
    }
};

Another solution: the binary search approach is a more general way of solving this problem. One thing you need to consider is the length of the input, since taking the mid of a big value and computing its square may overflow the int type. We can use "long long", which has a max value 2^63-1.

The max of an int is 2^15-1
The max of a long is 2^31-1

class Solution {
public:
    int sqrt(int x) {
        // Start typing your C/C++ solution below
        // DO NOT write int main() function
        long long high = x;
        long long low = 0;
        if (x<=0) {return 0;}
        if (x==1) {return 1;}
        while (high-low >1){
            long long mid = low + (high-low)/2;
            if (mid*mid<=x){low = mid;}
            else {high = mid;}
        }
        return low;
    }
};

1. I believe in the interview, they won't expect you to remember Newton's method. The solution they are expecting is a Binary Search method. The binary search approach is added to this post.
2. This comment has been removed by the author.
2. Quoting on one line above "The max of an int is 2^15-1", int in most cases is 32 bit, unless for some specific platform.
3. Is there a reason that you use mid = low + (high-low)/2; rather than (high + low)/2; ?
1. Yes, using high+low /2 may cause overflow errors: when high and low both are very big, high+low may exceed the range and cause the overflow error, while high-low will not.
2. This comment has been removed by the author.
3. Nice solution man. Although I think you need not worry about overflowing: both high and low are within the range [0 INT_MAX], so (high + low) is at most 2*INT_MAX, which can perfectly fit into a 'long long int' integer. So as long as you define high, low and mid as 'long long int', an overflow will never occur. Or better yet, even if both low and high are defined as 'int', we are still good. Note that it is guaranteed that (int)sqrt(x) < (int)(x/2) for any x >= 6, therefore, for any large enough number, in the first iteration, we have low == 0, so (low + high) is always <= INT_MAX (no overflow). Then we always have mid*mid > x, so it is always 'high = mid;' that will be executed in the first iteration, i.e. high <= INT_MAX / 2. From there on, both low and high will be within [0 INT_MAX/2] so (low + high) will never exceed INT_MAX. Of course 'mid' must be defined as 'long long int' since we need to perform 'mid*mid', which may well exceed INT_MAX. But if we limit the initial value of 'high' to min(std::sqrt(INT_MAX) + 1, x/2), then it is safe to just define 'mid' as 'int'. std::sqrt(INT_MAX) is a constant that can be precomputed so it does not violate the requirement of this problem.
4. should u cast long to int in the end?
5. Decent improvement as (abs(x1-x0)<1) for Newton's Method.
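For illustration, the thread's binary-search approach translates to a short Python sketch. Python integers do not overflow, so the `low + (high - low) // 2` midpoint idiom is kept purely to mirror the C++ discussion above:

```python
def isqrt(x):
    # Integer square root by binary search: returns floor(sqrt(x)).
    if x <= 0:
        return 0
    if x == 1:
        return 1
    low, high = 0, x
    while high - low > 1:
        # low + (high - low) // 2 avoids overflow in fixed-width
        # languages; in Python it is just a stylistic carry-over.
        mid = low + (high - low) // 2
        if mid * mid <= x:
            low = mid
        else:
            high = mid
    return low
```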
$A \oplus B = A \oplus C$ imply $B = C$?

I don't quite understand how $\oplus$ (xor) works yet. I know that fundamentally, in terms of truth tables, it means only one value ($p$ or $q$) can be true, but not both. But when it comes to solving problems with them or proving equalities I have no idea how to use $\oplus$. For example: I'm trying to do a problem in which I have to prove, or disprove with a counterexample, whether or not $A \oplus B = A \oplus C$ implies $B = C$. I know that the Venn diagram of $\oplus$ in this case includes the regions of $A$ and $B$ excluding the areas where they overlap. And similarly it includes the regions of $A$ and $C$ but not the areas where they overlap. It would look something like this: I feel the statement above would be true just by looking at the Venn diagram, since the area $ABC$ is included in the $\oplus$, but I'm not sure if that's an adequate proof. On the other hand, I could be completely wrong about my reasoning. Also, just for clarity's sake: would $A\cup B = A \cup C$ and $A \cap B = A \cap C$ be proven in a similar way, to show whether or not the conditions imply $B = C$? A counterexample/proof of this would be appreciated as well.

• For still more fun show that the power set of a set $X$, together with the xor operation, is a group. And, of course, groups have the cancellation law you ask about. – GEdgar Oct 23 '13 at 17:41
• The xor being, in that case, called the symmetric difference. @user101279 : By the way, $\oplus$ is addition modulo $2$. – xavierm02 Oct 23 '13 at 18:04
• Remark also that xor is the addition operation when we regard a Boolean algebra as essentially the same as a Boolean ring. – user43208 Oct 23 '13 at 18:05
• Think of $\oplus$ as $\neq$. – copper.hat Oct 23 '13 at 18:06
• If we omit the question about intersection and union in the last paragraph, this is the same as math.stackexchange.com/questions/294460/… – Martin Sleziak Aug 30 '14 at 15:30
Think of $\oplus$ as $\neq$. That is $A \oplus B$ iff $A \neq B$. Note that $A \oplus A$ is always false, and $\text{False}\oplus A = A$. Then $A \oplus (A \oplus B) = (A \oplus A) \oplus B = \text{False} \oplus B = B$. Similarly, $A \oplus (A \oplus C) = C$, hence $B=C$. Aside: A 'cute' (as in amusing but not of any practical significance) use of $\oplus$ is to swap the values of two bit variables in a programming language without using an intermediate variable: \begin{eqnarray} x = y \oplus x \\ y = y \oplus x \\ x = y \oplus x \\ \end{eqnarray} Show that the values of $x,y$ are swapped! Hint: $A\oplus(A\oplus B)=(A\oplus A)\oplus B = B$. And of course $A\cup B=A\cup C$ does not imply $B=C$ (consider the case $B=A\ne \emptyset = C$). And $A\cap B=A\cap C$ does not imply $B=C$ either (consider the case $A=\emptyset$) Hint: $\oplus$ is associative with unit $\emptyset$, and $A \oplus A = \emptyset$. Does this give you an idea for canceling? This can be done using a simple calculation. But I don't know how to interpret your question, so I'll give two answers. :-) I'm assuming $\;A,B,C\;$ are booleans. I will write $\;\not\equiv\;$ instead of $\;\oplus\;$, and $\;\equiv\;$ instead of $\;=\;$ on booleans. First, note that $\;\equiv\;$ and $\;\not\equiv\;$ are not only both associative, but they are also mutually associative. Therefore no parentheses are needed in the following calculation. We can now simplify $\;A \oplus B = A \oplus C\;$ as follows: \begin{align} & A \not\equiv B \equiv A \not\equiv C \\ \equiv & \;\;\;\;\;\text{"rearrange"} \\ & A \not\equiv A \equiv B \not\equiv C \\ \equiv & \;\;\;\;\;\text{"simplify"} \\ & \text{false} \equiv B \not\equiv C \\ \equiv & \;\;\;\;\;\text{"simplify"} \\ & B \equiv C \\ \end{align} If instead $\;A,B,C\;$ are sets, and your $\;\oplus\;$ is the symmetric difference of two sets (which is normally written as $\;\triangle\;$), then the proof is slightly longer, but with essentially the same structure.
The simplest definition of symmetric difference is $$x \in A \oplus B \equiv x \in A \not\equiv x \in B$$ We can expand the definitions and simplify using logic, as follows: \begin{align} & A \oplus B = A \oplus C \\ \equiv & \;\;\;\;\;\text{"set extensionality; definition of $\;\oplus\;$, twice"} \\ & \langle \forall x :: x \in A \not\equiv x \in B \equiv x \in A \not\equiv x \in C \rangle \\ \equiv & \;\;\;\;\;\text{"logic: rearrange"} \\ & \langle \forall x :: x \in A \not\equiv x \in A \equiv x \in B \not\equiv x \in C \rangle \\ \equiv & \;\;\;\;\;\text{"logic: simplify"} \\ & \langle \forall x :: \text{false} \equiv x \in B \not\equiv x \in C \rangle \\ \equiv & \;\;\;\;\;\text{"logic: simplify"} \\ & \langle \forall x :: x \in B \equiv x \in C \rangle \\ \equiv & \;\;\;\;\;\text{"set extensionality"} \\ & B = C \\ \end{align} In both cases, we have found a stronger conclusion than was asked: we proved equivalence of the two expressions. Also just for clarity's sake: Would $A\cup B = A \cup C$ and $A \cap B = A \cap C$ be proven in a similar way to show whether or not the conditions imply $B = C$? A counterexample/ proof of this would be appreciated as well. $A\cup B=A\cup C$ $\Rightarrow$ $B=C$ is not true in general. Counterexample: Take any non-empty set $A$ and also take $B=A$ and $C=\emptyset$. Then $A\cup B=A\cup C=A$, but $B\ne C$. $A\cap B=A\cap C$ $\Rightarrow$ $B=C$ is not true in general. Take some element $x\notin A$ and put $B=A$, $C=A\cup\{x\}$. Then $A\cap B=A\cap C=A$, but $B\ne C$.
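Both the cancellation law and the swap trick are easy to check concretely. A small sketch using Python's `^` operator, which is bitwise xor on integers and symmetric difference on sets (variable names are mine):

```python
def xor_swap(x, y):
    # The "cute" swap from the answer above, on integers.
    x = y ^ x
    y = y ^ x
    x = y ^ x
    return x, y

# Cancellation on sets: from A ^ B we can recover B as A ^ (A ^ B),
# since A ^ A is empty and (empty ^ B) == B.
A, B = {1, 2, 3}, {2, 4}
recovered = A ^ (A ^ B)

# Union and intersection do NOT cancel; counterexamples from above:
A1, B1, C1 = {1}, {1}, set()   # A1 | B1 == A1 | C1 but B1 != C1
A2, B2, C2 = set(), {1}, {2}   # A2 & B2 == A2 & C2 but B2 != C2
```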
# Biased binary search complexity

We know binary search on an $n$-element array runs in O(log(n)). We have this recurrence, through which the search space is reduced by half in each iteration, after a single comparison. T(n) = T(n/2) + 1 Either by applying the Master Theorem or analytically, we arrive at its complexity of log(n). If I introduce bias in the search, by unequally partitioning the array instead of into equal halves, how would the worst case time complexity change? The unequal partitions are t * n and (1-t) * n where 0<=t<=1. This is a mathematics question as it involves asymptotic analysis and hence I don't wish my question to be downvoted.

• If I understand what you're asking correctly, wouldn't the worst case complexity be $O(n)$ due to it potentially using a partition size of $1$ and $n - 1$, requiring on average $\frac{n}{2}$ checks? Note it should never be worse than $O(n)$ as any type of biased binary search should only ever check each value at the most $1$ time. – John Omielan May 18 at 2:15
• Correct. But I want to come up with an analytical way to calculate the worst case complexity for any particular value of t. In general, it is evident that the worst case complexities are O(n) when t=0 and O(log(n)) when t=1/2. – Argha Chakraborty May 18 at 6:04
• There's nothing special about $t = \frac{1}{2}$. As long as you reduce the problem size by a factor of $t$, for any $0 \leq t < 1$, you will get logarithmic running time. – Joppy May 18 at 7:16
• That is what I want to prove. – Argha Chakraborty May 18 at 7:33

In the worst case scenarios, where $$t = 0$$ or $$t = 1$$, and assuming that at least one value will always be checked, then as I stated in my comment, the average number of checks would be $$\frac{n}{2}$$ and the maximum number would be $$n$$. However, for $$0 \lt t \lt 1$$, where $$tn \gg 1$$, then as indicated in the comment by Joppy, the number of steps in the worst case scenario would be logarithmic.
To see this, let $$u = \max(t, 1-t) \tag{1}\label{eq1}$$ The worst case would be that the value is always in the larger partition after each step, with $$n_i$$ being the size of this partition after $$i$$ steps, and with $$n_0 = n$$. Thus, $$n_i = un_{i-1} \; \forall \; i \ge 1 \tag{2}\label{eq2}$$ As such, $$n_1 = un$$, $$n_2 = un_1 = u^2 n$$, etc., to get that $$n_i = u^i n \; \forall \; i \ge 0 \tag{3}\label{eq3}$$ The search will eventually end after $$m$$ steps where $$u^m n \approx 1 \tag{4}\label{eq4}$$ Note this is similar to the concept of the amount of time for exponential decay to cause a substance to eventually disappear, with the worst case number of unbiased binary searches being about the number of half-life periods. To use a value $$\gt 1$$ for logarithms, let $$v = \frac{1}{u} \tag{5}\label{eq5}$$ so \eqref{eq4} becomes $$n \approx v^m \; \implies \; m \approx \log_v(n) = \log_v(e)\ln(n) \tag{6}\label{eq6}$$ For $$t = u = \frac{1}{2}$$, then $$v = 2$$ and \eqref{eq6} gives $$m \approx (1.44269)\ln(n)$$. With a relatively extreme case of $$t = \frac{1}{1001}$$, then $$u = \frac{1000}{1001}$$ and $$v = 1.001$$, so \eqref{eq6} gives $$m \approx (1000.49991)\ln(n)$$, i.e., about $$693.5$$ times as large. Nonetheless, it's still generally considerably faster than just checking each value since, for $$n = 10^{12}$$, \eqref{eq6} gives $$m \approx (1000.49991)(27.63102) \approx 27644.8$$, so it's about $$\frac{10^{12}}{27644.8} \approx 3.61731 \times 10^7$$ times faster. The cases $$t=0$$ and $$t=1$$ do not correspond to a terminating algorithm. Otherwise, the behavior remains logarithmic. In the worst case, the array is every time reduced according to the smaller of $$t$$ and $$1-t$$; WLOG, let $$t$$. After $$k$$ steps, we approximately reduce from $$n$$ to $$nt^k\sim 1$$, so that $$k\sim-\log_tn$$.
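The logarithmic behaviour is easy to see empirically. The sketch below models the worst case by shrinking the remaining range by the larger fraction $u = \max(t, 1-t)$ each step; rounding to integer sizes makes the simulated count land somewhat below the analytic estimate $\log(n)/\log(1/u)$:

```python
import math

def worst_case_steps(n, t):
    # Worst case: the target always falls in the larger partition.
    u = max(t, 1.0 - t)
    steps = 0
    while n > 1:
        n = max(1, int(u * n))  # integer sizes; flooring speeds the tail
        steps += 1
    return steps

def analytic_estimate(n, t):
    # m with u^m * n ~ 1, i.e. m = log(n) / log(1/u).
    u = max(t, 1.0 - t)
    return math.log(n) / math.log(1.0 / u)

s_unbiased = worst_case_steps(10**6, 0.5)   # ~ log2(10^6), about 20
s_biased = worst_case_steps(10**6, 0.1)     # analytically about 131
```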
# How to put a matrix in Jordan canonical form, when it has a multiple eigenvalue? Put the matrix $$\begin{bmatrix} 3 & -4\\ 1 & -1\end{bmatrix}$$ in Jordan Canonical Form. Moreover, find the appropriate transition matrix to the basis in which the original matrix assumes its Jordan form. I'm having a lot of trouble with this. I know that the eigenvalue has multiplicity two and is $\lambda = 1$. I can find the first eigenvector, which is: \begin{bmatrix} 2 \\ 1 \\ \end{bmatrix} I'm having trouble finding the second since both eigenvalues tell us the same thing. But I'm not nearly as concerned about the eigenvectors as I am about what to do after. If anyone could explain thoroughly the next steps involved (not necessarily the answer but how to obtain it), I would be forever grateful. This homework is in 2 days and it may determine my grade letter. • Yes, I know the only eigenvalues are one. The problem is finding the eigenvectors corresponding to 1 and more importantly how to proceed. – Arbitrationer Nov 23 '14 at 5:43 • You don't seem to understand that when vadim wrote, "the eigenspace corresponding to $\lambda=1$ is one-dimensional", he was telling you there is only the one eigenvector that you have found (and scalar multiples of that eigenvector, but they don't help). What you need is a generalized eigenvector; a vector $w$ such that $(A-\lambda I)w=v$, where $v$ is the eigenvector you have found. Do you not have a text, or some notes to follow? Anyway, "generalized eigenvector" is the keyphrase: go look it up, and then post an answer when you have worked out how to solve the problem. – Gerry Myerson Nov 23 '14 at 6:16
To put a matrix in Jordan normal form requires knowing three matrices such that $A=PJP^{-1}$, i.e.: a matrix $P^{-1}$ that transforms from the canonical basis to a new basis in which the matrix $J$ represents the transformation, so that a vector $\mathbf{v}$ is sent to the same transformed vector $\mathbf{v'}$ as we find when we transform $\mathbf{v}$ with $A$ in the canonical basis. Finally, the matrix $P$ returns this result to the canonical basis. As noted in the OP, the matrix $A$ has eigenvalues $\lambda_1=\lambda_2=1$ and a single eigenvector $$\mathbf{u_1}=\left[ \begin{array}{cccc} 2\\ 1 \end{array} \right]$$ So the main problem is to find another vector that completes the new basis. To find such a vector, note that all vectors $\mathbf{x}$ such that $(A-\lambda I)\mathbf{x}=0$ lie in the eigenspace generated by the eigenvector $\mathbf{u_1}$, so we want a vector $\mathbf{u_2}$ such that $(A-\lambda I)\mathbf{u_2} \ne 0$, and the way to do this is to find a vector such that: $(A-\lambda I)\mathbf{u_2} = \mathbf{u_1}$. (Note that this equation implies $(A-\lambda I)^2\mathbf{u_2}= 0$.) Solving in our case we find: $$\left[ \begin{array}{cccc} 2&-4\\ 1&-2 \end{array} \right] \left[ \begin{array}{cccc} x\\ y \end{array} \right]= \left[ \begin{array}{cccc} 2\\ 1 \end{array} \right]$$ so that the components $x$ and $y$ of the searched vector must satisfy $x-2y=1$, and we can find the vector $$\mathbf{u_2}= \left[ \begin{array}{cccc} 1\\ 0 \end{array} \right]$$ So the matrix $P$ we are searching for is $$P=[\mathbf{u_1},\mathbf{u_2}]= \left[ \begin{array}{cccc} 2&1\\ 1&0 \end{array} \right]$$ And the inverse is: $$P^{-1}= \left[ \begin{array}{cccc} 0&1\\ 1&-2 \end{array} \right]$$ The matrix $J$ is a typical Jordan block, with the eigenvalue on the diagonal and an entry $1$ just above it: $$J= \left[ \begin{array}{cccc} 1&1\\ 0&1 \end{array} \right]$$ and we can easily verify that $A=PJP^{-1}$.
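The decomposition can be verified with exact integer arithmetic. A minimal sketch (matrices copied from the answer above, plain $2\times 2$ products, no libraries):

```python
def matmul2(X, Y):
    # Product of two 2x2 matrices given as nested lists.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec2(X, v):
    # 2x2 matrix times a column vector, returned as a tuple.
    return (X[0][0] * v[0] + X[0][1] * v[1],
            X[1][0] * v[0] + X[1][1] * v[1])

A = [[3, -4], [1, -1]]
P = [[2, 1], [1, 0]]        # columns u1 = (2, 1) and u2 = (1, 0)
P_inv = [[0, 1], [1, -2]]
J = [[1, 1], [0, 1]]        # Jordan block for the eigenvalue 1

PJP_inv = matmul2(matmul2(P, J), P_inv)

# The generalized-eigenvector relations: (A - I) u2 = u1, (A - I) u1 = 0.
AmI = [[2, -4], [1, -2]]
image_u2 = matvec2(AmI, (1, 0))
image_u1 = matvec2(AmI, (2, 1))
```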
The characteristic polynomial is $\det (A - \lambda I) = (\lambda -1)^2.$ When the dimension of the null space ($1$) of an eigenvalue ($\lambda = 1$) is less than its algebraic multiplicity ($2$), you need to find generalized eigenvectors. In this instance, you need to solve $(A - I_2) \left( \begin{array}{l} x \cr y\end{array} \right) = \left( \begin{array}{l} 2 \cr 1 \end{array} \right).$ This gives you $x = 1, y = 0.$ With respect to the basis $\{ \left( \begin{array}{l} 2 \cr 1 \end{array} \right), \left( \begin{array}{l} 1 \cr 0 \end{array} \right) \}$ your transformation is represented by the Jordan canonical form $\left( \begin{array}{ll} 1 & 1 \cr 0 & 1 \end{array} \right).$

• Might have been better to let OP do some of the work. – Gerry Myerson Nov 23 '14 at 23:27
• @myerson, perhaps i should have not done all the work. – abel Nov 23 '14 at 23:57
# Can a gradient vector field with no equilibria point in every direction? Suppose that $V:\mathbb{R}^n \to \mathbb{R}$ is a smooth function such that $\nabla V : \mathbb{R}^n \to \mathbb{R}^n$ has no equilibria (i.e. $\forall x \in \mathbb{R}^n : \nabla V (x) \not = 0$). Under these hypotheses, is it possible that $\nabla V (x)$ can point in every direction? To be more precise, under the above hypotheses the map $$\mathbb{R}^n \to \mathbb{S}^{n-1}$$ $$x \mapsto \frac{\nabla V(x)}{\|\nabla V (x)\|}$$ is well-defined. Is it impossible for such a map to be surjective? If not, what is a counterexample? • I believe this is possible from $R^2$ onwards. Can you write down a function for which one gradient flow line makes an $S$ shape? Can you make sure that at some distance from this $S$ the flowlines it straighten to vertical straight lines? If you can do that you can mirror this construction and obtain the right function. – Thomas Rot Mar 15 '16 at 14:18 • That's an interesting idea. It seems plausible, but I'm having some trouble actually writing down such a function. – Matthew Kvalheim Mar 15 '16 at 16:55 • The more I think about it, I'm not sure a function satisfying the requirements @ThomasRot suggested exists. – Matthew Kvalheim Mar 15 '16 at 23:57 • A cool question! Also, it might be received well in MathOverflow, but given that you started a bounty I would think it better to let the bounty period expire before we consider migration. If you are in a hurry, respond to this and/or flag again. – Jyrki Lahtonen Mar 19 '16 at 6:53 A quick solution for $n=2$, and an explanation of where it came from: $$\nabla (e^x \sin y) = (e^x \sin y, e^x \cos y).$$ The right hand side is never zero, but does assume every nonzero value in $\mathbb{R}^2$.
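A numeric sanity check of this claim: given a nonzero target $(a,b)$, solving $e^x\sin y = a$, $e^x\cos y = b$ gives $y = \operatorname{atan2}(a, b)$ and $x = \tfrac12\log(a^2+b^2)$. This closed form is my own inversion of the formula, not from the source:

```python
import math

def grad(x, y):
    # Gradient of e^x sin y, as in the answer above.
    return (math.exp(x) * math.sin(y), math.exp(x) * math.cos(y))

def preimage(a, b):
    # A point (x, y) with grad(x, y) == (a, b), for (a, b) != (0, 0).
    return (0.5 * math.log(a * a + b * b), math.atan2(a, b))

targets = [(3.0, 4.0), (-1.0, 0.0), (0.0, -2.5), (1e-3, 1e6)]
errors = []
for a, b in targets:
    x, y = preimage(a, b)
    ga, gb = grad(x, y)
    errors.append(max(abs(ga - a), abs(gb - b)))
max_error = max(errors)

# The gradient never vanishes: its Euclidean norm is exactly e^x > 0.
norm_gap = max(abs(math.hypot(*grad(x, y)) - math.exp(x))
               for x in (-2.0, 0.0, 2.0) for y in (0.0, 1.0, 3.0))
```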
Motivation: If $f: \mathbb{C} \to \mathbb{C}$ is holomorphic, then $\nabla(\mathrm{Re}(f)) = (\mathrm{Re}(f'), -\mathrm{Im}(f'))$. I looked for an $f'$ (namely $e^z$) which takes $\mathbb{C}$ onto $\mathbb{C}_{\neq 0}$, and then integrated it to find $f$. Inspired by this, let's try $$\nabla (e^{x_0} \cos(x_1^2+x_2^2+\cdots + x_n^2)) =$$ $$e^{x_0} (\cos(x_1^2+\cdots + x_n^2), - 2x_1 \sin(x_1^2+\cdots + x_n^2), \cdots, -2x_n \sin(x_1^2+\cdots + x_n^2)).$$ First, we note that the gradient is not zero. The first coordinate only vanishes if $x_1^2+ \cdots + x_n^2$ is of the form $(2k+1) \frac{\pi}{2}$. But, in this case, at least one of $x_1$, $x_2$, ..., $x_n$ is nonzero; say $x_j$. Then $- 2x_j \sin(x_1^2+\cdots + x_n^2) = \mp 2x_j \neq 0$. Now, let's take a nonzero vector $(v_0, \ldots, v_n)$. We need to go through cases. If $v_0 = 0$, choose $(x_1, \ldots, x_n)$ proportional to $(v_1, \ldots, v_n)$, with $x_1^2+\cdots+x_n^2$ an odd multiple of $\frac{\pi}{2}$ chosen so that $\sin(x_1^2+\cdots+x_n^2)$ has the right sign. If $v_1 = \cdots = v_n = 0$ and $(-1)^k v_0>0$, take $x_1^2+\cdots+x_n^2 = k \pi$. If neither of those cases holds, we'll take $(x_1, \ldots, x_n)$ of the form $(t v_1, \ldots, t v_n)$ for some $t$ to be determined soon. Set $s = v_1^2+ \cdots + v_n^2$. Then our vector points in direction $\pm \left(-\frac{\cot(t^2 s)}{2t}, v_1, v_2, \ldots, v_n\right)$. Since $\cot$ runs through all of $\mathbb{R}$ as $t^2 s$ crosses each period, we can choose $t$ such that $\cot (t^2 s) = -2tv_0$ (by the intermediate value theorem), and we can get the sign right.

• Thank you for the insightful answer. Your counterexample for $\mathbb{R}^2$ and the motivation you gave are quite helpful. – Matthew Kvalheim Mar 24 '16 at 22:31

It is possible to have a vector field with no equilibrium points that takes every possible direction in $\mathbb{R}^n$ for any $n \ge 2$. The proof for the two-dimensional case is given below. From that, vector fields for the higher-dimensional cases can be constructed.
Let $\rho : (-\epsilon,\infty) \to \mathbb{R}$ be any $C^\infty$ function such that

1. $\rho(t)$ is even over $(-\epsilon,\epsilon)$ and $\rho(0) = \rho'(0) = 0$.
2. $|r\rho'(r)| < 1$ for all $r \ge 0$.
3. for some $r_c > 0$, $\rho(r_c) = 2\pi$.
4. for some $r_m > 0$, $N \in \mathbb{Z}$, $\rho(r) = 2\pi N$ for all $r \ge r_m$.

For any point $(x,y) \in \mathbb{R}^2$, let $(r,\theta)$ be the corresponding polar coordinates. Let $\hat{e}_x, \hat{e}_y, \hat{e}_r, \hat{e}_\theta$ be unit vectors along the $x$, $y$, $r$ and $\theta$ directions respectively. Consider the following function $$U(x,y) = r\cos(\theta - \rho(r)) = x\cos\rho(r) + y\sin\rho(r)$$ Its gradient is given by: \begin{align} \vec{\nabla} U &= \hat{e}_x \left( \cos\rho + \frac{x\rho'}{r}( -x \sin\rho + y\cos\rho )\right) + \hat{e}_y \left( \sin\rho + \frac{y\rho'}{r}( -x \sin\rho + y\cos\rho )\right)\\ &= \hat{e}_r\left(\cos(\theta - \rho) + r\rho'\sin(\theta-\rho)\right) -\hat{e}_\theta\left(\sin(\theta-\rho)\right) \end{align} From these two expressions, it is easy to deduce
• $\vec{\nabla} U$ is well defined at $(0,0)$ because $\rho'(0) = 0$.
• $\vec{\nabla} U(0,0) = \hat{e}_x$ because $\rho(0) = \rho'(0) = 0$.
• $\vec{\nabla} U \ne 0$ for $r > 0$ because \begin{align} |\vec{\nabla} U| &= |\hat{e}_r\left(\cos(\theta - \rho) + r\rho'\sin(\theta-\rho)\right) -\hat{e}_\theta\left(\sin(\theta-\rho)\right)|\\ &\ge |\hat{e}_r\left(\cos(\theta - \rho)\right) -\hat{e}_\theta\left(\sin(\theta-\rho)\right)| - |\hat{e}_r \left(r\rho'\sin(\theta-\rho)\right)|\\ &\ge 1 - |r\rho'| > 0 \end{align}
Combining these, we find $\nabla U \ne 0$ over $\mathbb{R}^2$. This allows us to define the following unit vector field globally on $\mathbb{R}^2$. $$\hat{u}(x,y) = \frac{\vec{\nabla}U(x,y)}{\left|\vec{\nabla}U(x,y)\right|}$$
Over the curve $\displaystyle\;\gamma : [0,r_c] \ni t \quad\mapsto\quad (t\cos\rho(t),t\sin\rho(t)) \in \mathbb{R}^2\;$, we have $$\hat{u}(t) = \hat{e}_r = \hat{e}_x \cos\rho(t) + \hat{e}_y\sin\rho(t)$$ As we move from $\gamma(0)$ to $\gamma(r_c)$ along $\gamma$, $\hat{u}(\gamma(t))$ will cover the whole $S^1$ at least once. The map $\hat{u}$ is surjective on $\gamma$, and hence on $B(0,r_c)$ and $\mathbb{R}^2$. This means $U$ serves as the required counterexample $V$ for the two-dimensional case.

As an illustration, the following describes what $U(x,y)$ will typically look like under this construction. (figure omitted)

This particular $U(x,y)$ is computed using $$\rho(t) = 2\pi + (f(t)-2\pi)g(t) \quad\text{ where }\quad \begin{aligned} f(t) &= \log\sqrt{(e^{4\pi}-1)t^2 + 1}\\ g(t) &= \begin{cases} 1, & t \le 1\\ 1 - e^{-1/(e^{t-1} -1)}, & t > 1\end{cases} \end{aligned}$$ and plotted over the region $|x|,|y| \le 2$. This $\rho(t)$ satisfies the first three requirements at the beginning of this answer, with $r_c = 1$. For $t \gg r_c$, $\rho(t)$ tends to $2\pi$ exponentially as $t \to \infty$.

Back to the case of higher dimensions. Let's say we do have a $\rho$ that satisfies all four conditions. Consider the following function: $$V(x_1,x_2,\ldots,x_n) \stackrel{def}{=} U\left( x_1, \sqrt{x_2^2 + x_3^2 + \cdots + x_n^2} - (r_m+1)\right)$$ Since $B(0,r_m) \supset B(0,r_c)$, the map $\frac{\vec{\nabla}V}{|\vec{\nabla}V|}$ will be surjective over the hyper-torus $$\bigg\{ (x_1,\ldots,x_n) : x_1^2 + \left(\sqrt{x_2^2 + x_3^2 + \cdots + x_n^2} - (r_m+1)\right)^2 \le r_m^2 \bigg\}$$ Since $U(x,y) = x$ whenever $x^2 + y^2 \ge r_m^2$, $V(x_1,\ldots,x_n)$ equals $x_1$ over the complement of the hyper-torus. As that region contains the $x_1$-axis, the square root in the definition of $V$ won't cause any problem. This makes $V$ smooth everywhere. Finally, it is easy to check $\nabla V$ never vanishes.
Update

About the question of how this particular form of $\rho(t)$ was chosen. First, $\rho(r)$ cannot change too rapidly, or $\nabla U$ may become zero somewhere. Second, we need $\rho(r)$ to span a large enough interval to make $\hat{u}(r)$ surjective. This suggests choosing a $\rho(r)$ with $\rho'(r)$ as close to $\frac{1}{r}$ as possible. This means $$\rho(r) \sim \log r + \text{constant}.$$ In order for $U(x,y)$ to be smooth at the origin, I replace $\log r$ by $\log\sqrt{(e^{4\pi}-1)r^2 + 1}$. The constants are chosen so that as $r$ changes from $0$ to $1$, $\rho(r)$ picks up a change of $2\pi$, which is what we need. Finally, we want $\rho(r)$ to decay back to a constant multiple of $2\pi$ for large $r$ while staying smooth all the way; we need this in order to extend the result to higher dimensions. This can be done using a smooth partition of unity. The function $g(r)$ in the formula doesn't really have any specific meaning; it is simply something from trial and error that works.

• May I ask how you came up with the particular example of $\rho$ you gave? I'd like to understand the intuition behind your thought process. – Matthew Kvalheim Mar 21 '16 at 21:43
• @MatthewKvalheim answer updated; see the rationale for choosing such a $\rho$. – achille hui Mar 21 '16 at 22:25
• You wrote an expression for $\nabla U$ that involves division by $r$. Am I correct in the interpretation that you actually mean that $\nabla U$ is defined by the expression you wrote on $\mathbb{R}^2\setminus \{0\}$, and $\hat{e}_x$ at the origin? – Matthew Kvalheim Mar 22 '16 at 3:49
• Thank you very much for your help. – Matthew Kvalheim Mar 22 '16 at 3:58
• @MatthewKvalheim $\nabla U$ is defined as the gradient of $U$. What I mean is that it "equals" the expression involving division by $r$ for $r \ne 0$, and $\hat{e}_x$ at the origin. – achille hui Mar 22 '16 at 9:15
# Rationalizing denominator of $\frac{18}{\sqrt{162}}$. Cannot match textbook solution

I am given this expression and asked to simplify by rationalizing the denominator: $$\frac{18}{\sqrt{162}}$$ The solution is provided: $$\sqrt{2}$$ I arrived at: $$\frac{\sqrt{162}}{9}$$ Here is my thought process for arriving at this incorrect answer: $$\frac{18}{\sqrt{162}} = \frac{18}{\sqrt{162}} \cdot \frac{\sqrt{162}}{\sqrt{162}} = \frac{18\sqrt{162}}{162} = \frac{\sqrt{162}}{9}$$ How can I arrive at $\sqrt{2}$?

• Hint: $162=2\cdot 81$. Jan 3 '19 at 4:52
• $162=2\cdot81=2\cdot9^2$, so $\sqrt {162}=\sqrt {2\cdot9^2}=9\sqrt 2$. If you hadn't "deradicalized" the denominator you would have ended up with $\frac 2 {\sqrt 2}$, which also deradicalizes to $\sqrt 2$. Jan 3 '19 at 5:49

$$\frac{\sqrt{162}}{9} = \frac{\sqrt{2 \cdot 9^2}}{9} = \frac{9\sqrt{2}}{9} = \sqrt{2}$$

$$\require{cancel}\frac{18}{\sqrt{162}}=\frac{2\cdot3^2}{\sqrt{2\cdot3^4}}=\frac{2\cdot\cancel{3^2}}{\sqrt{2}\cdot\cancel{3^2}}=\frac{2}{\sqrt{2}}\cdot\frac{\sqrt{2}}{\sqrt{2}}=\frac{\cancel2\sqrt{2}}{\cancel{2}}=\sqrt{2}$$

Your thought process is good, but just continue by factorizing $162=2\cdot81=2\cdot3^4$. So $$\sqrt {162}=\sqrt {2\cdot3^4}=\sqrt {2}\sqrt {3^4}=\sqrt 2\cdot3^2=9\sqrt 2$$ and from there it's just mechanics.

$\sqrt{162}$ needs to be simplified further, since $162$ contains a perfect-square factor (namely $81$). Thus $$\sqrt{162} = \sqrt{2 \cdot 81} = \sqrt{2}\sqrt{81} = 9\sqrt{2}$$ and hence $$\frac{\sqrt{162}}{9} = \frac{9\sqrt{2}}{9} = \sqrt{2}$$ Alternatively, if $a = \frac{18}{\sqrt{162}}$, then $a^2 = \frac{18^2}{\sqrt{162}^2} = \frac{324}{162} = 2$. Since $a$ is a positive number, $a = \sqrt{2}$.
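The step every answer relies on — pulling the largest perfect-square factor out of the radicand — is easy to mechanize. A small Python sketch (the function name is mine, not from the thread):

```python
def simplify_sqrt(n):
    """Return (a, b) with sqrt(n) = a*sqrt(b) and b square-free,
    found by dividing out the largest perfect-square factor of n."""
    a, b = 1, n
    d = 2
    while d * d <= b:
        # pull each square factor d*d out of the radicand
        while b % (d * d) == 0:
            a *= d
            b //= d * d
        d += 1
    return a, b

# simplify_sqrt(162) returns (9, 2), i.e. sqrt(162) = 9*sqrt(2),
# so 18/sqrt(162) = 18/(9*sqrt(2)) = 2/sqrt(2) = sqrt(2).
```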
# How do you solve an equation like this?

We have an equation: $\dfrac{p-1}{q} = \dfrac{q-1}{2p+1} = \dfrac {3}{5}$ How do you solve these types of equations? For example, if we have $\dfrac{1}{x} = \dfrac{3}{2}$, we cross-multiply: $1\times 2 = 3\times x$, so $x = \dfrac{2}{3}$. What is a similar approach for my equation?

- Yes, exactly. First forget about the middle term. Next, forget about the $\frac35$. And then you're done. –  Berci Nov 1 '12 at 19:29

Essentially, you can solve this using the process you describe, but twice, to generate two equations in two variables, and then solve for each variable as a "system of equations". We have: $$\dfrac{p-1}{q} = \dfrac{q-1}{2p+1} = \dfrac {3}{5}$$ Equation 1: $\quad 5(p-1) = 3q \iff 5p - 3q = 5$ Equation 2: $\quad 5(q-1) = 3(2p +1) \iff -6p +5q=8$ So your system of two equations in two unknowns becomes: $$5p - 3q = 5\tag{1}$$ $$-6p +5q=8\tag{2}$$ Can you take it from there? You can express (1) as a function of $p$ (isolate $p$), substitute the resulting expression for $p$ into (2), and then solve for $q$, then $p$. Or you can use "row operations": multiply (1) by $5$ (both sides) and (2) by $3$ (both sides): $$25p-15q=25\tag{1}$$ $$-18p+15q = 24\tag{2}$$ Now add the equations ($q$ disappears), solve for $p$, then "plug" $p$ into one of the original equations and solve for $q$: $$7p = 49\implies p = 7$$ Now, from the original (1) above: $$5p-3q=5 \implies 5(7) - 3q = 5\implies 35 - 3q = 5$$ $$\implies -3q=-30 \implies q = 10$$ - Thank you sir, I am familiar with row operations, so I won't have trouble with these types of equations again. –  JohnPhteven Nov 1 '12 at 19:49 (+1) Step by step getting the answer. –  Babak S. Aug 6 '13 at 10:15

You have two equations with two variables: $$1) \quad \frac{p-1}{q} = \frac{3}{5}$$ and $$2) \quad \frac{q-1}{2p+1}=\frac{3}{5}$$
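The elimination in the first answer amounts to Cramer's rule for a $2\times 2$ system, which can be checked mechanically. A minimal Python sketch (the function name is mine, not from the thread), using exact rational arithmetic so no rounding creeps in:

```python
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule.
    Assumes the determinant a1*b2 - a2*b1 is non-zero."""
    det = a1 * b2 - a2 * b1
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

# The system 5p - 3q = 5 and -6p + 5q = 8 from the answer:
p, q = solve_2x2(5, -3, 5, -6, 5, 8)
# p = 7 and q = 10, matching the worked solution above
```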