| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
linear-algebra
|
Connection between eigenvalues of a real matrix A and its norm.
|
https://math.stackexchange.com/questions/3721304/connection-between-eigenvalues-of-a-real-matrix-a-and-its-norm
|
<p>Is there a connection among norm of a matrix, its eigenvalues and the image of <span class="math-container">$A$</span>? Specifically if all eigenvalues of a matrix <span class="math-container">$A$</span> (<span class="math-container">$n$</span> by <span class="math-container">$n$</span>) have absolute value less than one, can we find a vector <span class="math-container">$v$</span> in <span class="math-container">$ \mathbb{R}^n $</span> with <span class="math-container">$||Av||>||v||$</span>?
A simple example of a <span class="math-container">$2$</span> by <span class="math-container">$2$</span> matrix I am unable to get. Please help.</p>
|
<p>Let
<span class="math-container">$$A = \begin{pmatrix}1/2 & 1 \\ 0 & 1/2\end{pmatrix}$$</span>
Then <span class="math-container">$1/2$</span> is the only eigenvalue of <span class="math-container">$A$</span>. If <span class="math-container">$v = \begin{pmatrix}1 \\ 1\end{pmatrix}$</span>, then <span class="math-container">$Av = \begin{pmatrix} 3/2 \\ 1/2\end{pmatrix}$</span>, hence <span class="math-container">$\|Av\| = \sqrt{10}/2 > \sqrt{2} = \|v\|$</span>.</p>
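A quick numerical check of this example (a sketch, assuming NumPy is available):

```python
import numpy as np

# The answer's example: the only eigenvalue is 1/2, yet A stretches v = (1, 1).
A = np.array([[0.5, 1.0],
              [0.0, 0.5]])
v = np.array([1.0, 1.0])

eigenvalues = np.linalg.eigvals(A)
Av = A @ v

print(np.abs(eigenvalues))                    # both entries are 1/2, inside the unit disc
print(np.linalg.norm(Av), np.linalg.norm(v))  # sqrt(10)/2 > sqrt(2)
```

Note that having all eigenvalues inside the unit disc only forces $\|A^k v\| \to 0$ as $k \to \infty$; it does not bound $\|Av\|$ by $\|v\|$ for a single application of $A$.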
| 0
|
linear-algebra
|
Show that every function $f\ \in \mathcal{F}(F,F)$ is uniquely a polynomial of degree $\leq q−1$ - (Approach)
|
https://math.stackexchange.com/questions/3722183/show-that-every-function-f-in-mathcalff-f-is-uniquely-a-polynomial-of-d
|
<p>Let <span class="math-container">$F$</span> be a finite field with <span class="math-container">$q$</span> elements. Show that every function <span class="math-container">$f \in \mathcal{F}(F,F)$</span> is uniquely a polynomial of degree <span class="math-container">$\leq q−1$</span> with coefficients in <span class="math-container">$F$</span>. That is, <span class="math-container">$\mathcal{F}(F,F)=Pol_{q−1}(F)$</span></p>
<p>This is a bonus question that I found on my school's website, framed in terms of linear algebra. I've been working through Friedberg, Insel, and Spence's <em>Linear Algebra</em>, for reference of the level I'm at in approaching this problem.</p>
<p>I did a search online for a solution and everything I've found to date revolves around Group Theory, not what I had in mind at this point.</p>
<p>From the tools that I have at my disposal up to this point the one technique that jumps out to me is somehow using the Lagrange Interpolation Formula, from the example and the exercises I've done involving it though it seems to be limited to getting a unique representation of a polynomial function. But what this question is asking is a representation for any function, so that is beyond just polynomials. Is there a way to use the Lagrange Formula or am I stuck until I learn some more tools?</p>
|
<p>One way of approaching this question is to analyse the natural map <span class="math-container">$\phi:F[x]\rightarrow \mathcal{F}(F,F)$</span> which interprets a polynomial as a function from <span class="math-container">$F$</span> to <span class="math-container">$F$</span>.</p>
<p>We want to say that this map, when restricted to polynomials of degree <span class="math-container">$\leq q-1$</span>, is a bijection. For this, let's note that if <span class="math-container">$\phi(p(x))=\phi(q(x))$</span>, then <span class="math-container">$\phi(p(x)-q(x))$</span> is the zero function. So if <span class="math-container">$p(x)-q(x)$</span> is nonzero and yields the zero function once interpreted, then it's a polynomial that has each <span class="math-container">$\alpha\in F$</span> as a root, so it has degree at least <span class="math-container">$q$</span>, since it has at least <span class="math-container">$q$</span> distinct roots.</p>
<p>This observation tells us that if <span class="math-container">$p(x)$</span> and <span class="math-container">$q(x)$</span> are two distinct polynomials of degree <span class="math-container">$\leq q-1$</span>, then <span class="math-container">$\phi(p(x))\neq \phi(q(x))$</span>, so our map <span class="math-container">$\phi$</span> is injective when we restrict to this subspace. But now we can count: there are <span class="math-container">$q^q$</span> polynomials with coefficients in <span class="math-container">$F$</span> of degree <span class="math-container">$\leq q-1$</span>, and there are <span class="math-container">$q^q$</span> functions from <span class="math-container">$F$</span> to <span class="math-container">$F$</span>. So we have an injective map between finite sets of the same cardinality; it must be surjective, and therefore bijective, giving the desired result.</p>
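The counting argument can be checked by brute force for a small prime field (a sketch assuming plain Python; $q = 3$ is an arbitrary choice, and $F_q$ is modelled as $\mathbb{Z}/q\mathbb{Z}$):

```python
from itertools import product

q = 3  # an arbitrary small prime; F_q is modelled as Z/qZ

def evaluate(coeffs, x):
    """Evaluate the polynomial sum_i coeffs[i] * x^i over Z/qZ."""
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

# phi restricted to degree <= q-1: send each coefficient tuple to its value tuple.
images = {tuple(evaluate(c, x) for x in range(q))
          for c in product(range(q), repeat=q)}

# q^q coefficient tuples give q^q distinct value tuples, so phi is injective,
# and by counting it is a bijection onto all q^q functions F -> F.
print(len(images), q ** q)
```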
| 1
|
linear-algebra
|
Something is counted wrongly in a determinant of block matrices
|
https://math.stackexchange.com/questions/3722339/something-is-counted-wrongly-in-a-determinant-of-block-matrices
|
<p>I am miscounting something when looking at the determinant of a block matrix.</p>
<p>Let us consider this example:
<span class="math-container">$$
M = \begin{pmatrix} a & 1^T \\ 1 & I_{n-1}\end{pmatrix}
$$</span>
where <span class="math-container">$a\neq 0$</span> is a scalar, <span class="math-container">$1$</span> is an <span class="math-container">$(n-1)$</span>-dimensional vector of ones, and <span class="math-container">$I_{n-1}$</span> is the identity of dimension <span class="math-container">$(n-1)\times (n-1)$</span> (which I will write as <span class="math-container">$I$</span>).</p>
<p>Now we know that <span class="math-container">$\left|\begin{matrix} A & B \\ C & D\end{matrix}\right|= \det(A-BD^{-1}C)\det(D)$</span> if <span class="math-container">$D$</span> is invertible.</p>
<p>So let us consider <span class="math-container">$\det(\lambda I_n-M)=0$</span> and we get:
<span class="math-container">\begin{align}
\left|\lambda I - \begin{pmatrix} a & 1^T \\ 1 & I\end{pmatrix}\right|&=\left|\begin{matrix}\lambda -a & -1^T \\ -1 & (\lambda - 1)I\end{matrix}\right|\\[2ex]
&\stackrel{(\lambda \neq 1)}{=} \left(\lambda-a - \frac{n-1}{\lambda-1}\right)\left|(\lambda-1)I\right|=0\qquad (*)
\end{align}</span>
Now here <span class="math-container">$\lambda\neq 1$</span>, so the second factor can never be zero; hence the equation holds only if the first (scalar) factor is zero, and one can compute that: it results in a second-order equation.
For <span class="math-container">$\lambda=1$</span> we get that the determinant of <span class="math-container">$(\lambda I_n-M)$</span> is indeed zero.</p>
<p>So then it seems that we get two solutions from the first term in <span class="math-container">$(*)$</span> and a solution <span class="math-container">$\lambda=1$</span> with multiplicity <span class="math-container">$n-1$</span> from the second case. That makes <span class="math-container">$n+1$</span> eigenvalues for an <span class="math-container">$n\times n$</span> matrix, which is obviously wrong: I am counting some eigenvalue (an eigenvalue <span class="math-container">$1$</span>) one extra time.</p>
<p>I am baffled as I cannot see where I am failing.</p>
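A numerical experiment makes the double count visible (a sketch assuming NumPy; the values $n = 4$, $a = 2$ are arbitrary):

```python
import numpy as np

n, a = 4, 2.0  # arbitrary test values
M = np.zeros((n, n))
M[0, 0] = a
M[0, 1:] = 1.0   # the row vector 1^T
M[1:, 0] = 1.0   # the column vector 1
M[1:, 1:] = np.eye(n - 1)

# M is symmetric, so its eigenvalues are real.
eig = np.sort(np.linalg.eigvals(M).real)
multiplicity_of_one = int(np.sum(np.isclose(eig, 1.0)))
print(eig)
print(multiplicity_of_one)  # n - 2, not n - 1
```

The scalar factor in $(*)$ contains $1/(\lambda-1)$, which cancels one copy of $(\lambda-1)$ from $\left|(\lambda-1)I\right| = (\lambda-1)^{n-1}$: the full characteristic polynomial is $\big((\lambda-a)(\lambda-1)-(n-1)\big)(\lambda-1)^{n-2}$, so $\lambda=1$ has multiplicity $n-2$ and the quadratic contributes the remaining two eigenvalues.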
| 2
|
|
linear-algebra
|
Dimension and sequences
|
https://math.stackexchange.com/questions/16490/dimension-and-sequences
|
<p>Consider the linear space $c^{(3)}$ of all sequences $x = (x_n)_{n=1}^{\infty}$ such that $\{x_{3k+q} \}_{k=0}^{\infty}$ converges for $q = 0,1,2$. Find the dimension and a basis for $c^{(3)}/c_0$. Note that $c_0$ is the linear space of sequences that converge to $0$.</p>
<p>I think the dimension is $1$, using the reasoning in the answer to this <a href="https://math.stackexchange.com/questions/16296/codimension-and-sequences">question</a>. We know that any integer can be represented as $3k$, $3k+1$ or $3k+2$. So $c^{(3)}$ is equivalent to $c$ (i.e., all convergent sequences)? Is this correct? </p>
|
<p>Notice that $c^{(3)}\cong c\oplus c \oplus c$ where $c$ is the space of convergent sequences, while $c_0 \cong c_0 \oplus c_0 \oplus c_0$ (both isomorphisms separate a sequence into its three subsequences modulo $3$).</p>
<p>Also, in the representation above of $c^{(3)}$ and $c_0$, each copy of $c_0$ sits inside a copy of $c$, so $c^{(3)}/c_0 \cong (c/c_0) \oplus (c/c_0) \oplus (c/c_0)$. Since the limit functional shows that $c/c_0$ is one-dimensional, the quotient $c^{(3)}/c_0$ has dimension $3$, and the classes of the indicator sequences of the three residue classes modulo $3$ give a basis.</p>
| 3
|
linear-algebra
|
Find a T from $R^3$ to $R^4$ given an equation for a subspace in $R^4$
|
https://math.stackexchange.com/questions/19335/find-a-t-from-r3-to-r4-given-an-equation-for-a-subspace-in-r4
|
<p>A subspace V in $R^4$ is defined by the equation $x_{1}$-$x_{2}$+$2x_{3}$+$4x_{4}$=0. I need to find T such that Ker(T)=zero vector, and Im(T)=V. How do I approach this problem? As I understand, the equation given to me is a set of points that are solutions to Im(T)=V, so in a sense the vectors of that plane are elements of the kernel of that equation, but I can't go any further. Thanks.</p>
|
<p>I am going to treat "matrix" and "linear transformation" as synonyms in this answer, because we are working in $\mathbb{R}^n$ and have the standard basis at our disposal. </p>
<p>Rephrasing your problem: you are given a subspace $V$ of $\mathbb{R}^4$, defined by a homogeneous linear equation. The problem is to find a matrix $M$ whose nullspace consists only of the zero vector and whose column space is $V$.</p>
<p>Here are some useful tools to have in your toolbox:</p>
<p>(1) Given a subspace $W$ of $\mathbb{R}^n$ defined by a system of $m$ homogeneous linear equations, write down a matrix $A$ whose nullspace is precisely $W$.</p>
<p>[Solution, making use only of the definitions of the words and matrix multiplication: rewrite the system of homogeneous linear equations $\sum_{j=1}^n a_{ij} x_j = 0$, $1 \leq i \leq m$, as a single matrix equation $A x = 0$, where $A$ is the matrix whose row $i$, column $j$ entry is $a_{ij}$, and where $x$ is the column vector whose $i$th entry is $x_i$. The matrix $A$ will do.]</p>
<p>(2) Given an $m \times n$ matrix $A$, find a basis for the nullspace of $A$.</p>
<p>[Solution: somewhat hard to express in words if you haven't seen it--- look in a textbook. The standard way this is done is to solve $Ax = 0$ using Gaussian elimination, to introduce a "free variable" for every non-pivot variable column, and thus to write the general solution to $Ax = 0$ in terms of "free variables". If you rewrite this general solution in vector form--- as a linear combination of fixed numerical vectors, with the coefficients the free variables--- a basis falls right out.]</p>
<p>(3) Given a basis $v_1, \dots, v_k$ for a subspace $W$ of $\mathbb{R}^m$, find a matrix $T$ whose nullspace is $\{0\}$ and whose range is $W$.</p>
<p>[Solution, based only on the definitions of the words and matrix multiplication: write the $v$s as column vectors and make them columns of a matrix. In general, if $M$ is any matrix, the nullspace of $M$ is $\{0\}$ precisely when its columns are linearly independent, and the range of $M$ is precisely the span of its columns--- so as the $v_1, \dots, v_k$ are a basis for $W$, the matrix $T$ formed with these vectors as columns has the two properties you want.]</p>
<p>In this example, doing (1) I see that your $V$ is the nullspace of the $1 \times 4$ matrix $A = (1,-1,2,4)$.</p>
<p>Doing (2) I want to find the general solution to $Ax = 0$, with $A = (1,-1,2,4)$ and $x$ the $4 \times 1$ column vector of variables. The augmented matrix for this system is $(1,-1,2,4,0)$ and it is already in row echelon form: the variables $x_2 = s$, $x_3 = t$ and $x_4 = u$ are "free", and when they have been assigned values $x_1$ must then satisfy $x_1 - s + 2t + 4u = 0$, so that $x_1 = s - 2t - 4u$. The general solution to $Ax = 0$ is thus the set of all vectors of the form $(s - 2t - 4u, s, t ,u) = s (1,1,0,0) + t (-2, 0, 1, 0) + u(-4, 0, 0, 1)$; in other words, it is the span of the vectors $v_1 = (1,1,0,0)$, $v_2 = (-2,0,1,0)$, and $v_3 = (-4,0,0,1)$. By the general magic of this approach, these vectors are linearly independent, so they are a basis for the nullspace of $A$ (and hence a basis of $V$).</p>
<p>(3) I form the matrix $T = \begin{pmatrix} 1 & -2 & -4 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ with $v_1$, $v_2$, and $v_3$ as columns, and I see that it has all of the desired properties. Going back into the language of linear transformations, the linear transformation from $\mathbb{R}^3$ to $ \mathbb{R}^4$ whose matrix with respect to the standard basis is this matrix $T$, will do the job.</p>
<p>It is worth pointing out that in step (2) of this procedure (finding a basis for the nullspace of $A$) we are making an arbitrary choice in how we do that (any vector space has many bases; somebody with a different method for calculating the basis of a nullspace of a matrix may well get a different set of vectors). So there are some degrees of freedom here and this problem does not have a unique solution. If you think about it a while you may be able to describe the set of all possible solutions to this problem. </p>
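The end result of steps (1)-(3) can be verified mechanically (a sketch, assuming NumPy is available):

```python
import numpy as np

# Step (1): V is the nullspace of the 1 x 4 matrix A.
A = np.array([[1.0, -1.0, 2.0, 4.0]])

# Step (3): T has the basis vectors of V (from step (2)) as its columns.
T = np.array([[1.0, -2.0, -4.0],
              [1.0,  0.0,  0.0],
              [0.0,  1.0,  0.0],
              [0.0,  0.0,  1.0]])

print(A @ T)                     # zero row: every column of T satisfies the equation
print(np.linalg.matrix_rank(T))  # 3: the columns are independent, so Ker(T) = {0}
```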
| 4
|
linear-algebra
|
Regarding orthonormal basis
|
https://math.stackexchange.com/questions/26779/regarding-orthonormal-basis
|
<p>I'm confronted with this question:</p>
<blockquote>
<p>Let <span class="math-container">$V$</span> be an inner product space and <span class="math-container">$B=\{u_{1}, ..., u_{n}\}$</span> a basis of <span class="math-container">$V$</span>.</p>
<p>Suppose there exists <span class="math-container">$\lambda_{1},...,\lambda_{n} \in F (=R \text{ or } C)$</span> such that: <span class="math-container">$$||\sum_{k=1}^{n}\lambda_{k}u_{k}||^2=\sum_{k=1}^{n}|\lambda_{k}|^2$$</span></p>
<p>Prove or disprove: <span class="math-container">$B$</span> is an orthonormal basis.</p>
</blockquote>
<p>Not sure where to begin.</p>
<p>I tried finding a counter example in the case of <span class="math-container">$V=R^2$</span> using the identity: <span class="math-container">$$||u+v||^2=||u||^2+2Re\langle u, v\rangle+||v||^2$$</span> which gave me this result: <span class="math-container">$$||\lambda_{1}u_{1}+\lambda_{2}u_{2}||^2=|\lambda_{1}|^2||u_{1}||^2+2Re\langle\lambda_{1}u_{1},\lambda_{2}u_{2}\rangle+|\lambda_{2}|^2||u_{2}||^2=|\lambda_{1}|^2+|\lambda_{2}|^2$$</span></p>
<p>But I can't see how this helps me.</p>
<p>edit:</p>
<p>Here's a little follow up question if anyone is interested: suppose the equality above is true for all <span class="math-container">$\lambda_{1},...,\lambda_{n}$</span>, does that imply the basis is orthonormal?</p>
|
<p>It is false. A trivial example: take $\lambda_i = 0$ for all $i$ (the equality then holds for any basis whatsoever), or take $\lambda_i = 0$ for exactly those $i$ whose $u_i$ fail to be orthonormal to the rest.</p>
<p>If you assume $\lambda_i \neq 0$, $\forall i$, then here is a counter example in $2D$.</p>
<p>Let $u_1 = 2e_1 + e_2$ and $u_2 = e_1 + 2e_2$, where $e_1, e_2$ form the conventional orthonormal basis. Clearly this forms a basis. Let $\lambda_i \in \mathbb{R}$</p>
<p>Then $||\lambda_1 u_1 + \lambda_2 u_2 ||_2^2 = |\lambda_1|^2 + |\lambda_2|^2$ implies $(2 \lambda_1 + \lambda_2)^2 + (\lambda_1 + 2 \lambda_2)^2 = \lambda_1^2 + \lambda_2^2$.</p>
<p>Hence, $(4 \lambda_1^2 + 4 \lambda_1 \lambda_2 + \lambda_2^2) + (4 \lambda_2^2 + 4 \lambda_1 \lambda_2 + \lambda_1^2) = \lambda_1^2 + \lambda_2^2$.</p>
<p>Hence, $(4 \lambda_1^2 + 4 \lambda_1 \lambda_2) + (4 \lambda_2^2 + 4 \lambda_1 \lambda_2) = 0 \Rightarrow (\lambda_1 + \lambda_2)^2 = 0$</p>
<p>Hence, if we choose $\lambda_1 = - \lambda_2$ we are done.</p>
<p>Hence, if $u_1 = 2e_1 + e_2$ and $u_2 = e_1 + 2e_2$ and if $\lambda_1 = - \lambda_2 \in \mathbb{R}$, then we have
$$||\lambda_1 u_1 + \lambda_2 u_2||_2^2 = \lambda_1^2 + \lambda_2^2$$</p>
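The whole computation is easy to confirm numerically (a sketch, assuming NumPy is available):

```python
import numpy as np

u1 = np.array([2.0, 1.0])   # 2e1 + e2
u2 = np.array([1.0, 2.0])   # e1 + 2e2
lam1, lam2 = 1.0, -1.0      # any pair with lam1 = -lam2 works

lhs = np.linalg.norm(lam1 * u1 + lam2 * u2) ** 2
rhs = lam1 ** 2 + lam2 ** 2

print(lhs, rhs)  # both equal 2, yet the basis is not orthonormal:
print(u1 @ u2)   # inner product 4, so u1 and u2 are not even orthogonal
```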
| 5
|
linear-algebra
|
Prove a 3x3 system of linear equations with arithmetic progression coefficients has infinitely many solutions
|
https://math.stackexchange.com/questions/27259/prove-a-3x3-system-of-linear-equations-with-arithmetic-progression-coefficients
|
<p>How can I prove that a 3x3 system of linear equations of the form:</p>
<p>$\begin{pmatrix}
a&a+b&a+2b\\
c&c+d&c+2d\\
e&e+f&e+2f
\end{pmatrix}
\begin{pmatrix}
x\\ y\\ z
\end{pmatrix}
=\begin{pmatrix}
a+3b\\
c+3d\\
e+3f
\end{pmatrix}$</p>
<p>for $a,b,c,d,e,f \in \mathbb Z$ will always have infinitely many solutions, and that the solutions lie along the line
$ r=
\begin{pmatrix}
-2\\3\\0
\end{pmatrix}
+\lambda
\begin{pmatrix}
1\\-2\\1
\end{pmatrix}$</p>
|
<p>First, consider the homogeneous system
$$\left(\begin{array}{ccc}
a & a+b & a+2b\\\
c & c+d & c+2d\\\
e & e+f & e+2f
\end{array}\right)\left(\begin{array}{c}x\\y\\z\end{array}\right) = \left(\begin{array}{c}0\\0\\0\end{array}\right).$$
If $(a,c,e)$ and $(b,d,f)$ are not scalar multiples of each other, then the coefficient matrix has rank $2$, so the solution space has dimension $1$. The vector $(1,-2,1)^T$ is clearly a solution, so the solutions are all multiples of $(1,-2,1)^T$. That is, the solutions to the homogeneous system are $\lambda(1,-2,1)^T$ for arbitrary $\lambda$.</p>
<p>Therefore, the solutions to the inhomogeneous system are all of the form $\mathbf{x}_0 + \lambda(1,-2,1)^T$, where $\mathbf{x}_0$ is a particular solution to this system. Since $(-2,3,0)^T$ is always a particular solution, all solutions have the described form.</p>
<p>If one of $(a,c,e)$ and $(b,d,f)$ is a multiple of the other, though, then there are other solutions: the matrix has rank $1$, so the nullspace has dimension $2$. Say $(a,c,e) = k(b,d,f)$ with $k\neq 0$, then there is another solution: $(-1-\frac{1}{k},1,0)$ would also be a solution to the system, so that the solutions to the inhomogeneous system would be of the form
$$r = \left(\begin{array}{r}-2\\3\\0\end{array}\right) + \lambda\left(\begin{array}{r}1\\-2\\1\end{array}\right) + \mu\left(\begin{array}{r}-1-\frac{1}{k}\\1\\0\end{array}\right).$$
This includes the solutions you have above, but also others. (If $k=0$, then you can use $(0,-2,1)$ instead of $(-1-\frac{1}{k},1,0)$) </p>
<p>If $(b,d,f)=(0,0,0)\neq (a,c,e)$, then $(1,0,-1)$ can be used instead of $(-1-\frac{1}{k},1,0)$ to generate all solutions.</p>
<p>And of course, if $(a,c,e)=(b,d,f)=(0,0,0)$, then every vector is a solution.</p>
<p>In all cases, you have an infinite number of solutions that <em>includes</em> all the solutions you give (but there may be solutions that are not in that line).</p>
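Both claims used above, that $(-2,3,0)^T$ is always a particular solution and that $(1,-2,1)^T$ always solves the homogeneous system, can be spot-checked with random integer coefficients (a sketch assuming NumPy; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed
a, b, c, d, e, f = rng.integers(-5, 6, size=6).astype(float)

M = np.array([[a, a + b, a + 2 * b],
              [c, c + d, c + 2 * d],
              [e, e + f, e + 2 * f]])
rhs = np.array([a + 3 * b, c + 3 * d, e + 3 * f])

x0 = np.array([-2.0, 3.0, 0.0])  # claimed particular solution
h = np.array([1.0, -2.0, 1.0])   # claimed homogeneous solution

print(M @ x0 - rhs)  # zero vector, for any a, ..., f
print(M @ h)         # zero vector, for any a, ..., f
```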
| 6
|
linear-algebra
|
Trying to find $f(x)\in F[x]$ such that $f(A)=A^{-1}$
|
https://math.stackexchange.com/questions/29580/trying-to-find-fx-in-fx-such-that-fa-a-1
|
<p>Given an invertible $3\times 3$ matrix:</p>
<p>$A = \begin{pmatrix}
1 & 2 & 2 \\
1 & 2 & -1 \\
-1 & 1 & 4
\end{pmatrix}$</p>
<p>I am trying to find $f(x)$ from $F[x]$ such that $A^{-1}=f(A)$. To do so, I want to use the result of <a href="https://math.stackexchange.com/questions/29158">a previous question</a>, which says that $f(A)$ is invertible if and only if $f$ and the minimal polynomial of $A$ are relatively prime.</p>
|
<p>You can use the following two facts:</p>
<ul>
<li>Every square matrix is a zero of its characteristic polynomial.</li>
<li>The constant term of the characteristic polynomial of a matrix is its determinant.</li>
</ul>
<p>Combining these two facts (with the convention $c_A(x) = \det(A - xI)$, so that the constant term is exactly $\det(A)$), you can write the characteristic polynomial as $c_A(x) = x \cdot p(x) + \det(A)$ for some polynomial $p$. From this, you can see that the polynomial</p>
<p>$f(x) = -\frac{1}{\det(A)}p(x)$</p>
<p>has the desired property (since $c_A(A)=0$ implies that $A \cdot p(A) = - \det(A)\, I$).</p>
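For the specific matrix in the question this recipe can be carried out numerically (a sketch assuming NumPy; `np.poly` returns the characteristic polynomial coefficients of a 2-D array):

```python
import numpy as np

A = np.array([[1.0, 2.0, 2.0],
              [1.0, 2.0, -1.0],
              [-1.0, 1.0, 4.0]])

# Characteristic polynomial x^3 - 7x^2 + 15x - 9, so det(A) = 9 and, by
# Cayley-Hamilton, A(A^2 - 7A + 15I) = 9I, i.e. A^{-1} = (A^2 - 7A + 15I)/9.
coeffs = np.poly(A)
print(coeffs)  # approximately [1, -7, 15, -9]

f_of_A = (A @ A - 7 * A + 15 * np.eye(3)) / 9.0
print(np.allclose(f_of_A, np.linalg.inv(A)))  # True
```

So here $f(x) = \frac{1}{9}(x^2 - 7x + 15)$ satisfies $f(A) = A^{-1}$.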
| 7
|
linear-algebra
|
Help understanding this example of a linear operator which rotates each vector $v$ about the z-axis by an angle $\theta$
|
https://math.stackexchange.com/questions/49267/help-understanding-this-example-of-a-linear-operator-which-rotates-each-vector
|
<blockquote>
<p>Let <span class="math-container">$T: \mathbb{R}^{3} \to \mathbb{R}^{3}$</span> be the following linear operator, which rotates each vector <span class="math-container">$v$</span> about the <span class="math-container">$z$</span>-axis by an angle <span class="math-container">$\theta$</span>: <span class="math-container">$T(x,y,z) = (x\cos\theta-y\sin\theta, x\sin\theta+y\cos\theta, z)$</span>.</p>
<p>Observe that each vector <span class="math-container">$w = (a,b,0)$</span> in the <span class="math-container">$xy$</span>-plane <span class="math-container">$W$</span> remains in <span class="math-container">$W$</span> under the mapping <span class="math-container">$T$</span>; hence, <span class="math-container">$W$</span> is <span class="math-container">$T$</span>-invariant. Observe also that the <span class="math-container">$z$</span>-axis <span class="math-container">$U$</span> is invariant under <span class="math-container">$T$</span>. Furthermore, the restriction of <span class="math-container">$T$</span> to <span class="math-container">$W$</span> rotates each vector about the origin <span class="math-container">$0$</span>, and the restriction of <span class="math-container">$T$</span> to <span class="math-container">$U$</span> is the identity mapping of <span class="math-container">$U$</span>.</p>
</blockquote>
<p>Could someone please help explain this example to me?</p>
<p>First, why is the domain <span class="math-container">$\mathbb{R}^{3}$</span>? If I had just seen this linear operator, I would have written <span class="math-container">$T: \mathbb{R}^{4} \to \mathbb{R}^{3}$</span> with <span class="math-container">$T(x,y,z,\theta) = (x\cos\theta-y\sin\theta, x\sin\theta+y\cos\theta, z)$</span>... is that incorrect? Would <span class="math-container">$\theta$</span> just be given "on the side" somewhere?</p>
<p>Second, where do the formulas <span class="math-container">$x\cos\theta - y\sin\theta$</span> and <span class="math-container">$x\sin\theta+y\cos\theta$</span> come from? Are they unique? At the moment I am not looking at them thinking "oh right, that's a rotation of angle <span class="math-container">$\theta$</span>...".</p>
<p>Finally, in general with regard to invariance, is that the same as saying the operator is an endomorphism when it comes to a subspace?</p>
<p>Thank you for any help!</p>
|
<p>First, the number $\theta$ is a parameter which you should think of as some number fixed for all time (or, it's "on the side" as you put it). The function from $\mathbb{R}^4\rightarrow\mathbb{R}^3$ you described is <em>not</em> linear in $\theta$.</p>
<p>Second, the formulas $x\cos\theta - y\sin\theta$ and $x\sin\theta + y\cos\theta$ are the standard formulas for rotation by angle $\theta$. To see this, consider what this means in terms of a basis. For example, if we're rotating everything by $\theta$, where should the point $(1,0)$ go? (Draw it out if you're not convinced.) It should go to the point $(\cos\theta,\sin\theta)$. Plugging in $x = 1$ and $y=0$, the formulas you give agree with that.</p>
<p>Likewise, where should $(0,1)$ go if we rotate by $\theta$? It should go to $(-\sin\theta, \cos\theta)$ as you can verify by sketching a picture.</p>
<p>Putting these together and using linearity gives the standard rotation equations you wrote down.</p>
<p>Finally, given a linear map $T:V\rightarrow V$, a subspace $W\subseteq V$ is invariant under $T$ if $TW\subseteq W$, that is if you plug in a vector in $W$ into $T$ it spits out a vector in $W$. It's equivalent to saying $T$ restricts to an endomorphism of $W$.</p>
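The three observations above, where $(1,0,0)$ and $(0,1,0)$ go, and that the $z$-axis is fixed pointwise, are easy to check directly (a sketch, assuming NumPy is available):

```python
import numpy as np

theta = 0.7  # theta is a fixed parameter, not an input coordinate

def T(x, y, z):
    """Rotation about the z-axis by the fixed angle theta."""
    return (x * np.cos(theta) - y * np.sin(theta),
            x * np.sin(theta) + y * np.cos(theta),
            z)

print(T(1, 0, 0))  # (cos theta, sin theta, 0): where e1 goes
print(T(0, 1, 0))  # (-sin theta, cos theta, 0): where e2 goes
print(T(0, 0, 5))  # z-coordinate untouched: the z-axis U is fixed pointwise
```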
| 8
|
linear-algebra
|
On a matrix factorization and the Gram-Schmidt process
|
https://math.stackexchange.com/questions/67803/on-a-matrix-factorization-and-the-gram-schmidt-process
|
<p>Given a real square matrix <span class="math-container">$A$</span>, we can factor it as <span class="math-container">$$A = QR$$</span> where <span class="math-container">$Q$</span> is orthogonal and <span class="math-container">$R$</span> is upper triangular. The entries of <span class="math-container">$R$</span> have a simple geometric interpretation in terms of the vectors one gets doing the Gram-Schmidt process on the columns of <span class="math-container">$A$</span>. In particular, if <span class="math-container">$a_i$</span> is the <span class="math-container">$i$</span>'th column of <span class="math-container">$A$</span> and <span class="math-container">$e_j$</span> is the <span class="math-container">$j$</span>'th vector produced by the Gram-Schmidt process, then for <span class="math-container">$i\le j$</span>, <span class="math-container">$R_{ij}=\langle e_i, a_j \rangle$</span>. This is spelled out, for example, in the <a href="http://en.wikipedia.org/wiki/QR_decomposition" rel="nofollow noreferrer">Wikipedia page on QR</a>.</p>
<p><strong>My question:</strong> Suppose <span class="math-container">$A$</span> is full rank and so <span class="math-container">$R$</span> is invertible. What interpretation, if any, do the entries of <span class="math-container">$R^{-1}$</span> have?</p>
<p><strong>My motivation:</strong> I need to work out something about <span class="math-container">$R^{-1}$</span> if the columns of <span class="math-container">$A$</span> satisfy a certain property. It's a bit involved to go into here, but any way I could reason about the entries of <span class="math-container">$R^{-1}$</span> in terms of the geometry of the columns of <span class="math-container">$A$</span> would be helpful.</p>
<p><strong>Edited:</strong> Perhaps I should say that I do realize that <span class="math-container">$AR^{-1}=Q$</span>. In other words, the Gram-Schmidt process produces linear combinations of the columns of <span class="math-container">$A$</span> that are orthogonal, and the coefficients of those linear combinations are precisely in the columns of <span class="math-container">$R^{-1}$</span>. However, I'm still wondering if a more direct geometric interpretation can be given - something like <span class="math-container">$R_{ij} = \langle e_i, a_j \rangle$</span>.</p>
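The relation $AR^{-1}=Q$ from the edit, along with the fact that $R^{-1}$ is again upper triangular with diagonal entries $1/R_{ii}$, can be confirmed numerically (a sketch assuming NumPy; the random matrix stands in for a generic full-rank $A$):

```python
import numpy as np

rng = np.random.default_rng(1)   # arbitrary seed
A = rng.standard_normal((4, 4))  # a generic (almost surely full-rank) A

Q, R = np.linalg.qr(A)
R_inv = np.linalg.inv(R)

print(np.allclose(A @ R_inv, Q))                      # True: A R^{-1} = Q
print(np.allclose(np.tril(R_inv, -1), 0))             # True: R^{-1} is upper triangular
print(np.allclose(np.diag(R_inv), 1.0 / np.diag(R)))  # True: its diagonal is 1/R_ii
```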
| 9
|
|
linear-algebra
|
Finding the matrix of this linear transformation
|
https://math.stackexchange.com/questions/91324/finding-the-matrix-of-this-linear-transformation
|
<blockquote>
<p>We're given <span class="math-container">$V$</span>, which is an <span class="math-container">$n$</span> dimensional vector space. <span class="math-container">$T : V \to V$</span> is a linear transformation. There is a vector <span class="math-container">$v \in V$</span> such that <span class="math-container">$T^n(v) = 0$</span>. We're also told that the vectors <span class="math-container">$T^{n-1}(v), T^{n-2}(v), \ldots, T(v), v$</span> form a basis for <span class="math-container">$V$</span>.</p>
<p>The questions are:</p>
<ol>
<li>If <span class="math-container">$n = 4$</span>, calculate the matrix of <span class="math-container">$T$</span> w.r.t the basis.</li>
<li>If <span class="math-container">$n = 4$</span>, calculate the matrix for <span class="math-container">$T^n$</span> for <span class="math-container">$n = 2,3,4$</span>.</li>
</ol>
</blockquote>
<p>From a previous question, I know that if you want to form the matrix for a transformation, you simply compute the value for <span class="math-container">$T$</span> at the basis, and express your answer as a matrix w.r.t the basis. But, we can't really "compute" in this case because we don't know what the actual transformation is.</p>
<p>Also, when they say <span class="math-container">$T^2(v)$</span>, do they just mean <span class="math-container">$T(T(v))$</span>? If so, I suppose <span class="math-container">$T(T^{n-1}(v)) = 0$</span>, but I'm not sure what else we can figure out or even how to construct a matrix.</p>
<p>As for the second question, I don't really see what they're asking.</p>
<p>Lastly, say we do find a matrix, call it <span class="math-container">$M$</span>. Let's say I have a vector <span class="math-container">$u$</span>, and say <span class="math-container">$T(u) = s$</span>, where <span class="math-container">$s$</span> is some other vector. Does the relationship <span class="math-container">$Mu = s$</span> always hold in this case?</p>
<p>Thanks a bunch for all your help!</p>
|
<p>Yes, $T^2(v)$ means $T(T(v))$; in general $T^k(v)$ is the composition of $T$ with itself $k$ times, applied to $v$. And you are correct about how to approach this problem: compute how the transformation acts on the basis. I will describe the process for the first part below; for the second part you do the exact same thing, but replace the transformation $T$ with the transformations $T^n$ for each $n$.</p>
<p>For $n = 4$ we have the basis as $$ v_1 = T^3(v), v_2 = T^2(v), v_3 = T(v), v_4=v$$</p>
<p>To compute the matrix of a transformation with respect to an ordered basis we simply compute how the transformation acts on that basis. In this case we have
$$T(v_1) = T(T^3(v)) = T^4(v) = 0$$ by our assumption that $T^n(v) = 0$ (here $n = 4$).
Then we have $$T(v_2) = T(T^2(v)) = T^3(v) = v_1,$$ $$T(v_3) = T(T(v)) = T^2(v) = v_2$$ and $$T(v_4) = T(v) = v_3.$$ </p>
<p>So now we compose these into a matrix and get </p>
<p>$$[T]_{(v_1, v_2, v_3, v_4)} = \left(\begin{smallmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 &0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0\end{smallmatrix}\right)$$ </p>
<p>You should be able to perform the same process for the transformation $T^n$ for each $n$ required in part (b). </p>
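For part 2, the powers can also be computed directly from the matrix (a sketch, assuming NumPy is available):

```python
import numpy as np

# Matrix of T in the ordered basis (v1, v2, v3, v4) = (T^3 v, T^2 v, T v, v).
M = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

# Each power pushes the line of 1's one diagonal further up-right, until M^4 = 0.
for k in (2, 3, 4):
    print(f"M^{k} =\n{np.linalg.matrix_power(M, k)}")
```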
| 10
|
linear-algebra
|
Find values for $a$, $b$, $c$ that make this linear system solvable?
|
https://math.stackexchange.com/questions/213580/find-values-for-a-b-c-that-make-this-linear-system-solvable
|
<p>I came across the following exercise, which I developed poorly. Could anybody shed some light? See:</p>
<blockquote>
<p>How to find a solution involving <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span> to make the following system consistent? Find the solutions when possible.</p>
<p><span class="math-container">$$x + y + z + t = a$$</span>
<span class="math-container">$$5y + 2z + 4t = b$$</span>
<span class="math-container">$$3x - 2y + z - t = c$$</span></p>
</blockquote>
<p>Well, first I tried to reduce the system to reduced row echelon form, without much success. What I got is:</p>
<blockquote>
<p><span class="math-container">$$x - \frac{2}{5}t = \frac{5a - b}{5}$$</span>
<span class="math-container">$$y + \frac{2}{5}t = \frac{-c + 3a}{5}$$</span>
<span class="math-container">$$z + t = \frac{c - 3a + b}{5}$$</span></p>
</blockquote>
<p>I thought it would help, but I don't know how to resume the exercise.</p>
<p>Any tips?</p>
<p>Thank you.</p>
|
<p>The very first step in row reduction (subtracting $3$ times the first equation from the third) gives the new equation
$$ -5y -2z -4t = c-3a$$
Since the left-hand side of this is minus the left-hand side of the equation for $b$, the equations imply $b=3a-c$.</p>
<p>On the other hand, that same first step of row reduction shows that the rank of the equation system is at least 2, so the column space is at least 2-dimensional. The condition $b=3a-c$ already restricts the possible $(a,b,c)$ to a 2-dimensional subspace, so there is no room for further restrictions.</p>
<p>So the possible $(a,b,c)$ are exactly those that satisfy $b=3a-c$ (or equivalently $3a-b-c=0$ or $b+c=3a$).</p>
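The consistency condition can be checked with a rank test (a sketch assuming NumPy; the sample values of $a$, $b$, $c$ are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 1.0],
              [0.0, 5.0, 2.0, 4.0],
              [3.0, -2.0, 1.0, -1.0]])

def consistent(a, b, c):
    """Solvable iff rank(A) equals the rank of the augmented matrix (A | rhs)."""
    rhs = np.array([[a], [b], [c]])
    return np.linalg.matrix_rank(np.hstack([A, rhs])) == np.linalg.matrix_rank(A)

print(consistent(1.0, 2.0, 1.0))  # True:  b = 3a - c  (2 = 3*1 - 1)
print(consistent(1.0, 1.0, 1.0))  # False: b != 3a - c
```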
| 11
|
linear-algebra
|
Linear Algebra (linear transformations)
|
https://math.stackexchange.com/questions/255298/linear-algebra-linear-transformations
|
<blockquote>
<p>13. Suppose <span class="math-container">$V$</span> and <span class="math-container">$W$</span> are finite-dimensional vector spaces and <span class="math-container">$T:V \to W$</span> is an isomorphism. Then there exist bases <span class="math-container">$\mathcal{B}$</span> and <span class="math-container">$\mathcal{C}$</span>, for <span class="math-container">$V$</span> and <span class="math-container">$W$</span> respectively, such that <span class="math-container">$[T]_{\mathcal{C},\mathcal{B}}$</span> is the identity matrix.</p>
<p>14. Let <span class="math-container">$T:V\to\mathbb{R}$</span> be a linear transformation. Suppose <span class="math-container">$\{v_1,\dots,v_n\}$</span> is a basis for <span class="math-container">$\ker(T)$</span>. Suppose also that <span class="math-container">$v \in V$</span>, <span class="math-container">$v \ne 0$</span>, is <em>not</em> in <span class="math-container">$\ker(T)$</span>. Prove <span class="math-container">$\{v,v_1,\dots,v_n\}$</span> is a basis for <span class="math-container">$V$</span>.</p>
<p>15. Show that any linear transformation <span class="math-container">$T:V \to W$</span> may be written as a sum of linear transformations <span class="math-container">$T = T_1 + \cdots + T_k$</span> for some <span class="math-container">$k$</span>, where each <span class="math-container">$T_i$</span> is a linear transformation of rank <span class="math-container">$1$</span>.</p>
</blockquote>
<p>Hey guys, I have a couple of questions I need help with.
It'd be great if I could get any sort of help/hints! Thanks.</p>
|
<p>For 14, recall that $\mathbb{R}$ is a vector space over itself with dimension $1$. What is $\dim(\operatorname{Im}(T))$?</p>
<p>For 15, recall that there is a matrix $A$ s.t. $Tv=Av$ for all $v$.</p>
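The hint for 15 can be illustrated concretely (a sketch only; numpy is assumed, and the matrix `A` is a made-up example): any matrix splits as a sum of outer products, one per column, and each term has rank at most 1.

```python
import numpy as np

# Any linear map on finite-dimensional spaces is represented by a matrix A.
# Decompose A as a sum of rank-<=1 terms: A = sum_j (j-th column) * e_j^T.
A = np.array([[1., 2., 0.],
              [3., 4., 5.]])
m, n = A.shape

terms = []
for j in range(n):
    e_j = np.zeros((1, n))
    e_j[0, j] = 1.0
    terms.append(A[:, [j]] @ e_j)   # outer product: rank at most 1

ranks = [np.linalg.matrix_rank(t) for t in terms]
reconstructed = sum(terms)          # the terms add back up to A
```

Each `terms[j]` is the linear map sending $v$ to $v_j$ times the $j$-th column of `A`, which is exactly a rank-1 transformation.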
| 12
|
linear-algebra
|
Linear Algebra Proof
|
https://math.stackexchange.com/questions/255898/linear-algebra-proof
|
<blockquote>
<p>If A is a <span class="math-container">$m\times n$</span> matrix and <span class="math-container">$M = (A \mid b)$</span> the augmented matrix
for the linear system <span class="math-container">$Ax = b$</span>.</p>
<p>Show that either<br /><span class="math-container">$(i) \operatorname{rank}A = \operatorname{rank}M$</span>, or<br />
<span class="math-container">$(ii)$</span> <span class="math-container">$\operatorname{rank}A = \operatorname{rank}M - 1$</span>.</p>
</blockquote>
<p>My attempt:</p>
<p>The rank of a matrix is the dimension of its range space. Let the column vectors of <span class="math-container">$A$</span> be <span class="math-container">$a_1,\ldots,a_n$</span>. If <span class="math-container">$\text{rank}\;A = r$</span>, then the <span class="math-container">$r$</span> pivot columns of <span class="math-container">$A$</span> form a basis of the range space of <span class="math-container">$A$</span>. The pivot columns are linearly independent. For the matrix <span class="math-container">$M = (A \mid b)$</span>, there are only two cases. Case <span class="math-container">$(i)$</span>: <span class="math-container">$b$</span> is in the range of <span class="math-container">$A$</span>. Then the range space of <span class="math-container">$M$</span> is the same as the range space of <span class="math-container">$A$</span>. Therefore <span class="math-container">$\operatorname{rank}M = \operatorname{rank}A$</span>.</p>
<p><strong>I am stuck on how to do case <span class="math-container">$(ii)$</span>?</strong></p>
|
<p>Suppose the columns of $A$ have exactly $r$ linearly independent vectors. If $b$ lies in their span, then $\operatorname{rank} A=r=\operatorname{rank} M$. If not, then the columns of $A$ together with $b$ have exactly $(r+1)$ linearly independent vectors, so that $\operatorname{rank} A+1=r+1=\operatorname {rank} M$.</p>
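Both cases can be illustrated numerically (a sketch assuming numpy; the matrices are hypothetical examples): augmenting with a vector inside the column space leaves the rank unchanged, while a vector outside it raises the rank by exactly one.

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])               # rank 2; column space is the xy-plane in R^3

b_in  = np.array([[2.], [3.], [0.]])   # lies in the column space of A
b_out = np.array([[0.], [0.], [1.]])   # does not

rank_A = np.linalg.matrix_rank(A)
rank_M_in  = np.linalg.matrix_rank(np.hstack([A, b_in]))   # case (i)
rank_M_out = np.linalg.matrix_rank(np.hstack([A, b_out]))  # case (ii)
```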
| 13
|
linear-algebra
|
Question on a proof about the Rank of a Matrix
|
https://math.stackexchange.com/questions/301629/question-on-a-proof-about-the-rank-of-a-matrix
|
<p>The question is:</p>
<blockquote>
<p>Give a formal proof for the following statement:
Given a matrix A and a scalar c, show that rank(cA) = rank(A)</p>
</blockquote>
<p>Here are the steps that I took to go about the proof:</p>
<blockquote>
<p>(1) Prove this claim: Let v1, v2, ..., vN be vectors</p>
<p>then {v1, v2, ..., vN} is linearly independent <==> {c*v1, c*v2, ..., c*vN} is also lin. ind.</p>
</blockquote>
<p>I won't type out the whole thing here, but the proof follows by playing around with the coefficients.</p>
<blockquote>
<p>(2) Let (c * A_ij) where i = 1, 2, ..., m; and j is fixed where j belongs to {1, 2, ..., n} denotes a linearly independent column in matrix cA</p>
<p>(3) Then I let S = { (c * A_ij)} be the set of all linearly independent columns in matrix cA, where each element of S satisfies (2)</p>
<p>(4) By how I define the set S, all elements in S are lin. ind. columns in matrix cA.</p>
</blockquote>
<p>Then I use the claim (1) to say that columns A_ij of matrix A must also be lin. ind.</p>
<p>I also note that by definition of the rank, it's the maximum number of lin. ind. columns (or rows) in a matrix. So I think rank(cA) is basically the cardinality of the set S. Then by (4), I conclude that when I "move" from each lin. ind. column of matrix cA to each lin. ind. column in matrix A, I didn't change the number of lin. ind. columns. Thus, rank(cA) = rank(A).</p>
<p>Would someone please help me check if there is anything missing or wrong in my proof ? Somehow I feel a bit shaky on how I define the indices for the linearly independent columns in matrix cA.
Thank you very much ^_^</p>
|
<p>The rank is the dimension of the range.</p>
<p>Now try to prove that for all $c\neq 0$, the range of $A$ is equal to the range of $cA$.</p>
<p>Hints:
$$
cA(x)=A(cx)\quad\mbox{and}\quad A(x)=cA\left( \frac{1}{c}x\right).
$$</p>
<p>This approach is easier.</p>
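The claim can also be checked numerically (a sketch assuming numpy; the matrix `A` is an arbitrary example): for every nonzero `c`, `cA` and `A` have the same rank, while `c = 0` collapses the rank to zero — which is why the answer restricts to $c\neq 0$.

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.],
              [0., 1.]])               # rank 2

# rank(cA) == rank(A) for every nonzero scalar c
ranks_equal = all(
    np.linalg.matrix_rank(c * A) == np.linalg.matrix_rank(A)
    for c in (3.0, -0.5, 1e-3)
)

rank_zeroed = np.linalg.matrix_rank(0.0 * A)   # c = 0 is the exceptional case
```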
| 14
|
linear-algebra
|
Dual Space Questions
|
https://math.stackexchange.com/questions/354235/dual-space-questions
|
<blockquote>
<p>Let <span class="math-container">$V$</span> be a finite dimensional vector space over a field <span class="math-container">$F$</span>.</p>
<p>Let <span class="math-container">$v\in V$</span> with <span class="math-container">$v$</span> not equal to <span class="math-container">$0$</span>. Show that there is <span class="math-container">$\varphi \in V^*$</span> such that <span class="math-container">$\varphi(v)$</span> is not equal to <span class="math-container">$0$</span>.</p>
</blockquote>
<p>I know that <span class="math-container">$V^*$</span> is the vector space consisting of all linear functionals on <span class="math-container">$V$</span> with the operations of addition and scalar multiplication. Not sure how to start this proof however. Any help would be appreciated. Thanks.</p>
|
<p><strong>Hint:</strong> Extend $v$ to a basis of $V$, say $\{v=v_1, v_2,v_3,\dots v_n\}$. A function $T:V\rightarrow F$ is linear iff $T\bigg(\sum_{i=1}^n\lambda_iv_i\bigg) =\sum_{i=1}^n\lambda_iT(v_i)$ for any $\lambda_i \in F$. Note that $T$ is uniquely determined by its action on a basis, so defining the values of $T(v_i)$ defines $T$ uniquely, and when defining $T$ on a basis, the values can be anything you want.</p>
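The hint can be made concrete in coordinates (a sketch assuming numpy; the vector `v` and the chosen basis extension are made-up examples): the rows of the inverse of the basis matrix are exactly the dual-basis functionals, and the first one does not vanish on $v$.

```python
import numpy as np

# Extend v to a basis of R^3: columns of B are v, e2, e3 (det B = 2, so a basis).
v = np.array([2., 1., 3.])
B = np.column_stack([v, [0., 1., 0.], [0., 0., 1.]])

# Rows of B^{-1} are the coordinate functionals for the columns of B,
# so phi = first row satisfies phi(v) = 1 and phi(e2) = phi(e3) = 0.
phi = np.linalg.inv(B)[0]
value = phi @ v                       # phi(v), nonzero as required
```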
| 15
|
linear-algebra
|
If the union of $A$ and $B$ is linearly independent then the intersection of the spans $= \{0\}$
|
https://math.stackexchange.com/questions/392636/if-the-union-of-a-and-b-is-linearly-independent-then-the-intersection-of-the
|
<blockquote>
<p><span class="math-container">$\newcommand{\sp}{\operatorname{sp}}$</span> Let <span class="math-container">$V$</span> be a vector space over <span class="math-container">$F$</span> field, and let <span class="math-container">$A,B$</span> be two different, disjoint, non-empty sets of vectors from <span class="math-container">$V$</span>.</p>
<p>Prove or disprove the following:
If <span class="math-container">$A \cup B$</span> are linearly independent, <span class="math-container">$\sp(A) \cap \sp(B) = \{0\}$</span>.</p>
</blockquote>
<p>It's easy to check with an example over <span class="math-container">$\mathbb{R}^2$</span>, because the <span class="math-container">$\text{span}$</span> of any set of <span class="math-container">$\mathbb{R}^2$</span> vectors has <span class="math-container">$(0,0)$</span> in it, but something keeps telling me that it might be disproved for other vector spaces.</p>
|
<p>Hint:</p>
<p>$$x\in Sp(A)\cap Sp(B)\implies \exists\,a_1,...,a_k\in A\;,\;b_1,...,b_m\in B\,,\,c_1,...,c_{k+m}\in \Bbb F\;\;s.t.$$</p>
<p>$$x=\sum_{i=1}^kc_ia_i=\sum_{i=1}^mc_{k+1}b_i\implies c_1a_1+\ldots c_ka_k-c_{k+1}b_1-\ldots c_mb_m=0\implies$$</p>
<p>$$\implies c_r=0\;\;\forall r=1,\ldots,m\;,\;\text{since}\;a_i,b_k\in A\cup B\ldots$$</p>
| 16
|
linear-algebra
|
Projection and inner product space
|
https://math.stackexchange.com/questions/397065/projection-and-inner-product-space
|
<p>Definition: Let <span class="math-container">$V$</span> be vector space, and <span class="math-container">$U$</span>, <span class="math-container">$W$</span> be two subspaces such that <span class="math-container">$V=U\oplus W$</span>.</p>
<p>We know that there exists for each <span class="math-container">$v \in V$</span> only one <span class="math-container">$u \in U$</span> and only one <span class="math-container">$w \in W$</span> such that <span class="math-container">$v=u+w$</span>. Using this, we define a projection <span class="math-container">$P_{U,W}\colon V\longrightarrow V$</span> by <span class="math-container">$P_{U,W}(v)=u$</span></p>
<p>Now my question is this:</p>
<blockquote>
<p>Let <span class="math-container">$V$</span> be an inner product space, and let <span class="math-container">$U$</span> be subspace of <span class="math-container">$V$</span>. Let <span class="math-container">$\left\{e_{1},\ldots ,e_{n}\right\}$</span> be an orthogonal basis for <span class="math-container">$U$</span>.</p>
<p>Let us define the orthogonal projection <span class="math-container">$P_{U}\colon V\longrightarrow V$</span> as</p>
<p><span class="math-container">$$P_{U}(v)=\sum_{i=1}^n \langle v,e_{i}\rangle e_{i}$$</span></p>
<p>I need to prove that <span class="math-container">$P_{U}=P_{U,U^{\perp}}$</span>.</p>
</blockquote>
<p><span class="math-container">$P_{U,U^{\perp}}$</span> is according to the definition in beginning.</p>
<p>How do I do it? I have been sitting on this for an hour and I have no clue.</p>
<p>Plus, I need to prove that <span class="math-container">$P_{U}$</span> is self-adjoint.</p>
<p>Basically, I have to prove that if <span class="math-container">$v=u+w$</span>, then <span class="math-container">$P_{U}(v)=\sum_{i=1}^n \langle v,e_{i}\rangle e_{i}$</span> and</p>
<p><span class="math-container">$P_{U,U^{\perp}}(v)=u$</span>, meaning I have to show <span class="math-container">$u=\sum_{i=1}^n \langle v,e_{i}\rangle e_{i}$</span>.</p>
<p>But how do I do it ?</p>
|
<p>Hint: you can prove $P_U=P_{U,U^\perp}$ by checking that $P_U(v)=P_{U,U^\perp}(v)$ for every $v\in V$.</p>
<hr>
<p>Added: You've got the start of the right strategy, but let me modify it a bit. Let's start with your $v=u+w$ with $u\in U$ and $w\in U^\perp$ (I think you might be forgetting about this last fact.)</p>
<p>You know that $P_{U,U^\perp}(v)=u$</p>
<p>Since the $e_i$ are a basis for $U$, you can write $u=\sum \alpha_ie_i$ so that $v=\sum \alpha_ie_i +w$ where $w\in U^\perp$. Now, compute $P_{U}(v)=P_{U}(\sum\alpha_ie_i +w)=\_\_\_\_$.</p>
<hr>
<p>Next hint:
$P_{U}(\sum\alpha_ie_i +w):=\sum_j \langle \sum_i\alpha_ie_i +w,e_j\rangle e_j$</p>
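Both claims — that $P_U$ agrees with $P_{U,U^\perp}$ and that it is self-adjoint — can be sanity-checked numerically (a sketch assuming numpy; the subspace is an arbitrary example): with an orthonormal basis stored as the columns of `Q`, the formula $P_U(v)=\sum_i\langle v,e_i\rangle e_i$ becomes the symmetric, idempotent matrix $P = QQ^T$.

```python
import numpy as np

# Orthonormal basis for U = col(M) via QR.
M = np.array([[1., 1.],
              [1., 0.],
              [0., 1.]])
Q, _ = np.linalg.qr(M)               # columns of Q: orthonormal basis of U
P = Q @ Q.T                          # matrix of P_U

v = np.array([3., 4., 5.])
u = P @ v                            # U-component of v
w = v - u                            # should lie in U^perp
residual = Q.T @ w                   # inner products <w, e_i>, all zero
```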
| 17
|
linear-algebra
|
On $C^0 [0, 1]$, define $f \cdot g = \int_0^1 f(x) g(x) dx$. For $f(x) = x$.
|
https://math.stackexchange.com/questions/561785/on-c0-0-1-define-f-cdot-g-int-01-fx-gx-dx-for-fx-x
|
<blockquote>
<p>a. find <span class="math-container">$||f||$</span></p>
<p>b. find all linear polynomials that are orthogonal to <span class="math-container">$x$</span></p>
</blockquote>
<p>Okay, so I know that</p>
<p><span class="math-container">$\|f\| = \sqrt{f_1^2 + f_2^2 + \cdots + f_n^2}$</span></p>
<p>and that linear polynomials are of the form <span class="math-container">$ax + b$</span></p>
<p>I am not sure, however, how to apply these to the actual question.</p>
|
<p>If you have a scalar product $(f,g)$ then the norm is defined by $\|f\| = \sqrt{(f,f)}$. In your case
$$ \|f\|^2 = \int_0^1 f(x)^2 dx$$</p>
<p>To find $\|f\|$ you just need to integrate $x^2$ from $0$ to $1$.</p>
<p>If you have a linear polynomial $ax+b$ which is orthogonal to $x$ then their scalar product is zero, so
$$ \int_0^1 x (ax+b)dx =0$$
and computing the integral you arrive at a necessary and sufficient condition for $a,b$.</p>
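Carrying out both integrals symbolically (a sketch assuming sympy): the norm works out to $1/\sqrt{3}$, and the orthogonality condition $\int_0^1 x(ax+b)\,dx = a/3 + b/2 = 0$ gives $b=-2a/3$.

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

# ||f||^2 = integral of x^2 over [0, 1] = 1/3, so ||f|| = 1/sqrt(3).
norm_sq = sp.integrate(x**2, (x, 0, 1))
norm_f = sp.sqrt(norm_sq)

# Orthogonality: integral of x*(a*x + b) over [0, 1] must vanish.
condition = sp.integrate(x * (a*x + b), (x, 0, 1))   # a/3 + b/2
sol = sp.solve(sp.Eq(condition, 0), b)               # b = -2a/3
```

So the linear polynomials orthogonal to $x$ are exactly the multiples of $x - 2/3$.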
| 18
|
linear-algebra
|
Prove or disprove: If $Null(A-B)=\mathbb R^n$ then $ A=B $
|
https://math.stackexchange.com/questions/617541/prove-or-disprove-if-nulla-b-mathbb-rn-then-a-b
|
<blockquote>
<p><span class="math-container">$A$</span> and <span class="math-container">$B$</span> are matrices of order <span class="math-container">$m\times n$</span>.</p>
<p>Prove or disprove: If <span class="math-container">$Null(A-B)=\mathbb R^n$</span> then <span class="math-container">$ A=B $</span></p>
</blockquote>
<p>Well, I'm not sure I understand: does <span class="math-container">$Null(A-B)=\mathbb R^n$</span> mean that the null span of <span class="math-container">$A$</span> minus the null span of <span class="math-container">$B$</span> is equal to the span of <span class="math-container">$\mathbb R^n$</span>?</p>
<p><strong>Edit:</strong></p>
<p>I got the solution: Let <span class="math-container">$(A-B)=M$</span> so using this: <span class="math-container">$$rank(M)+dim(null(M))=n$$</span> we can infer that <span class="math-container">$rank(M)=0 $</span> therefore <span class="math-container">$M=O\Rightarrow A=B$</span>.</p>
<p>But I don't understand why the rank is 0 and why does that mean that the matrices' difference is <span class="math-container">$O$</span>. Can anyone offer some insight ?</p>
|
<p>You might try by contrapositive: if $A\neq B$, then there must be some vector$~v$ such that $Av\neq Bv$. Then $(A-B)v\neq\ldots$</p>
<p>(continued) $\ldots\neq0$, since $(A-B)v=Av-Bv$ by definition. So $\def\Null{\operatorname{Null}}v\notin\Null(A-B)$ which proves $\Null(A-B)\neq\Bbb R^n$. We have shown $A\neq B\implies\Null(A-B)\neq\Bbb R^n$, which (as contrapositive) is equivalent to $\Null(A-B)=\Bbb R^n\implies A=B$.</p>
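The contrapositive can be seen concretely (a sketch assuming numpy; the matrices are made-up examples): if $A\neq B$, a standard basis vector picked at a differing column is not in the null space of $A-B$.

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[1., 2.],
              [3., 5.]])             # differs from A in the second column

j = int(np.argmax(np.any(A != B, axis=0)))   # index of a differing column
e_j = np.zeros(2)
e_j[j] = 1.0

image = (A - B) @ e_j                # nonzero, so e_j is not in Null(A - B)
```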
| 19
|
linear-algebra
|
Proving that a set of functions is a vector space
|
https://math.stackexchange.com/questions/984816/proving-that-a-set-of-functions-is-a-vector-space
|
<p>We're given that <span class="math-container">$V$</span> is a vector space and that <span class="math-container">$L(V)$</span> is the set of functions <span class="math-container">$T:V\rightarrow \mathbb{R}$</span> s.t. <span class="math-container">$T(a_1f_1+a_2f_2)=a_1T(f_1)+a_2T(f_2)$</span>. We must show that <span class="math-container">$L(V)$</span> is a vector space.</p>
<p>I was looking at <a href="https://math.stackexchange.com/questions/49733/prove-in-full-detail-that-the-set-is-a-vector-space">this</a> question asked before, and the top answer talks about the set inheriting properties from <span class="math-container">$\mathbb{R^2}$</span>. I was wondering if this were also the case in this problem, or whether we must show all eight properties of a vector space hold for this set. In the second case, I'm having trouble showing four properties. Namely:</p>
<ol>
<li>There always exists an <span class="math-container">$f,g\in L(V)$</span> s.t. <span class="math-container">$f+g=0$</span></li>
<li><span class="math-container">$c(kf)=(ck)f$</span></li>
<li><span class="math-container">$1f = f$</span></li>
<li><span class="math-container">$(c+k)f=cf+kf$</span></li>
</ol>
<p>I think I have an idea for the last one. Suppose <span class="math-container">$f\in L(V)$</span>. Then we have:</p>
<p><span class="math-container">$$(c+k)(f)(v) = (c+k)(f(v)) = cf(v)+kf(v)$$</span></p>
| 20
|
|
linear-algebra
|
Homogeneous system of equations , and sub-set K of $R^4$
|
https://math.stackexchange.com/questions/852442/homogeneous-system-of-equations-and-sub-set-k-of-r4
|
<p>Given that K,L are subsets of <span class="math-container">$\mathbb{R}^4$</span>:</p>
<p><span class="math-container">$K = \{(-5,8,14,0),(-1,4,2,4)\}, L = \{(0,1,-10,8),(0,3,-1,5)\}$</span></p>
<blockquote>
<p>Find a homogeneous system of equations whose solution set is spanned by K.</p>
<p>Also prove that L spans the solutions of that system too.</p>
</blockquote>
<p>by "solutions" I mean if Ax=b then "x" are the solutions.</p>
<p>I believe the answers are <span class="math-container">${{10} \over {3}}x+{{1} \over {3}}y+z=0 , {{8} \over {3}}x+{{-5} \over {3}}y -t = 0$</span></p>
<p>Any ideas how to approach this question? I believe this is very easy and something is tricky here.</p>
|
<p>Both span(K) and span(L) are two-dimensional, so in $\mathbb{R}^4$ we expect two linear equations (since the number of independent equations plus the dimension of the solution space equals the dimension of the ambient space). Logically, there should be infinitely many possible pairs of equations that work, similar to how a line in 3D space has infinitely many planes passing through it.</p>
<p>Let one of the equations be $ax + by + cz + dt = 0$. Since both of the basis vectors of $K$ satisfy this equation, we can substitute them both in, which gives two linear equations with $a, b, c, d$ as unknowns. Solving these simultaneously gives infinitely many combinations of $a, b, c, d$ that the vectors of K satisfy. Any two linearly independent choices of these give an appropriate pair of equations.</p>
<p>To show that L spans the same set of solutions, you just have to check that both vectors satisfy both the equations, and note that the vectors are also linearly independent and so have the correct number for the dimension.</p>
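The procedure above can be carried out with a computer algebra system (a sketch assuming sympy): the nullspace of the matrix whose rows are the K vectors gives valid coefficient vectors $(a,b,c,d)$ — each one is orthogonal to both K vectors by construction. Checking L is then a matter of substituting its vectors into the resulting equations in the same way.

```python
import sympy as sp

# Rows are the K vectors; each nullspace vector n gives an equation
# n1*x + n2*y + n3*z + n4*t = 0 satisfied by everything in span(K).
M = sp.Matrix([[-5, 8, 14, 0],
               [-1, 4,  2, 4]])

normals = M.nullspace()              # two independent coefficient vectors
checks = [M * n for n in normals]    # each should be the zero vector
```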
| 21
|
linear-algebra
|
If $AB = 0$, prove that the columns of matrix $B$ are vectors in the kernel of $A$
|
https://math.stackexchange.com/questions/1095451/if-ab-0-prove-that-the-columns-of-matrix-b-are-vectors-in-the-kernel-of
|
<blockquote>
<p>Let <span class="math-container">$A,B$</span> be <span class="math-container">$n\times n$</span> matrices.</p>
<p>If <span class="math-container">$AB=0$</span>, prove that the columns of matrix <span class="math-container">$B$</span> are vectors in the kernel of <span class="math-container">$Ax=0$</span>.</p>
</blockquote>
<p>I'm not sure how to approach this. I know that if <span class="math-container">$B = 0$</span> and <span class="math-container">$A$</span> isn't, then <span class="math-container">$Ax=0$</span> is when <span class="math-container">$x=0=B$</span>. But what if <span class="math-container">$A=0$</span>? Seems like in this case B doesn't have to be a part of the kernel.</p>
<p>Or perhaps I'm just missing something?</p>
|
<p><strong>Hint:</strong></p>
<p>$$ AB = \begin{pmatrix} Ab^1 & \cdots & Ab^n \end{pmatrix} $$</p>
<p>Where $b^i$ is the i-th column vector and the right side is the matrix you get by the multiplication.</p>
<p><strong>Details:</strong>
By comparing the two matrices you can now conclude that for all the column vectors $b$ of B it holds that:</p>
<p>$$ Ab = 0 $$</p>
<p>So all column vectors of B are in the kernel of A. </p>
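A small numerical example of the hint (a sketch assuming numpy; the matrices are made up so that $AB=0$): multiplying $A$ against each column of $B$ individually gives the zero vector.

```python
import numpy as np

A = np.array([[1., -1.],
              [2., -2.]])
B = np.array([[1., 1.],
              [1., 1.]])

product = A @ B                                        # the zero matrix
cols_in_kernel = [A @ B[:, i] for i in range(B.shape[1])]  # A b^i for each column
```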
| 22
|
linear-algebra
|
How do I find which set of functions is linearly independent?
|
https://math.stackexchange.com/questions/1097550/how-do-i-find-which-set-of-functions-is-linearly-independent
|
<blockquote>
<p>Choose the correct set of functions, which are not linearly independent.</p>
<ol>
<li><span class="math-container">$x^2-1$</span>, <span class="math-container">$2x^2-x+1$</span>, <span class="math-container">$3x^2-x$</span></li>
<li><span class="math-container">$1$</span>, <span class="math-container">$\tan x$</span>, <span class="math-container">$\cot x$</span></li>
<li><span class="math-container">$x^2$</span>, <span class="math-container">$x^3$</span>, <span class="math-container">$x^4$</span></li>
<li><span class="math-container">$\sin^2 x$</span>, <span class="math-container">$\cos^2 x$</span>, <span class="math-container">$\sin 2x$</span></li>
</ol>
</blockquote>
<p>I thought of adding two terms together to get another term. But how do I check if they are linearly dependent or independent?</p>
|
<p>Since
$3x^2-x-(x^2-1)=2x^2-x+1$,</p>
<p>the first set of functions is linearly dependent.</p>
<p>For the rest, compute the <a href="http://mathworld.wolfram.com/Wronskian.html" rel="nofollow">Wronskian</a>.</p>
<p>For example,</p>
<p>$W(x^2,x^3,x^4)=2x^6$, which is not identically zero, so the set $\{x^2,x^3,x^4\}$ is linearly independent.</p>
<p>Have a look at, for example, <a href="http://www.amazon.co.uk/Linear-Algebra-Serge-Lang/dp/0387964126/ref=sr_1_1?s=books&ie=UTF8&qid=1420803239&sr=1-1&keywords=9780387964126" rel="nofollow">this book</a> or <a href="http://www.math.uwo.ca/~volds/math030notes/Unit20.pdf" rel="nofollow">this link</a> if you would like to learn more.</p>
| 23
|
linear-algebra
|
Projections onto a subspace (orthogonal vs. non-orthogonal matrix vs. basis matrix)
|
https://math.stackexchange.com/questions/1303925/projections-onto-a-subspace-orthogonal-vs-non-orthogonal-matrix-vs-basis-matr
|
<p>Suppose</p>
<blockquote>
<p><span class="math-container">$A$</span> is our matrix</p>
<p><span class="math-container">$B$</span> is our basis for the matrix <span class="math-container">$A$</span></p>
<p><span class="math-container">$Q$</span> is orthogonal basis for matrix <span class="math-container">$A$</span></p>
<p><span class="math-container">$P=A(A^TA)^{-1}A^T$</span></p>
</blockquote>
<p>Is the following true:</p>
<p><span class="math-container">$Px=(QQ^T)x=(B(B^TB)^{-1}B^T)x$</span></p>
<p>In plain English:</p>
<p>When I project onto a subspace, it doesn't matter what matrix I use, as long as their span is equal, meaning they span the same subspace.</p>
<p>I'm asking because sometimes there are fewer columns in <span class="math-container">$B$</span> and <span class="math-container">$Q$</span> than in <span class="math-container">$A$</span>, so calculations are easier.</p>
<p>I'm an amateur mathematician, so I apologize if my wording is not correct.</p>
| 24
|
|
linear-algebra
|
understanding a linear transformation
|
https://math.stackexchange.com/questions/779180/understanding-a-linear-transformation
|
<p>Hello I'm trying to solve this question from Ron Larson's linear algebra textbook. But I'm just stuck on how to approach this question. Could someone please at least give me a hint on how to approach this sort of question?</p>
<blockquote>
<p>Suppose <span class="math-container">$T:\unicode{x211D}^2\rightarrow\unicode{x211D}^2$</span> such that
<span class="math-container">$T(1,0)=(0,1)$</span> and <span class="math-container">$T(0,1)=(1,0)$</span>.</p>
<p>i)Determine <span class="math-container">$T(x,y)$</span> for <span class="math-container">$(x,y)$</span> in <span class="math-container">$\unicode{x211D}^2$</span></p>
<p>ii)Give a geometric description of <span class="math-container">$T$</span></p>
</blockquote>
<p>Thank you</p>
|
<p>Note that $(x,y)=x(1,0)+y(0,1)$. Therefore, because $T$ is linear, $$T(x,y)=xT(1,0)+yT(0,1)=x(0,1)+y(1,0)=(y,x)$$
So $T$ maps the point $(x,y)$ to the point $(y,x)$. What does this do geometrically?</p>
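The geometric description can be checked with the matrix of $T$ (a sketch assuming numpy): swapping coordinates is the reflection across the line $y=x$, so applying $T$ twice is the identity, $(1,1)$ is fixed, and $(1,-1)$ is negated.

```python
import numpy as np

S = np.array([[0., 1.],
              [1., 0.]])             # matrix of T in the standard basis

p = np.array([3., 7.])
swapped = S @ p                      # T(3, 7) = (7, 3)

twice = S @ S                        # reflecting twice gives the identity
fixed = S @ np.array([1., 1.])       # points on the line y = x are fixed
flipped = S @ np.array([1., -1.])    # the perpendicular direction is negated
```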
| 25
|
linear-algebra
|
Dimension Theorem Corollary
|
https://math.stackexchange.com/questions/1198165/dimension-theorem-corollary
|
<p>Let $V$ and $W$ be vector spaces with $\dim V = \dim W$. If $T : V → W$ is
linear then $T$ is one-to-one if and only if $T$ is onto.
But this is true only when the dimensions of $V$ and $W$ are finite.
For instance, I came across the example <span class="math-container">$T : P(R)\to P(R)$</span> such that <span class="math-container">$T(f(x))=f'(x)$</span>. Here <span class="math-container">$T$</span> is onto but not one-to-one.
Can we generalize the case for infinite dimensions too in any way?</p>
|
<p>Firstly, note that without the <a href="http://en.wikipedia.org/wiki/Axiom_of_choice" rel="nofollow noreferrer">axiom of choice</a>, we can't speak about the dimension of a vector space in general, because <a href="https://math.stackexchange.com/a/207992/26369">there would be vector spaces without a basis</a>! For the rest of this post, I will be assuming the axiom of choice without comment.</p>
<hr />
<p>Every vector space has a dimension (the only allowed size for a basis) which is either a natural number or an <a href="http://en.wikipedia.org/wiki/Cardinal_number" rel="nofollow noreferrer">infinite cardinal</a>. For brevity, I will use <span class="math-container">$n$</span> for <span class="math-container">$\dim V$</span>.</p>
<p>There are a few important theorems at play here:</p>
<h2>Theorem 1: Adding <span class="math-container">$0$</span></h2>
<blockquote>
<p>Suppose that <span class="math-container">$n=x+y$</span> for cardinal numbers <span class="math-container">$n$</span>, <span class="math-container">$x$</span>, and <span class="math-container">$y$</span>. Then if <span class="math-container">$x=0$</span>, <span class="math-container">$y=n$</span>. (This could be written more simply as <span class="math-container">$0+n=n$</span>, or similar.)</p>
</blockquote>
<h2>Theorem 2: Subtracting <span class="math-container">$n$</span></h2>
<blockquote>
<p>Suppose that <span class="math-container">$n=x+y$</span> for cardinal numbers <span class="math-container">$n$</span>, <span class="math-container">$x$</span>, and <span class="math-container">$y$</span>, with <span class="math-container">$n$</span> finite. Then if <span class="math-container">$y=n$</span>, <span class="math-container">$x=0$</span>.</p>
</blockquote>
<p>This theorem <strong>fails</strong> if <span class="math-container">$n$</span> is infinite, and it can do so in a very concrete way. If you remove one element from an infinite set of size <span class="math-container">$n$</span>, then you still have an infinite set of size <span class="math-container">$n$</span> left over.</p>
<h2>Theorem 3: A theorem about dimension</h2>
<blockquote>
<p>If <span class="math-container">$n$</span> is finite, then the only <span class="math-container">$n$</span>-dimensional subspace of <span class="math-container">$V$</span> is <span class="math-container">$V$</span> itself.</p>
</blockquote>
<p><strong>Proof:</strong> Essentially, a linearly independent set is either already a basis for <span class="math-container">$V$</span>, or can be extended by choosing independent vectors until you reach a basis for <span class="math-container">$V$</span>. Extending a set of <span class="math-container">$n$</span> vectors would give you at least <span class="math-container">$n+1\ne n$</span> vectors (see Theorem 2), which is too many; a basis for the subspace must be a basis for <span class="math-container">$V$</span>.<span class="math-container">$\square$</span></p>
<p>This theorem <strong>fails</strong> if <span class="math-container">$n$</span> is infinite, for the same reason that Theorem 2 fails in that case. If you take a basis for <span class="math-container">$V$</span>, and remove, say, finitely many vectors from it, then you have a basis for a proper subspace of <span class="math-container">$V$</span> that has the same dimension.</p>
<h2>Theorem 4: Rank-Nullity</h2>
<blockquote>
<p><span class="math-container">$n=\dim \ker T+\dim \mathrm{im}\,T$</span>.</p>
</blockquote>
<p><a href="https://math.stackexchange.com/a/752069/26369">The rank-nullity theorem still holds</a> even when we may be talking about infinite cardinals.</p>
<h2>Theorem 5: Your corollary part 1</h2>
<blockquote>
<p>If <span class="math-container">$n$</span> is finite and <span class="math-container">$\dim W=n$</span>, then for any linear <span class="math-container">$T:V\to W$</span>, if <span class="math-container">$T$</span> is one-to-one then <span class="math-container">$T$</span> is onto.</p>
</blockquote>
<p><strong>Proof:</strong> If <span class="math-container">$T$</span> is one-to-one then <span class="math-container">$\dim \ker T=0$</span> so that <span class="math-container">$\dim \mathrm{im}\,T=n$</span> by theorem 1. By theorem 3 (here we use finiteness of <span class="math-container">$n$</span>) this forces <span class="math-container">$T$</span> to be onto. <span class="math-container">$\square$</span></p>
<p>This reliance on finiteness is unavoidable. <a href="http://en.wikipedia.org/wiki/Shift_operator#Sequences" rel="nofollow noreferrer">right-shift</a> is one-to-one from a space of one-sided infinite sequences (or similar) to itself, but not onto. In general, if <span class="math-container">$n$</span> is infinite then a bijection from "a basis for <span class="math-container">$V$</span>" to "all but one element of a basis for <span class="math-container">$W$</span>" generates a linear map akin to right-shift. Thus, this <strong>always has counterexamples</strong> if <span class="math-container">$n$</span> is infinite.</p>
<h2>Theorem 6: Your corollary part 2</h2>
<blockquote>
<p>If <span class="math-container">$n$</span> is finite and <span class="math-container">$\dim W=n$</span>, then for any linear <span class="math-container">$T:V\to W$</span>, if <span class="math-container">$T$</span> is onto then <span class="math-container">$T$</span> is one-to-one.</p>
</blockquote>
<p><strong>Proof:</strong> If <span class="math-container">$T$</span> is onto then <span class="math-container">$\dim \mathrm{im}\,T=n$</span>. By theorem 2 (here we use the finiteness of <span class="math-container">$n$</span>), <span class="math-container">$\dim \ker T=0$</span> so that <span class="math-container">$T$</span> is one-to-one. <span class="math-container">$\square$</span></p>
<p>This reliance on finiteness is unavoidable. The example in your question is onto but not one-to-one. Another common example is a <a href="http://en.wikipedia.org/wiki/Shift_operator#Sequences" rel="nofollow noreferrer">left-shift</a> operator. In general, if <span class="math-container">$n$</span> is infinite then a bijection from "all but one element of a basis for <span class="math-container">$V$</span>" to "a basis for <span class="math-container">$W$</span>" generates a linear map akin to left-shift if you send the remaining basis element of <span class="math-container">$V$</span> to <span class="math-container">$0_W$</span>. Thus, this <strong>always has counterexamples</strong> if <span class="math-container">$n$</span> is infinite.</p>
<hr />
<p>In conclusion, both halves of the corollary you mention have counterexamples no matter how nice spaces you pick <span class="math-container">$V$</span> and <span class="math-container">$W$</span> to be, and this ultimately boils down to the failure of theorem 2 in the infinite case. <strong>No</strong>, we cannot generalize.</p>
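Theorem 4, which drives the finite-dimensional corollary, is easy to check on a concrete matrix (a sketch assuming sympy; the matrix is an arbitrary example): the rank plus the dimension of the nullspace equals the number of columns.

```python
import sympy as sp

# A map R^3 -> R^3 that is neither one-to-one nor onto: rank 2, nullity 1.
T = sp.Matrix([[1, 2, 3],
               [0, 1, 4],
               [0, 0, 0]])

n = T.cols
rank = T.rank()
nullity = len(T.nullspace())         # dimension of the kernel
```

In finite dimensions, rank-nullity forces "injective" ($\text{nullity}=0$) and "surjective" ($\text{rank}=n$) to coincide for square matrices; as the answer explains, this equivalence is exactly what breaks when $n$ is infinite.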
| 26
|
linear-algebra
|
Does $ABC=D\implies \det(ABC)=\det(D )$?
|
https://math.stackexchange.com/questions/1332815/does-abc-d-implies-detabc-detd
|
<p><span class="math-container">$${\color{brown}{\text{Question I am trying to solve:}}}$$</span></p>
<p>Let <span class="math-container">$A,B$</span> and <span class="math-container">$X$</span> be 7 x 7 matrices such that <span class="math-container">$\det A=1$</span>, <span class="math-container">$\det B=3$</span> and</p>
<p><span class="math-container">$$A^{-1}XB^{t}=-I_7$$</span></p>
<p>where <span class="math-container">$I_7$</span> is the 7 x 7 identity matrix. Calculate <span class="math-container">$\det X$</span>.</p>
<p><span class="math-container">$$\color{brown}{------------------------------------}$$</span></p>
<p><strong>The way I thought to solve this is if the following is true:</strong></p>
<p><span class="math-container">$$A^{-1}XB^{t}=-I_7 \implies \det(A^{-1}XB^{t})=\det(-I_7 )$$</span></p>
<hr />
<p><span class="math-container">$$\text{Then I could solve this by: }$$</span></p>
<p><span class="math-container">$$\det (A^{-1}XB^{t})=\det(-I_7)$$</span></p>
<blockquote>
<p><span class="math-container">$$\text{Note (since A, X, B are of the same size):}$$</span><span class="math-container">$$\bbox[8pt, border: 1pt solid green]{\det(A^{-1}XB^{t})=\det(A^{-1})\det(X)\det(B^t)}$$</span></p>
<p><span class="math-container">$$\text{Note:}$$</span>
<span class="math-container">$$\det(B^t)=\det (B)$$</span></p>
</blockquote>
<p><span class="math-container">$$\det(A^{-1})\det(X)\det(B^t)=\det(-I_7)$$</span></p>
<p><span class="math-container">$$\frac{1}{1} \det(X) \cdot 3 = -1 \implies \bbox[8pt,border: 2pt #06f solid]{\det X=-\frac{1}{3}}$$</span></p>
<hr />
<p><strong>So is it correct to do this :</strong></p>
<p><span class="math-container">$$A^{-1}XB^{t}=-I_7 \implies \det(A^{-1}XB^{t})=\det(-I_7 )$$</span></p>
<p><span class="math-container">$$\color{gold}{\Large{?}}$$</span></p>
|
<p>I might be missing something here, but as far as I know:</p>
<p>$a = a' \Rightarrow f(a) = f(a')$</p>
<p>for all sets $A,B$; $a,a'\in A$ and functions $f : A\to B$.</p>
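Applying this to the original problem numerically (a sketch assuming numpy; the particular $A$ and $B$ are made-up matrices with the required determinants): solving $A^{-1}XB^{t}=-I_7$ for $X$ gives $X=-A(B^{t})^{-1}$, and the determinant comes out to $-1/3$ as computed above.

```python
import numpy as np

n = 7
# Unit upper triangular => det A = 1; diagonal with one entry 3 => det B = 3.
A = np.eye(n) + np.triu(np.random.default_rng(0).standard_normal((n, n)), 1)
B = np.eye(n)
B[0, 0] = 3.0

X = -A @ np.linalg.inv(B.T)          # the unique solution of A^{-1} X B^T = -I
lhs = np.linalg.inv(A) @ X @ B.T     # should equal -I_7
det_X = np.linalg.det(X)             # should equal (-1)^7 * det A / det B = -1/3
```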
| 27
|
linear-algebra
|
About the matrix of two linear transformations
|
https://math.stackexchange.com/questions/704052/about-the-matrix-of-two-linear-transformations
|
<p>I have an exercise to answer, and I don't know if I've done it the right way. This is only a little part of the exercise, but I have to know if what I've done so far is correct. Here we go:</p>
<p>Let $V$ be a $K$-vector space and $\dim(V)=4$. Let $B_{1}=(u_1,u_2,u_3,u_4)$ be a basis of $V$.
Let $W$ be a $K$-vector space and $\dim(W)=3$. Let $B_{2}=(v_1,v_2,v_3)$ be a basis of $W$.</p>
<p>Let $f,g$ be linear transformations such that:</p>
<p>\begin{equation*}
f : V \rightarrow W ,\\
g : W \rightarrow W ,
\end{equation*}</p>
<p>defined by $f(\lambda_1u_1+\lambda_2u_2+\lambda_3u_3+\lambda_4u_4)=(2\lambda_1+4\lambda_2+5\lambda_3-\lambda_4)v_1+(-\lambda_1+\lambda_2-\lambda_3-\lambda_4)v_2+(\lambda_1+\lambda_2+2\lambda_3+a\lambda_4)v_3$</p>
<p>and</p>
<p>$g(\mu_1v_1+\mu_2v_2+\mu_3v_3)=(\mu_1+3\mu_2+2\mu_3)v_1+(2\mu_1+\mu_2+3\mu_3)v_2+(3\mu_1+2\mu_2+\mu_3)v_3$</p>
<p>with $a \in K$.</p>
<p>Now I have to find the matrix of $f$ with the basis $B_1$ on the start and $B_2$ on the end, this is:$ M(f,B_1,B_2)$, and the matrix of $g$ with the basis $B_2$ on the start and $B_2$ on the end, this is:$ M(g,B_2,B_2)$. and then, the matrix of $g \circ f$ on $B_1$ and $B_2$.</p>
<p>What I have done is, for example with $f$ (with $g$ I think that I have to do it the same way), to write it like this, using that $f$ is linear:</p>
<p>$\lambda_1f(u_1)+\lambda_2f(u_2)+\lambda_3f(u_3)+\lambda_4f(u_4)= \lambda_1(2v_1-v_2+v_3)+\lambda_2(4v_1+v_2+v_3)+\lambda_3(5v_1-v_2+2v_3)+\lambda_4(-v_1-v_2+av_3)$,</p>
<p>and then I saw the relation $f(u_1)=2v_1-v_2+v_3, f(u_2)=4v_1+v_2+v_3, \dots$</p>
<p>Is that right? Thank you!</p>
|
<p>You can find the matrix of the maps with respect to the bases in either of two ways: from knowing how the coordinates of the result are obtained from the coordinates of an input, or from knowing how the elements of the basis of the domain are mapped to vectors in the codomain (which is what you have done).</p>
<p>This is a matter of convention, though the convention is almost uniform. It is that a vector is expressed as the column of its coordinates, for instance,
$$\vec x=\left(\begin{matrix}\lambda_1\\\lambda_2\\\lambda_3\\\lambda_4\end{matrix}\right),\vec y=\left(\begin{matrix}\mu_1\\\mu_2\\\mu_3\end{matrix}\right)$$
and in order to get $\vec y$ from $\vec x$, you just multiply it by the matrix:
$\vec y=A\vec x.$ The matrix has dimension $3\times4$, so you multiply it by a $4$-component column on the right to get a $3$-component column.</p>
<p>The actual formula should be known, but I will write it down: $$\mu_i=a_{i1}\lambda_1+a_{i2}\lambda_2+a_{i3}\lambda_3+a_{i4}\lambda_4,$$
where $i=1,2,3.$</p>
<p>The convention for the bases is different: to record how the basis vectors are mapped, you write the images in a row, with the matrix acting on the right of the row of basis vectors:
$$(f(u_1),f(u_2),f(u_3),f(u_4))=(v_1,v_2,v_3)A.$$
The explicit formula is $$f(u_j)=a_{1j}v_1+a_{2j}v_2+a_{3j}v_3.$$</p>
<p>Thus, in the first method the coefficients in front of each $\lambda_j$ fill the $j$-th column of the matrix, and in the second method the coefficients in the expansion of $f(u_j)$ form that same $j$-th column.</p>
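<p>To make the two conventions concrete, here is a numpy sketch building $M(f,B_1,B_2)$, $M(g,B_2,B_2)$ and the matrix of $g\circ f$ from the question (the value $a=1$ is an arbitrary sample choice for the parameter):</p>

```python
import numpy as np

a = 1.0  # arbitrary sample value for the parameter a
# columns: coordinates of f(u_1), ..., f(u_4) in the basis B2
F = np.array([[ 2., 4.,  5., -1.],
              [-1., 1., -1., -1.],
              [ 1., 1.,  2.,  a]])
# columns: coordinates of g(v_1), g(v_2), g(v_3) in the basis B2
G = np.array([[1., 3., 2.],
              [2., 1., 3.],
              [3., 2., 1.]])
GF = G @ F  # matrix of g ∘ f with respect to B1 and B2

# cross-check against the defining formula of g on a sample f-image
lam = np.array([1., 2., 3., 4.])       # coordinates of some x in B1
mu = F @ lam                           # coordinates of f(x) in B2
direct = np.array([mu[0] + 3*mu[1] + 2*mu[2],
                   2*mu[0] + mu[1] + 3*mu[2],
                   3*mu[0] + 2*mu[1] + mu[2]])
print(np.allclose(GF @ lam, direct))   # True
```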
| 28
|
linear-algebra
|
Linear Independence and Subset Relations
|
https://math.stackexchange.com/questions/1418113/linear-independence-and-subset-relations
|
<p>I've been reading the <a href="https://en.wikibooks.org/wiki/Linear_Algebra/Definition_and_Examples_of_Linear_Independence" rel="nofollow noreferrer">wikibook</a> on Linear Algebra and in the section 'Linear Independence and Subset Relations' it defines the following lemma:</p>
<blockquote>
<p>Lemma 1.14:
Any subset of a linearly independent set is also linearly independent. Any superset of a linearly dependent set is also linearly dependent.</p>
</blockquote>
<p>The following is my proof of the statement <em>"Any subset of a linearly independent set is also linearly independent"</em>:
<span class="math-container">$$S= \{v_1, \dots, v_n\}\space |\space a_1\cdot v_1 +\space ... +\space a_n\cdot v_n =0$$</span>
Let <span class="math-container">$$S_i=S-\{v_i\}$$</span>
<span class="math-container">$S_i$</span> is linearly dependent if <span class="math-container">$$\sum_{j=1,\space j\neq i}^n (b_j\cdot v_j)=0$$</span> for coefficients not all zero.</p>
<p>That is:</p>
<p><span class="math-container">$$\sum_{j=1}^n (b_j\cdot v_j) - 0\cdot v_i$$</span>
Let <span class="math-container">$a_i = b_i$</span> and using the constraint in the definition of <span class="math-container">$S$</span> we conclude <span class="math-container">$S_i$</span> is linearly independent.</p>
<hr />
<p>I'm uncertain about the validity/notation regarding my assignment of <span class="math-container">$a_i = b_i$</span> in the proof. Is there a more correct approach to this proof? I often find myself needing to make a set of 'placeholder' (for a lack of a better word) variables map to an equivalent set of variables and am concerned I'm doing it wrongly. The book's proof is simply 'This is clear'.</p>
<hr />
<h1>Second Attempt</h1>
<p>Let S be a linearly independent set of unique vectors <span class="math-container">$v_1,…,v_n$</span> such that <span class="math-container">$n\in \mathbb{Z}$</span> and <span class="math-container">$n\geq1$</span>. Without loss of generality let a set <span class="math-container">$U$</span> be a linearly dependent subset of <span class="math-container">$S$</span> such that <span class="math-container">$U= \{v_1,…,v_i \}$</span> for some <span class="math-container">$i<n$</span>. Because <span class="math-container">$U$</span> is a linearly dependent set, the element <span class="math-container">$v_1$</span> of <span class="math-container">$U$</span> can be written as a linear combination of the other elements of <span class="math-container">$U$</span> where coefficient <span class="math-container">$b_1\neq0$</span> and there must be coefficients which satisfy the equation other than the trivial case of <span class="math-container">$b_2=\dots =b_i=0$</span>.</p>
<p><span class="math-container">$$ v_1=(-b_2/b_1 )v_2+\dots +(-b_i/b_1 )v_i $$</span></p>
<p>Given the independence of <span class="math-container">$S$</span>, <span class="math-container">$v_1$</span> cannot be written as a non-trivial linear combination of vectors <span class="math-container">$v_1,\dots ,v_n$</span> such that coefficient <span class="math-container">$a_1\neq 0$</span> and not all coefficients <span class="math-container">$a_2,…,a_n$</span> are zero. That is:</p>
<p><span class="math-container">$$v_1\neq(-a_2/a_1 )v_2+\dots +(-a_n/a_1 )v_n$$</span></p>
<p>Expanding out this equation we have:</p>
<p><span class="math-container">$$v_1\neq(-a_2/a_1 )v_2+\dots +(-a_i/a_1 )v_i+(-a_n/a_1 )v_n$$</span></p>
<p>Using the dependence of <span class="math-container">$U$</span>, which states <span class="math-container">$v_1$</span> be written as a linear combination of the other elements of <span class="math-container">$U$</span> (i.e <span class="math-container">$\{v_2,…,v_i\}$</span>), where the vector coefficients are not all zero, we get:</p>
<p><span class="math-container">$$v_1\neq(v_1)+(-a_n/a_1 )v_n$$</span></p>
<p>Now given that not all the coefficients of <span class="math-container">$v_2,\dots,v_i$</span>, were zero we can have <span class="math-container">$a_n=0$</span> and still satisfy both conditions of the equation, namely that <span class="math-container">$a_1\neq 0$</span> and not all the coefficients are <span class="math-container">$0$</span>. This yields
<span class="math-container">$$v_1\neq v_1$$</span>
This cannot be true, so the assumption that <span class="math-container">$U$</span> is dependent must be wrong.</p>
|
<p>There are several mistakes in your proof. First of all, it is not clear what you want to say in your first line. You can assume
$$S= \{v_1, \dots, v_n\}\space$$
and assume it is linearly independent. But that does not tell you the equation you wrote. </p>
<p>I suppose you want to show it by contradiction. Since the question says "any subset", your assumption has to include a general subset. WLOG, we can assume the subset
$$\{v_{1}, \dots, v_{i}\}\subset S$$
is linearly dependent, for some $i<n$. So we have
$$b_1v_1+\cdots+b_iv_i=0$$</p>
<p>for some constants $b_1, \dots, b_i$ which are not all zeros.</p>
<p>Now see if you can proceed from here. </p>
| 29
|
linear-algebra
|
If a lower triangular matrix is nonsingular, then its inverse is also lower triangular
|
https://math.stackexchange.com/questions/1425984/if-a-lower-triangular-matrix-is-nonsingular-then-its-inverse-is-also-lower-tria
|
<p>I already have the result that says that if <span class="math-container">$U$</span> is upper triangular and non singular then <span class="math-container">$U^{-1}$</span> is also upper triangular. I want to use this result to prove the result for lower triangular matrix <span class="math-container">$n \times n$</span>.</p>
<h3>TRY:</h3>
<p>Let <span class="math-container">$A$</span> be a lower triangular matrix which is invertible. Let <span class="math-container">$U = A^T$</span>. Then <span class="math-container">$U$</span> is upper triangular and invertible. Hence <span class="math-container">$U^{-1}$</span> is upper triangular as well. In other words <span class="math-container">$(A^T)^{-1} = (A^{-1})^T $</span> is upper triangular. Taking the transpose of the transpose then gives us that <span class="math-container">$A^{-1}$</span> must be lower triangular. Is this a valid proof?</p>
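<p>A quick numerical check of the claim on a random example (a numpy sketch, independent of the proof):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
L = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)  # nonzero diagonal
Linv = np.linalg.inv(L)
# the strictly upper-triangular part of L^{-1} vanishes (up to roundoff)
print(np.allclose(np.triu(Linv, k=1), 0.0))  # True
```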
| 30
|
|
linear-algebra
|
Prove that there exists a matrix $B$ s.t ker$B=$Im$A$, Im$B=$ker$A$
|
https://math.stackexchange.com/questions/1566424/prove-that-there-exists-a-matrix-b-s-t-kerb-ima-imb-kera
|
<blockquote>
<p>Let <span class="math-container">$A$</span> be a square matrix.</p>
<p>a) Show that there always exists a square matrix B such that Ker <span class="math-container">$B =$</span> Im <span class="math-container">$A$</span> and Ker<span class="math-container">$A =$</span> Im<span class="math-container">$B$</span>.</p>
</blockquote>
<p>Here is my approach:</p>
<p>Let <span class="math-container">$A:\ V \rightarrow P$</span>. Denote the basis of Ker<span class="math-container">$A$</span> by <span class="math-container">$u_{1}, \dots , u_{r}$</span> where <span class="math-container">$r\leq n$</span>, <span class="math-container">$n$</span> is the size of <span class="math-container">$A$</span>. Let <span class="math-container">$B$</span> be a matrix with <span class="math-container">$u_{1}, u_{2}, \dots , u_{r}$</span> as the first <span class="math-container">$r$</span> columns, whose remaining columns from <span class="math-container">$r+1$</span> to <span class="math-container">$n$</span> are some linear combinations of the <span class="math-container">$u_{i}$</span>. Thus the first <span class="math-container">$r$</span> columns of <span class="math-container">$B$</span> span a subspace of <span class="math-container">$V$</span>; this is by definition the image of <span class="math-container">$B, \ \rightarrow$</span> Im<span class="math-container">$B=$</span>Ker<span class="math-container">$A$</span>. From here it's blank. I'm not even sure if my first approach is correct.</p>
<p>I will be grateful for any hint. I want to prove this with matrices since the next question is to prove that if <span class="math-container">$A$</span> is normal then <span class="math-container">$B$</span> is normal as well.</p>
|
<p>It is probably better to think in terms of linear maps instead of matrices. So you have a linear map $a$ on a vector space $V$. Let $v_{1}, \dots , v_{n}$ be a basis of $V$ such that $v_{1}, \dots, v_{k}$ are a basis of $\ker(a)$, so that $a(v_{k+1}), \dots , a(v_{n})$ is a basis of the image of $a$. Extend the $a(v_{i})$ to a basis $w_{1}, \dots, w_{k}, a(v_{k+1}), \dots , a(v_{n})$ of $V$, and define a linear map $b$ which is zero on the $a(v_{i})$ and maps each $w_{j}$ to $v_{j}$.</p>
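<p>The construction can be carried out numerically (a scipy sketch; the matrix $A$ below is an arbitrary rank-$2$ example, not from the question):</p>

```python
import numpy as np
from scipy.linalg import null_space, orth

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])   # rank 2, nullity 1

K = null_space(A)              # columns: a basis of ker A   (3 x 1)
M = orth(A)                    # columns: a basis of im A    (3 x 2)
W = null_space(M.T)            # completes im A to a basis of R^3

# b is zero on im A and sends the completing vectors to the kernel basis
C = np.hstack([M, W])                      # the full basis, as columns
target = np.hstack([np.zeros_like(M), K])  # images of those basis vectors
B = target @ np.linalg.inv(C)

print(np.allclose(A @ B, 0), np.allclose(B @ A, 0))  # True True
```

<p>Here $AB=0$ says $\operatorname{im}B\subseteq\ker A$ and $BA=0$ says $\operatorname{im}A\subseteq\ker B$; together with $\operatorname{rank}B=\dim\ker A$ these inclusions are equalities.</p>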
| 31
|
linear-algebra
|
Find a basis for $U\cap V$
|
https://math.stackexchange.com/questions/1398300/find-a-basis-for-u-cap-v
|
<blockquote>
<p>Let <span class="math-container">$$a = (0,2,3,-1)^T \quad b=(0,2,7,-2)^T \quad c = (0,-2,1,0)^T \quad u = (1,2,0,1)^T\quad v = (2,2,1,2)^T$$</span>
Let <span class="math-container">$U= \langle a,b,c \rangle, V = \langle u,v\rangle$</span></p>
<p>Then a) find a basis for <span class="math-container">$U$</span> and <span class="math-container">$V$</span>, b) find a basis for <span class="math-container">$U\cap V$</span>.</p>
</blockquote>
<p>I have solved a) and found a basis <span class="math-container">$\{a,b\}$</span> for <span class="math-container">$U$</span> and <span class="math-container">$\{u,v\}$</span> for V. But How can I combine this information to find a basis for <span class="math-container">$U\cap V$</span>?</p>
<p>I have tried:</p>
<p>Let <span class="math-container">$x \in U\cap V$</span>, then <span class="math-container">$x = \lambda_1 a+\lambda_2 b = \mu_1u+\mu_2 v$</span>.
Solving <span class="math-container">$\lambda_1 a + \lambda_2 b - \mu_1 u - \mu_2 v = 0$</span> results in</p>
<p><span class="math-container">$$\lambda_1 = -2r\quad \lambda_2 = r\quad \mu_1 = -2r\quad \mu_2 = r\quad (\forall r \in \mathbb{Q})$$</span></p>
<p>This means <span class="math-container">$a,b,u, v$</span> are linearly dependent. But now what? I know I can remove one basis vector and this would result in a set of independent basis vectors, but doesn't that break the spanning property?</p>
<p>I'm somewhat confused, could someone <strong>clarify</strong>, <strong>no</strong> solutions please.</p>
|
<p>Now you have two ways of expressing your vector $x$ in terms of a single arbitrary value $r$. Since $x$ was arbitrary, any element of the intersection has this form. So use one of the two ways to find a fixed vector $w$ such that your arbitrary $x = rw$ for some $r$. As a check of your work so far, you should get the same $w$ from using either $a, b$ or $u, v$ (up to a constant multiplier). Now can you figure out what the basis of $U \cap V$ is?</p>
| 32
|
linear-algebra
|
linear transformation that Im(T)=Ker(T)
|
https://math.stackexchange.com/questions/1429613/linear-transformation-that-imt-kert
|
<blockquote>
<p>Let <span class="math-container">$T:\mathbb{R}^2\rightarrow \mathbb{R}^2$</span> such that <span class="math-container">$T(x,y)=(2x-3y,\alpha x+\beta y)$</span> and <span class="math-container">$Ker(T)=Im(T)$</span></p>
<p>find <span class="math-container">$\alpha,\beta$</span></p>
</blockquote>
<p>How should I approach this?</p>
|
<p>You know that $T(1,0)=(2,\alpha)$ and $T(0,1)=(-3,\beta)$ belong to $\operatorname{Im} T$. Thus they also belong to $\operatorname{Ker} T$.</p>
<p>What can you deduce from the fact that $T(2,\alpha)=(0,0)$? Similarly, can you say something about $\beta$ from $T(-3,\beta)=(0,0)$?</p>
| 33
|
linear-algebra
|
IF $A$ is similar to $B$, then $A^{-1}$ is similar to $B^{-1}$
|
https://math.stackexchange.com/questions/1491948/if-a-is-similar-to-b-then-a-1-is-similar-to-b-1
|
<p>Suppose <span class="math-container">$A$</span> is similar to <span class="math-container">$B$</span> (That is: there is some nonsingular <span class="math-container">$C$</span> such that <span class="math-container">$B = C^{-1} A C $</span>). If <span class="math-container">$A$</span> is nonsingular, show that <span class="math-container">$B$</span> is also nonsingular and that <span class="math-container">$A^{-1}$</span> is similar to <span class="math-container">$B^{-1}$</span></p>
<h3>Attempt:</h3>
<p>Suppose <span class="math-container">$A$</span> is similar to <span class="math-container">$B$</span>. So, we can find nonsingular <span class="math-container">$C$</span> with <span class="math-container">$B = C^{-1} A C $</span>. If <span class="math-container">$A$</span> is nonsingular, then <span class="math-container">$B$</span> must be nonsingular since a product of nonsingular matrices is nonsingular. Taking inverses, we have <span class="math-container">$B^{-1} = (C^{-1} A C )^{-1} = C^{-1} A^{-1} C $</span> and so <span class="math-container">$B^{-1}$</span> and <span class="math-container">$A^{-1}$</span> are similar.</p>
<p>IS this a correct solution ?</p>
|
<p>Right idea, but $(C^{-1} A C )^{-1} = C^{-1} A^{-1} C$ might need to be justified a little more. You can't just distribute the exponent if that's what you were doing. It's not hard to show that you take the inverse of a product of matrices in reverse order, and that $(C^{-1})^{-1}=C$.</p>
<p>Otherwise, this is fine.</p>
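<p>The reverse-order rule is easy to confirm numerically (a numpy sketch on random matrices, shifted to be safely nonsingular):</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((n, n)) + n * np.eye(n)

B = np.linalg.inv(C) @ A @ C
# B^{-1} = C^{-1} A^{-1} C, so A^{-1} and B^{-1} are similar via the same C
print(np.allclose(np.linalg.inv(B),
                  np.linalg.inv(C) @ np.linalg.inv(A) @ C))  # True
```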
| 34
|
linear-algebra
|
Proving that a matrix is skew Hermitian
|
https://math.stackexchange.com/questions/1496710/proving-that-a-matrix-is-skew-hermitian
|
<p>Suppose <span class="math-container">$A \in \mathbb{C}^{n \times n} $</span> is skew hermitian: <span class="math-container">$A^* = -A$</span>. Suppose <span class="math-container">$B$</span> is unitarily similar to <span class="math-container">$A$</span>: That is there is some unitary matrix <span class="math-container">$Q$</span> such that <span class="math-container">$A = Q^{-1} B Q $</span>. I want to show that <span class="math-container">$B$</span> must be skew hermitian</p>
<h3>Attempt;</h3>
<p>We know <span class="math-container">$Q^* = Q $</span> as <span class="math-container">$Q$</span> is unitary. Also, we can write <span class="math-container">$B = QAQ^{-1}$</span> and so</p>
<p><span class="math-container">$$ B^* = (QAQ^{-1})^* = Q^{-1} (-A) Q = - Q^{-1} A Q $$</span></p>
<p>Can we assume that <span class="math-container">$Q^{-1} = Q $</span> as well? so that we can get <span class="math-container">$-B$</span> on the right hand side?</p>
| 35
|
|
linear-algebra
|
Basis and dimensions
|
https://math.stackexchange.com/questions/1459238/basis-and-dimensions
|
<p>How do I find a basis and the dimension of $A[x]$?</p>
<p>Consider the subset of $\mathbb R_4[x]$ given by $A[x]:=\{q(x)$ element of $\mathbb R_4[x]$ such that $q(2)=0=q(-3)\}$</p>
<p>I'm a bit confused because there are two conditions to be satisfied, $q(2)=0=q(-3)$</p>
|
<p>I would do the following: one knows that a polynomial vanishes at a point $a$ iff it is a multiple of $(x-a)$. Because $(x-2)$ and $(x+3)$ are coprime polynomials, a polynomial vanishes at both $2$ and $-3$ iff it is a multiple of $(x-2)(x+3)$.</p>
<p>So, the subspace you're looking at is the space of polynomials of the form $(x-2)(x+3)p(x)$ of degree at most $4$. This last condition is equivalent to $\deg p \leq 2$.</p>
<p>In conclusion, (using my favourite basis for the space of degree $\leq 2$ polynomials), $(x-2)(x+3)$, $(x-2)(x+3) x$ and $(x-2)(x+3)x^2$ form a basis of your subspace.</p>
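<p>A quick sympy check that the three proposed polynomials lie in $A[x]$ and are linearly independent (a sketch; degree-$\leq4$ polynomials are encoded by their five coefficients):</p>

```python
import sympy as sp

x = sp.symbols('x')
basis = [(x - 2) * (x + 3),
         (x - 2) * (x + 3) * x,
         (x - 2) * (x + 3) * x**2]

# each candidate vanishes at 2 and at -3
print(all(p.subs(x, 2) == 0 and p.subs(x, -3) == 0 for p in basis))  # True

# ascending coefficient vectors in R^5; rank 3 means linear independence
rows = []
for p in basis:
    c = sp.Poly(p, x).all_coeffs()[::-1]   # constant term first
    rows.append(c + [0] * (5 - len(c)))
print(sp.Matrix(rows).rank())  # 3
```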
| 36
|
linear-algebra
|
Explanation of notation - Linear Algebra
|
https://math.stackexchange.com/questions/1522119/explanation-of-notation-linear-algebra
|
<p>I'm reading the following piece of text:</p>
<blockquote>
<p>Let <span class="math-container">$T: V \to W$</span> and <span class="math-container">$S: U \to V$</span> be two linear transformations between vector spaces <span class="math-container">$U, V, W$</span> of finite dimension.</p>
<p>Since <span class="math-container">$S(U) \subset V, T(S(U)) \subset T(V)$</span>, i.e. <span class="math-container">$R(T \circ S) \subset R(T)$</span>. So rank<span class="math-container">$(T \circ S) \leq$</span> rank<span class="math-container">$(S)$</span></p>
</blockquote>
<p>What does <span class="math-container">$R(T)$</span> signify here? What does the <span class="math-container">$R$</span> mean? The row space?</p>
<p>The source of the text: <a href="http://www.math.ualberta.ca/%7Exichen/math22514w/20140212_printable.pdf" rel="nofollow noreferrer">http://www.math.ualberta.ca/~xichen/math22514w/20140212_printable.pdf</a></p>
<p>Section: "Ranks of compositions of linear transformations"</p>
|
<p>From context and common sense I am almost sure that $R(T)$, for example, refers to the range of a linear map $T$.</p>
<p>Friedberg's <em>Linear Algebra</em>, for instance, uses $N(T)$ to denote the null space of $T$ and $R(T)$ the range of $T$.</p>
| 37
|
linear-algebra
|
Trying to establish a norm inequality
|
https://math.stackexchange.com/questions/1469821/trying-to-establish-a-norm-inequality
|
<p>Let <span class="math-container">$A$</span> be an <span class="math-container">$n $</span> by <span class="math-container">$n$</span> matrix and say <span class="math-container">$A = LU $</span> is the LU factorization of <span class="math-container">$A$</span>. Suppose <span class="math-container">$|l_{ij}| \leq 1 $</span>; show that <span class="math-container">$||U||_{\infty} \leq 2^{n-1} ||A||_{\infty} $</span>, where <span class="math-container">$\|A\|_{\infty} =\max_{1 \leq i \leq n}\sum_{j=1}^{n}|a_{ij}|.$</span></p>
<h3>TRY:</h3>
<p>Suppose <span class="math-container">$a_i^T$</span> and <span class="math-container">$u_i^T$</span> are the ith rows of <span class="math-container">$A$</span> and <span class="math-container">$U$</span>. If we compute <span class="math-container">$LU$</span>, we obtain that</p>
<p><span class="math-container">$$ u_i^T = a_i^T - \sum_{j=1}^{i-1} l_{ij} u_j^T$$</span></p>
<p>With this, I tried the following. We know <span class="math-container">$||U||_{\infty} = \max_{1 \leq k \leq n} | \sum u_k^T | $</span>. Suppose <span class="math-container">$||U||_{\infty} = | \sum u_k^T | $</span>. Then</p>
<p><span class="math-container">$$ | \sum u_k^T | \leq \sum |u_k^T | = \sum \left|a_k^T - \sum_{j=1}^{k-1} l_{kj} u_j^T \right| \leq \sum |a_k^T| + \left| \sum_{j=1}^{k-1} l_{kj}u_j^T \right|$$</span></p>
<p>By hypothesis, the last term is at most</p>
<p><span class="math-container">$$ \sum \left( |a_k^T| + \sum_{j=1}^{k-1} |u_j^T| \right) $$</span></p>
<p>but, then here I am stuck. Perhaps I am on the wrong track on proving this? Any help would be greatly appreciated.</p>
|
<p>Let $A$ be an $n\times n$ matrix and suppose that $A$ has an LU factorization $A = LU$. Since $L$ is invertible and $\|\cdot\|_\infty$ is submultiplicative it follows that</p>
<p>$$
\|L^{-1}A\|_\infty = \|U\|_\infty \implies \|U\|_\infty \leq \|L^{-1}\|_\infty\|A\|_\infty
$$</p>
<p>By the process of Gaussian elimination $L^{-1} = L_{n-1}\cdots L_1$, where</p>
<p>$$
L_i = \left[
\begin{array}{cccccc}
1 & & & & & 0\\
& \ddots & & & &\\
& & 1 & & &\\
& & l_{i+1,i} & \ddots & &\\
& & \vdots & & \ddots &\\
0 & & l_{n,i} & & & 1\\
\end{array}
\right]
$$</p>
<p>for $i = 1,\ldots,n-1$. By properties of the matrices $L_i$ and by the assumption that all elements of $L$ are at most $1$ in magnitude, it follows that $\|L_i\|_\infty \leq 2$. Again using the submultiplicative property of $\|\cdot\|_\infty$ we have</p>
<p>$$
\|L^{-1}\|_\infty \leq \|L_{n-1}\|_\infty\cdots\|L_1\|_\infty \leq 2^{n-1}
$$</p>
<p>Therefore, we may conclude that $\|U\|_\infty \leq 2^{n-1}\|A\|_\infty$, as desired.</p>
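<p>The bound can also be observed numerically. scipy's partial-pivoting factorization returns $A = PLU$ with $|l_{ij}|\le 1$, and since $LU = P^{T}A$ and a row permutation does not change $\|\cdot\|_\infty$, the inequality applies as stated (a sketch on a random matrix):</p>

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n))
P, L, U = lu(A)                       # A = P @ L @ U, |L| entrywise <= 1

print(np.abs(L).max() <= 1.0)                        # True: partial pivoting
print(np.linalg.norm(U, np.inf)
      <= 2 ** (n - 1) * np.linalg.norm(A, np.inf))   # True
```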
| 38
|
linear-algebra
|
Consider the basis $B = \{(1, 2), (3, 4)\}$. Suppose $[x]_B =(7, 11)$ for some $x \in \mathbb R^2.$ Find $x$ with respect to $\mathcal E_2.$
|
https://math.stackexchange.com/questions/1610907/consider-the-basis-b-1-2-3-4-suppose-x-b-7-11-for-some
|
<blockquote>
<p><span class="math-container">$\mathcal E_i$</span> denotes the standard basis.</p>
<p><span class="math-container">$[x]_B$</span> denotes the the coordinate vector with respect to the basis <span class="math-container">$B$</span>.</p>
</blockquote>
<p><span class="math-container">$a(1, 0) + b(0, 1) = (x, y) \implies a = x, b = y$</span>.</p>
<p>So, <span class="math-container">$(x, y)$</span> is the coordinate vector of <span class="math-container">$(x, y)$</span> with respect to standard basis. Since <span class="math-container">$(x, y) = (7, 11)$</span>, we have that <span class="math-container">$(7, 11)$</span> is the coordinate vector of itself with respect to the standard basis.</p>
<p>Is this correct?</p>
|
<p>IMO it's best not to think of your vectors as elements of $\Bbb R^2$ -- I just don't find there to be any motivation for change of basis in $\Bbb R^n$. Instead just take your vectors as abstract objects which follow some easy rules and let the algebra take care of everything.</p>
<p>First we need to give some names to each of the four unnamed vectors in your question (two are implicit). So let $(1,2) = [a_1]_{\mathcal E_2}$, $(3,4) = [a_2]_{\mathcal E_2}$, and the two standard basis vectors be denoted $e_1$ and $e_2$. Then </p>
<p>$$[x]_B =(7, 11) \iff x = 7a_1+11a_2 \\ [a_1]_{\mathcal E_2} = (1,2) \iff a_1 = e_1 + 2e_2 \\ [a_2]_{\mathcal E_2} = (3,4) \iff a_2 = 3e_1 + 4e_2$$</p>
<p>Then just plugging in we get $$x = 7a_1 + 11a_2 = 7(e_1+2e_2) + 11(3e_1 + 4e_2) = 40e_1 + 58e_2 \iff \color{blue}{\require{enclose}\enclose{box}{[x]_{\mathcal E_2} = (40,58)}}$$</p>
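<p>In coordinates the whole computation is one matrix–vector product: the matrix whose columns are the $B$-vectors written in $\mathcal E_2$ converts $[x]_B$ into $[x]_{\mathcal E_2}$ (a quick numpy check):</p>

```python
import numpy as np

P = np.array([[1, 3],
              [2, 4]])        # columns: the B-basis vectors in E_2 coordinates
x_B = np.array([7, 11])
x_E = P @ x_B
print(x_E)                    # [40 58]
```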
| 39
|
linear-algebra
|
Linear operator and its corresponding matrix.
|
https://math.stackexchange.com/questions/1828644/linear-operator-and-its-corresponding-matrix
|
<blockquote>
<p>There's linear operator <span class="math-container">$A: \mathbb{R}_2[x] \to \mathbb{R}_2[x]$</span> defined as <span class="math-container">$(A(p))(x):=p'(x+1)$</span>.</p>
<p>Find all possible values for <span class="math-container">$a, b, c \in \mathbb{R}$</span> for which matrix <span class="math-container">$\begin{bmatrix} a & 1 & 0 \\ b & 0 & 1 \\ c & 0 & 0 \end{bmatrix} $</span> can be matrix of this linear operator with respect to some basis of space <span class="math-container">$\mathbb{R}_2[x]$</span>.</p>
</blockquote>
<p>Let's take arbitrary element of <span class="math-container">$\mathbb{R}_2[x]$</span>, that would be polynomial <span class="math-container">$p(x)=dx^2 + ex + f$</span>, where <span class="math-container">$d, e, f \in \mathbb{R}$</span>. Now, <span class="math-container">$p'=2dx + e$</span> and <span class="math-container">$p'(x+1)=2dx^2 + (2d + e)x +e$</span> which means that this operator actually does following mapping <span class="math-container">$(d,e,f) \mapsto (2d,2d+e,e)$</span>. but looking at matrix, there's no such ordered pair of three numbers that satisfies <span class="math-container">$(d,e,f)\mapsto (1, 0 , 0)$</span>. What am i doing wrong here, and what is the correct way to solve this?</p>
|
<p>The matrix of your operator (which I'll rename to $T$ in order not to get confused with matrices) with respect to the basis $\mathcal{B} = (1,x,x^2)$ is given by</p>
<p>$$ [T]_{\mathcal{B}} = A = \begin{pmatrix} 0 & 1 & 2 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix}. $$</p>
<p>Call your other matrix $B$. The operator $T$ will be represented by $B$ if and only if the matrix $B$ is similar to $A$. The characteristic polynomial of $A$ is $x^3$ (as $A$ is nilpotent). The characteristic polynomial of $B$ is</p>
<p>$$ \det(xI - B) = \det \begin{pmatrix} x - a & -1 & 0 \\ -b & x & -1 \\ -c & 0 & x \end{pmatrix} = (x-a)x^2 +(-bx - c) = x^3 - ax^2 - bx - c. $$</p>
<p>Since similar matrices have the same characteristic polynomial, we must have $a = b = c = 0$. Finally, you can check that indeed</p>
<p>$$ A = \begin{pmatrix} 0 & 1 & 2 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix}, \,\,\, B = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} $$</p>
<p>are similar. An explicit basis $\mathcal{C}$ with respect to which $[T]_{\mathcal{C}} = B$ is given by $\mathcal{C} = \left(1,\ x+1,\ \frac{x^2}{2} \right)$.</p>
<p>Actually, you can do this without any calculations. The operator $T$ is readily seen to be nilpotent, so $a = 0$ (as we must have $\operatorname{trace}(B) = a = 0$). It has a one-dimensional kernel, which immediately implies that $b = c = 0$.</p>
<hr>
<p>The solution above assumed that $(T(p))(x) = p'(x+1)$. If, instead, $(T(p))(x) = p'(x) \cdot (x + 1)$, then</p>
<p>$$ [T]_{\mathcal{B}} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 1 & 2 \\ 0 & 0 & 2 \end{pmatrix}. $$</p>
<p>The matrix $[T]_{\mathcal{B}}$ is upper triangular and so its eigenvalues are $\lambda = 0,1,2$. By comparing characteristic polynomials we see that</p>
<p>$$ x(x-1)(x-2) = (x^2 - x)(x-2) = x^3 - 3x^2 + 2x = x^3 - ax^2 - bx - c $$</p>
<p>so $a = 3, b = -2, c = 0$ and </p>
<p>$$ B = \begin{pmatrix} 3 & 1 & 0 \\ -2 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}. $$</p>
<p>The matrices $A$ and $B$ are similar and so there must exist a basis $\mathcal{C}$ such that $[T]_{\mathcal{C}} = B$. If $P^{-1}BP = A$ then the columns of $P$ represent the basis $\mathcal{C}$ with respect to the basis $\mathcal{B}$ (that is, $P = [\operatorname{id}]_{\mathcal{C}}^{\mathcal{B}}$).</p>
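<p>The first case can be double-checked symbolically: with the chain basis $\left(1,\ x+1,\ \tfrac{x^2}{2}\right)$ written in the monomial basis as the columns of a matrix $P$, conjugation turns $[T]_{\mathcal B}$ into the matrix with $a=b=c=0$ (a sympy sketch):</p>

```python
import sympy as sp

A = sp.Matrix([[0, 1, 2],
               [0, 0, 2],
               [0, 0, 0]])    # [T] in the monomial basis (1, x, x^2)
B = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [0, 0, 0]])    # the target matrix (a = b = c = 0)
# columns of P: the polynomials 1, x+1, x^2/2 in the monomial basis
P = sp.Matrix([[1, 1, 0],
               [0, 1, 0],
               [0, 0, sp.Rational(1, 2)]])
print(P.inv() * A * P == B)   # True
```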
| 40
|
linear-algebra
|
How to solve linear equation using inversion method?
|
https://math.stackexchange.com/questions/1559659/how-to-solve-linear-equation-using-inversion-method
|
<p>I do not understand the inversion method to solve a pair of linear equations:</p>
<blockquote>
2x<sub>1</sub> + 4x<sub>2</sub> = 4<br>
9x<sub>1</sub> + 3x<sub>2</sub> = 6
</blockquote>
<p>How to solve this? Please clarify steps.</p>
|
<p>Write the system in matrix form $Ax=b$, i.e.
$$\pmatrix{2&4\\9&3}\pmatrix{x_1\\x_2}=\pmatrix{4\\6}$$
Using the following formula for the inverse of a $2\times2$ matrix
$$\pmatrix{a&b\\c&d}^{-1}=\frac{1}{a d-bc}\pmatrix{d&-b\\-c&a}$$
one gets
$$x=\pmatrix{x_1\\x_2}=A^{-1}b=\frac{1}{2\cdot3-9\cdot4}\pmatrix{3&-4\\-9&2}\pmatrix{4\\6} =\\
-\frac{1}{30}\pmatrix{3\cdot4-4\cdot6\\-9\cdot4+ 2\cdot6}=
-\frac{1}{30}\pmatrix{-12\\-24}=\frac{1}{5}\pmatrix{2\\4} $$</p>
<p>Alternatively one obtain an inverse using matrix ranking
$$\pmatrix{2&4&|&1&0\\9&3&|&0&1}\rightarrow_{swap}\\
\pmatrix{9&3&|&0&1\\2&4&|&1&0}\rightarrow_{\text{second row}\times 4.5}\\
\pmatrix{9&3&|&0&1\\9&18&|&4.5&0}\rightarrow_{row_2=row_2-row_1}\\
\pmatrix{9&3&|&0&1\\0&15&|&4.5&-1}\rightarrow_{row_1=row_1/9,row_1=row_1/15}\\
\pmatrix{1&1/3&|&0&1/9\\0&1&|&3/10&-1/15}\rightarrow_{row_1=row_1-row_2/3}\\
\pmatrix{1&0&|&-1/10&2/15\\0&1&|&3/10&-1/15}
$$
Thus
$$
A^{-1}=\pmatrix{-1/10&2/15\\3/10&-1/15}=\frac{1}{30}\pmatrix{-3&4\\9&-2}
$$</p>
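<p>Both the inverse and the solution are easy to verify in a couple of lines (numpy sketch):</p>

```python
import numpy as np

A = np.array([[2., 4.],
              [9., 3.]])
b = np.array([4., 6.])

Ainv = np.linalg.inv(A)
x = Ainv @ b
print(np.allclose(Ainv, np.array([[-3., 4.], [9., -2.]]) / 30))  # True
print(np.allclose(x, [2/5, 4/5]))                                # True
```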
| 41
|
linear-algebra
|
Solve the system (3)
|
https://math.stackexchange.com/questions/1839440/solve-the-system-3
|
<blockquote>
<p>Solve the system</p>
<p><span class="math-container">$x_1 + x_2 -3x_3 = -2$</span></p>
<p><span class="math-container">$4x_1 + 3x_2 + 3x_3 = 2$</span></p>
<p><span class="math-container">$\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} = \begin{bmatrix}\\\\\\\end{bmatrix} + \begin{bmatrix}\\\\\\\end{bmatrix} s $</span></p>
</blockquote>
<p>Do I need to put this in <strong>RREF?</strong></p>
<p>Or how should I go about doing this?</p>
|
<p>The most straightforward way to solve this is to take the 2 equations, set them up as an <a href="https://en.wikipedia.org/wiki/Augmented_matrix" rel="nofollow">Augmented Matrix</a>, and compute the <em>RREF</em> like so:</p>
<p>$\left[\begin{array}{ccc|c}
1 & 1 & -3 & -2\\
4 & 3 & 3 &2
\end{array}\right] \longrightarrow RREF \longrightarrow
\left[\begin{array}{ccc|c}
1 & 0 & 12 & 8\\
0 & 1 & -15 & -10
\end{array}\right] $ </p>
<p>Since there are two pivots and three unknowns, the system has one free variable; write it as:</p>
<p>$x_1+12x_3=8$</p>
<p>and</p>
<p>$x_2-15x_3=-10$ </p>
<p>Let $x_3=s$ and rewrite the system as</p>
<p>$\left[\begin{array}{c} x_1\\ x_2\\ x_3 \end{array}\right]= \left[\begin{array}{c} 8 \\ -10 \\ 0 \end{array}\right]+ \left[\begin{array}{c} -12\\ 15\\ 1 \end{array}\right]s$</p>
<p>and there you have it!</p>
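<p>A parametric solution like this is easy to verify by substituting back into the original equations (numpy sketch):</p>

```python
import numpy as np

A = np.array([[1., 1., -3.],
              [4., 3.,  3.]])
b = np.array([-2., 2.])

particular = np.array([8., -10., 0.])
direction = np.array([-12., 15., 1.])   # spans the null space of A

for s in (-2.0, 0.0, 3.5):              # every choice of s must work
    x = particular + s * direction
    print(np.allclose(A @ x, b))        # True
```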
| 42
|
linear-algebra
|
Which of the following subsets of $\mathbb{R}^{3\times3}$ are subspaces of $\mathbb{R}^{3\times3}$
|
https://math.stackexchange.com/questions/1980454/which-of-the-following-subsets-of-mathbbr3-times3-are-subspaces-of-mat
|
<blockquote>
<p>Which of the following subsets of <span class="math-container">$\mathbb{R}^{3 \times 3}$</span> are subspaces of <span class="math-container">$\mathbb{R}^{3 \times 3}$</span>?</p>
<p>A. The <span class="math-container">$3 \times 3$</span> matrices with determinant 0<br/>
B. The <span class="math-container">$3 \times 3$</span> matrices whose entries are all integers<br/>
C. The invertible <span class="math-container">$3 \times 3$</span> matrices<br/>
D. The <span class="math-container">$3 \times 3$</span> matrices with all zeros in the first row<br/>
E. The diagonal <span class="math-container">$3 \times 3$</span> matrices<br/>
F. The symmetric <span class="math-container">$3 \times 3$</span> matrices</p>
</blockquote>
<p>I answered B, D, E and F, but it appears to be incorrect. How so?</p>
|
<p>(B) is false, since the set is not closed under scalar multiplication: there are $\;3\times3\;$ integer matrices which, when multiplied by $\;\frac12\;$, are no longer integer matrices (example?)</p>
| 43
|
linear-algebra
|
Bases for space of polynomials
|
https://math.stackexchange.com/questions/1756963/bases-for-space-of-polynomials
|
<p>I'm facing an exercise to determine basis for some spaces of polynomials. Here they are</p>
<blockquote>
<p>Consider the space of polynomials of degree equal or less than 3</p>
<p><span class="math-container">$U =$</span>{<span class="math-container">$p(t) \in \mathbb{R_3}[t]$</span> | <span class="math-container">$p(0)=0$</span>}</p>
<p><span class="math-container">$U =$</span>{<span class="math-container">$p(t) \in \mathbb{R_3}[t]$</span> | <span class="math-container">$p(1)=0$</span>}</p>
<p><span class="math-container">$U =$</span>{<span class="math-container">$p(t) \in \mathbb{R_3}[t]$</span> | <span class="math-container">$p(0)=p(1)$</span>}</p>
</blockquote>
<p>So my answer was <span class="math-container">$(x^3,x^2,x)$</span> for the first one <span class="math-container">$(x^3-1,x^2-1,x-1$</span>) for the second one and <span class="math-container">$(x^3-x, x^2-x, 1)$</span> for the last one.
When I checked the key for this question I found out they erased the vectors with <span class="math-container">$x^3$</span>...</p>
<p>Can someone explain me why?</p>
<p>I tried to write my vectors as coordinate vectors with respect to the canonical basis and then apply Gaussian elimination, but I didn't arrive at linearly dependent vectors, so I think we can't erase one, can we?</p>
<p>Thanks!</p>
|
<p>The maps $f,g,h\colon \mathbb{R}_3[x]\to\mathbb{R}$ defined by
\begin{align}
f(p)&=p(0)\\
g(p)&=p(1)\\
h(p)&=p(1)-p(0)
\end{align}
are easily seen to be linear and surjective. So their kernels (null spaces) have dimension $3$. The kernels are precisely the subspaces you have to find bases of, in the same order.</p>
<p>Since all three subspaces clearly contain polynomials of degree $3$ (and you found them), a basis for each of them <em>must</em> contain a polynomial of degree $3$.</p>
<hr>
<p>Your solution is, as far as I can see, correct, but let's check it.</p>
<p>The first set is clearly contained in $\ker f$ and also linearly independent. The second set is contained in $\ker g$ and the matrix of coordinates with respect to the basis $\{x^3,x^2,x,1\}$ is
$$
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
-1 & -1 & -1 \\
\end{bmatrix}
$$
that's easily seen to have rank $3$, so the set is linearly independent.</p>
<p>The third set is contained in $\ker h$ and the matrix is
$$
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
-1 & -1 & 0 \\
0 & 0 & 1
\end{bmatrix}
$$
that has rank $3$.</p>
| 44
|
linear-algebra
|
If you add row $1$ of $A$ to row $2$ to get $B$, how do you find ${ B }^{ -1 }$ from ${ A}^{ -1 }$?
|
https://math.stackexchange.com/questions/1765374/if-you-add-row-1-of-a-to-row-2-to-get-b-how-do-you-find-b-1
|
<blockquote>
<p>If you add row <span class="math-container">$1$</span> of <span class="math-container">$A$</span> to row <span class="math-container">$2$</span> to get <span class="math-container">$B$</span>, how do you find <span class="math-container">${ B }^{ -1 }$</span> from <span class="math-container">${ A}^{ -1 }$</span>?</p>
<p>Notice the order. The inverse of <span class="math-container">$B=\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} A \end{bmatrix}$</span> is ___.</p>
</blockquote>
<p>I didn't really know how to approach this question so I just tried to manipulate things at first.</p>
<p>This is what I did:</p>
<p><span class="math-container">$$\\ B=\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} { a }_{ 1,1 } & { a }_{ 1,2 } \\ { a }_{ 2,1 } & { a }_{ 2,2 } \end{bmatrix}=\begin{bmatrix} { a }_{ 1,1 } & { a }_{ 1,2 } \\ { { a }_{ 1,1 }+a }_{ 2,1 } & { a }_{ 1,2 }+{ a }_{ 2,2 } \end{bmatrix}$$</span></p>
<p><span class="math-container">$${ \Rightarrow B }^{ -1 }=\frac { 1 }{ ({ a }_{ 1,1 }{ a }_{ 1,2 }+{ a }_{ 1,1 }{ a }_{ 2,2 })-({ a }_{ 1,2 }{ a }_{ 1,1 }+{ a }_{ 1,2 }{ a }_{ 2,1 }) } \begin{bmatrix} { a }_{ 1,2 }+{ a }_{ 2,2 } & { -a }_{ 1,2 } \\ { { -a }_{ 1,1 }-a }_{ 2,1 } & { a }_{ 1,1 } \end{bmatrix}$$</span></p>
<p><span class="math-container">$${ \Rightarrow B }^{ -1 }=\frac { 1 }{ ({ a }_{ 1,1 }{ a }_{ 2,2 })-({ a }_{ 1,2 }{ a }_{ 2,1 }) } \begin{bmatrix} { a }_{ 1,2 }+{ a }_{ 2,2 } & { -a }_{ 1,2 } \\ { { -a }_{ 1,1 }-a }_{ 2,1 } & { a }_{ 1,1 } \end{bmatrix}$$</span></p>
<p><span class="math-container">$${ \Rightarrow B }^{ -1 }=\begin{bmatrix} \frac { { a }_{ 1,2 }+{ a }_{ 2,2 } }{ ({ a }_{ 1,1 }{ a }_{ 2,2 })-({ a }_{ 1,2 }{ a }_{ 2,1 }) } & \frac { { -a }_{ 1,2 } }{ ({ a }_{ 1,1 }{ a }_{ 2,2 })-({ a }_{ 1,2 }{ a }_{ 2,1 }) } \\ \frac { { { -a }_{ 1,1 }-a }_{ 2,1 } }{ ({ a }_{ 1,1 }{ a }_{ 2,2 })-({ a }_{ 1,2 }{ a }_{ 2,1 }) } & \frac { { a }_{ 1,1 } }{ ({ a }_{ 1,1 }{ a }_{ 2,2 })-({ a }_{ 1,2 }{ a }_{ 2,1 }) } \end{bmatrix}$$</span></p>
<p>Now, the inverse of A would be:</p>
<p><span class="math-container">$${ A }^{ -1 }=\frac { 1 }{ ({ a }_{ 1,1 }{ a }_{ 2,2 })-({ a }_{ 1,2 }{ a }_{ 2,1 }) } \begin{bmatrix} { a }_{ 2,2 } & { -a }_{ 1,2 } \\ { -a }_{ 2,1 } & { a }_{ 1,1 } \end{bmatrix}=\begin{bmatrix} \frac { { a }_{ 2,2 } }{ ({ a }_{ 1,1 }{ a }_{ 2,2 })-({ a }_{ 1,2 }{ a }_{ 2,1 }) } & \frac { { -a }_{ 1,2 } }{ ({ a }_{ 1,1 }{ a }_{ 2,2 })-({ a }_{ 1,2 }{ a }_{ 2,1 }) } \\ \frac { { -a }_{ 2,1 } }{ ({ a }_{ 1,1 }{ a }_{ 2,2 })-({ a }_{ 1,2 }{ a }_{ 2,1 }) } & \frac { { a }_{ 1,1 } }{ ({ a }_{ 1,1 }{ a }_{ 2,2 })-({ a }_{ 1,2 }{ a }_{ 2,1 }) } \end{bmatrix}$$</span></p>
<p>So, in order to find <span class="math-container">${ B }^{ -1 }$</span> from <span class="math-container">${ A}^{ -1 }$</span>, I would have to subtract column <span class="math-container">$2$</span> of <span class="math-container">${ A}^{ -1 }$</span> from column <span class="math-container">$1$</span> of <span class="math-container">${ A}^{ -1 }$</span></p>
<p>This answer seems to correspond with the answer for this problem in the back of the textbook, but I believe that I just got lucky. After I finished writing this all out, I realized that I lost generality by assuming that <span class="math-container">$A$</span> is a <span class="math-container">$2$</span> by <span class="math-container">$2$</span> matrix. How could I solve this without losing generality and perhaps more efficiently?</p>
|
<p>The inverse of $AB$ is the reverse product ${ B }^{ -1 }{ A }^{ -1 }$.</p>
<p>So by applying this to $B=\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} A \end{bmatrix}$, we get </p>
<p>$${ B }^{ -1 }={ A }^{ -1 }{ \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} }^{ -1 }$$</p>
<p>$$\Rightarrow { B }^{ -1 }={ A^{ -1 }\begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix} }$$</p>
<p>Therefore, you can find ${ B }^{ -1 }$ by subtracting column $2$ of ${ A}^{ -1 }$ from column $1$ of ${ A}^{ -1 }$.</p>
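<p>This column operation is easy to verify numerically (a sketch assuming numpy; the matrix $A$ below is an arbitrary invertible example, not from the question):</p>

```python
import numpy as np

E = np.array([[1.0, 0.0],
              [1.0, 1.0]])   # elementary matrix: adds row 1 to row 2
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])   # any invertible matrix

B = E @ A
B_inv = np.linalg.inv(B)

# B^{-1} = A^{-1} E^{-1}: subtract column 2 of A^{-1} from column 1
A_inv = np.linalg.inv(A)
C = A_inv.copy()
C[:, 0] = A_inv[:, 0] - A_inv[:, 1]

print(np.allclose(B_inv, C))  # True
```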
| 45
|
linear-algebra
|
Give the matrix for the transformation T:
|
https://math.stackexchange.com/questions/1840777/give-the-matrix-for-the-transformation-t
|
<blockquote>
<p><span class="math-container">$a)$</span>Give the matrix for the Transformation <span class="math-container">$T: \mathbb{R}^2 \rightarrow \mathbb{R}^2$</span> that first reflects points through the <span class="math-container">$x$</span> - axis and then reflects through the line <span class="math-container">$y = x$</span></p>
<p><span class="math-container">$b)$</span>How else can you describe this transformation?</p>
</blockquote>
<p>Wouldn't this matrix be (I'm guessing) <span class="math-container">$\begin{bmatrix}1&0\\0&1\end{bmatrix}$</span></p>
<p>I'm not really sure</p>
|
<p>It suffices to keep track of where $(1,0)$ and $(0,1)$ are mapped to. In particular, we have
$$
(1,0) \mapsto (1,0) \mapsto (0,1)\\
(0,1) \mapsto (0,-1) \mapsto (-1,0)
$$
So, the matrix of $T$ is the matrix with these columns. That is,
$$
[T] = \pmatrix{0&-1\\1&0}
$$
It is worth noting that this is in fact the counterclockwise rotation by $90^\circ$.</p>
<hr>
<p>We could also have expressed this composition of maps as a matrix multiplication. That is,
$$
\pmatrix{0&1\\1&0} \pmatrix{1&0\\0&-1} = \pmatrix{0&-1\\1&0}
$$</p>
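<p>The composition and its identification as a rotation can be checked numerically (a sketch assuming numpy):</p>

```python
import numpy as np

reflect_x = np.array([[1, 0], [0, -1]])    # reflection through the x-axis
reflect_diag = np.array([[0, 1], [1, 0]])  # reflection through y = x

# "First reflect through the x-axis, then through y = x":
# the map applied first sits rightmost in the product.
T = reflect_diag @ reflect_x

rotation_90 = np.array([[0, -1], [1, 0]])  # counterclockwise rotation by 90 degrees
print(np.array_equal(T, rotation_90))      # True
```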
| 46
|
linear-algebra
|
Prove, that for every pair $(a,b)$ there is another pair $(u,v)$ so that $a\cdot v = b \cdot u$. $a,b,c,d \in \mathbb{Z}^*$
|
https://math.stackexchange.com/questions/2048645/prove-that-for-every-pair-a-b-there-is-another-pair-u-v-so-that-a-cdot
|
<blockquote>
<p><span class="math-container">$\forall a,b\in \mathbb{Z}^*\ \exists u,v \in \mathbb{Z}^*: a\cdot v = b\cdot u \land \gcd(u,v) =1$</span></p>
<p>So basically I have to prove, that for every pair <span class="math-container">$(a,b)$</span> there is another pair <span class="math-container">$(u,v)$</span> so that <span class="math-container">$a\cdot v = b \cdot u$</span>.I thought about proving the logic opposite, using Euclids algorithm's and using equivalence via modulo 1. But I can't think of anything specific. It seems to me there is an idea behind this how to find a special <span class="math-container">$(u,v)$</span> (with assistance of named algorithms).</p>
<p>I hope you understand the problem and can provide hints or ideas how to approach this</p>
</blockquote>
|
<p>Hint: Try $u=a/d$, $v=b/d$ with $d=...$</p>
| 47
|
linear-algebra
|
Prove that $Av_i \bullet Av_j = v_i \bullet v_j$, $\forall i,j$
|
https://math.stackexchange.com/questions/1720707/prove-that-av-i-bullet-av-j-v-i-bullet-v-j-forall-i-j
|
<p>There are <span class="math-container">$2$</span> different bases in <span class="math-container">$R^2$</span>, <span class="math-container">$\{u_1,u_2\} , \{v_1,v_2\}$</span>. and <span class="math-container">$A$</span> is a matrix <span class="math-container">$nxn$</span>.</p>
<p>Is it possible to prove that</p>
<blockquote>
<p>If(<strong>Given</strong>) <span class="math-container">$Au_i \bullet Au_j = u_i \bullet u_j$</span>, <span class="math-container">$\forall i,j$</span>.</p>
<p>Then(<strong>Prove</strong>) <span class="math-container">$Av_i \bullet Av_j = v_i \bullet v_j$</span>, <span class="math-container">$\forall i,j$</span></p>
</blockquote>
<p>If yes, I'd like to know how can I prove that. I've no idea how to approach this.</p>
|
<p>Express $v_i=a_{i1}u_1+a_{i2}u_2$. Now by using that $\bullet$ is bilinear, i.e. $(a+b)\bullet c=a\bullet c+b\bullet c$ we get:</p>
<p>\begin{eqnarray}
Av_i\bullet Av_j&=&(a_{i1}Au_1+a_{i2}Au_2)\bullet (a_{j1}Au_1+a_{j2}Au_2)\\
&=& a_{i1}Au_1\bullet a_{j1}Au_1+a_{i2}Au_2\bullet a_{j1}Au_1+a_{i1}Au_1\bullet a_{j2}Au_2+a_{i2}Au_2\bullet a_{j2}Au_2\\
&=&a_{i1}u_1\bullet a_{j1}u_1+a_{i2}u_2\bullet a_{j1}u_1+a_{i1}u_1\bullet a_{j2}u_2+a_{i2}u_2\bullet a_{j2}u_2\\
&=&(a_{i1}u_1+a_{i2}u_2)\bullet a_{j1}u_1+(a_{i1}u_1+a_{i2}u_2)\bullet a_{j2}u_2\\
&=&(a_{i1}u_1+a_{i2}u_2)\bullet (a_{j1}u_1+a_{j2}u_2)\\
&=& v_i\bullet v_j
\end{eqnarray}
which is what we wanted.</p>
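<p>Numerically, any orthogonal matrix satisfies the hypothesis on a basis, and the bilinearity computation above predicts it then preserves dot products of arbitrary vectors. A sketch assuming numpy (the rotation matrix is an arbitrary example of such an $A$):</p>

```python
import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal: Au . Av = u . v

u1, u2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# The hypothesis holds on the basis {u1, u2}:
assert np.isclose((A @ u1) @ (A @ u2), u1 @ u2)

# ...and then for arbitrary vectors v_i, v_j, as the bilinearity argument shows:
v1 = 2 * u1 - 3 * u2
v2 = 0.5 * u1 + 4 * u2
ok = np.isclose((A @ v1) @ (A @ v2), v1 @ v2)
print(ok)  # True
```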
| 48
|
linear-algebra
|
Question on how to prove that a vector space is linear
|
https://math.stackexchange.com/questions/1777891/question-on-how-to-prove-that-a-vector-space-is-linear
|
<p>This is a past exam question that wasn't explained in my lecture notes:</p>
<blockquote>
<p>For vector spaces <span class="math-container">$U$</span> and <span class="math-container">$V$</span> over the same field of scalars <span class="math-container">$\mathbb{F}$</span></p>
<p>Let <span class="math-container">$U = V = P^{25}$</span> the space of real polynomial functions of degree up to <span class="math-container">$25$</span>. Verify that the mapping <span class="math-container">$ϕ$</span> sending <span class="math-container">$f ∈ P^{25}$</span> to <span class="math-container">$(x+3)\frac{df}{dx}+2$</span> is linear.</p>
</blockquote>
<p>What on earth does this question mean and how do i solve it?</p>
|
<p>Hint:</p>
<p>consider
$$
\phi(f+\alpha g)=(x+3)\frac{d}{dx}(f+\alpha g)+2
$$
where $\alpha\in \mathbb{F}$ and verify if it is an element of $P^{25}$ if $f,g \in P^{25}$ and if it is the same as:
$$
\phi(f)+\alpha \phi(g)=(x+3)\frac{df}{dx}+2+\alpha[(x+3)\frac{dg}{dx}+2 ]
$$</p>
<p>This proves linearity.</p>
| 49
|
linear-algebra
|
How does $A^{-1}Ax = b$ turn into $x = A^{-1}b$?
|
https://math.stackexchange.com/questions/1861830/how-does-a-1ax-b-turn-into-x-a-1b
|
<blockquote>
<p><strong>Background</strong>: Looking at properties and dealing with <a href="https://en.wikipedia.org/wiki/Matrix_(mathematics)" rel="nofollow noreferrer">Matrices</a> in linear algebra, and reading about <a href="http://mathworld.wolfram.com/MatrixInverse.html" rel="nofollow noreferrer">Matrix Inverse(s)</a></p>
<p><strong>Even More Background:</strong> I'm given a matrix <span class="math-container">$A$</span>, I found it's inverse i.e.: <span class="math-container">$A^{-1}$</span>, and now they haven me some <span class="math-container">$b$</span> vector and have told me to solve for <span class="math-container">$x$</span> since we have <span class="math-container">$Ax = b$</span></p>
<p><strong>Problem</strong>: I don't understand how <span class="math-container">$A^{-1}Ax = b$</span> turns into <span class="math-container">$x = A^{-1}b? $</span></p>
</blockquote>
<p>This may sound like a dumb question but I thought that <span class="math-container">$A * A^{-1} = I$</span>. I don't understand what they are doing here.</p>
<p>I tried dividing by an <span class="math-container">$A$</span></p>
<p><span class="math-container">$A^{-1}Ax/A = b/A \implies A^{-1}x = bA^{-1}$</span> which doesn't work...</p>
<p>I then tried dividing by a <span class="math-container">$A^{-1}$</span> which doesn't work either. What am I missing here. I know this is trivial</p>
|
<p>This looks like a typo to me.</p>
<p>What is true is that if $A$ is a matrix whose inverse $A^{-1}$ exists, we have</p>
<p>$$\begin{align*}Ax = b &\iff A^{-1} Ax = A^{-1} b \\
&\iff Ix = A^{-1}b\\
&\iff x = A^{-1}b\\\end{align*}$$</p>
<p>This is the matrix version of the usual rule with numbers: $ax = b \iff x = b/a$. With matrices, you can only "divide by" matrices that have an inverse. </p>
<p>You are exactly right that $A^{-1}A = I$; this is the fact I used when going from the second equation to the third.</p>
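<p>In code one rarely forms $A^{-1}$ explicitly; solving the system directly is the standard route. A sketch (assuming numpy, with an arbitrary example system):</p>

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x_via_inverse = np.linalg.inv(A) @ b  # x = A^{-1} b
x_via_solve = np.linalg.solve(A, b)   # preferred: never forms A^{-1}

print(np.allclose(x_via_inverse, x_via_solve))  # True
print(np.allclose(A @ x_via_solve, b))          # True: it really solves Ax = b
```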
| 50
|
linear-algebra
|
Finding formulas for the entries of a matrix
|
https://math.stackexchange.com/questions/1863817/finding-formulas-for-the-entries-of-a-matrix
|
<blockquote>
<p>Let <span class="math-container">$M = \begin{bmatrix}8&2\\-1&5\end{bmatrix}$</span> Find formulas for the entries of <span class="math-container">$M^n$</span> where <span class="math-container">$n$</span> is a positive integer</p>
<p><span class="math-container">$M^n = ?$</span> (Should be a <span class="math-container">$2 \times 2$</span> matrix)</p>
</blockquote>
<p>What do they mean exactly?</p>
|
<p>Often when you want to take a high power of a matrix $A$, you do what's called diagonalization. That is, you find two matrices $M$ and $D$ where $D$ is diagonal and $A = M D M^{-1}$. Then, we have that $A^n = (M D M^{-1})^n = M D^n M^{-1}$. Taking the power of a diagonal matrix is easy, so this is often a nice way to do this.</p>
<p>To add to this, $M$ and $D$ aren't arbitrary matrices: the diagonal entries of $D$ are the eigenvalues of $A$, and the columns of $M$ are the corresponding eigenvectors. This decomposition cannot always be done, but in your case it is possible. It's helpful to note that any real symmetric matrix can always be decomposed this way (though your matrix isn't symmetric), and moreover in such a way that $M^{-1} = M^T$.</p>
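<p>For the matrix in the question this works out nicely: its eigenvalues are $6$ and $7$ (trace $13$, determinant $42$). The closed form below was derived by hand from the eigendecomposition (it is not stated in the answer above) and is checked against repeated multiplication, assuming numpy:</p>

```python
import numpy as np

M = np.array([[8.0, 2.0],
              [-1.0, 5.0]])

# Entries of M^n obtained by diagonalizing M (eigenvalues 6 and 7,
# eigenvectors (1, -1) and (2, -1))
def M_power(n):
    return np.array([[-6**n + 2 * 7**n, -2 * 6**n + 2 * 7**n],
                     [ 6**n - 7**n,      2 * 6**n - 7**n]], dtype=float)

ok = all(np.allclose(np.linalg.matrix_power(M, n), M_power(n))
         for n in range(1, 8))
print(ok)  # True
```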
| 51
|
linear-algebra
|
Determine for which values of $a$ the matrix is diagonalizable over $\mathbb{R}$
|
https://math.stackexchange.com/questions/1902987/determine-for-with-values-of-a-the-matrix-is-diagonalizable-over-mathbbr
|
<blockquote>
<p>Determine for which values of <span class="math-container">$a$</span> <span class="math-container">\begin{pmatrix} 4 & 0 & 0 \\ 4 & 4 & a \\ 4 & 4 & 4 \end{pmatrix}</span></p>
<p>The matrix is diagonalizable</p>
</blockquote>
<p>So we first look at the characteristic polynomial:</p>
<p><span class="math-container">$$ \begin{vmatrix} 4-\lambda & 0 & 0 \\ 4 & 4-\lambda & a \\ 4 & 4 & 4-\lambda \end{vmatrix}=(4-\lambda)\begin{vmatrix} 4-\lambda & a \\ 4 & 4-\lambda \end{vmatrix}=\\=(4-\lambda)[(4-\lambda)^2-4a]=(4-\lambda)[\lambda^2-8\lambda+16-4a]=\\=4\lambda^2-32\lambda+64-16a-\lambda^3+8\lambda^2-16\lambda+4\lambda a=-\lambda^3+12\lambda^2-48\lambda+64-16a+4\lambda a$$</span></p>
<p>How can I find the roots to this polynomial? usually with 3rd power polynomial I use to find the factor of the free element and test to find when the polynomial<span class="math-container">$=0$</span></p>
<p>What Should I do in this case?</p>
|
<p>You can do this without much computation.</p>
<p>First off, for $a=0$ the matrix is triangular (but not diagonal) with equal diagonal entries, therefore not diagonalisable.</p>
<p>Then assume $a\neq 0$. Now the matrix is block triangular, so its characteristic polynomial is the product of those of the diagonal blocks $D_1=(4)$ and $D_2=(\begin{smallmatrix}4&a\\4&4\end{smallmatrix})$. The former characteristic polynomial is $X-4$ with root $4$, while the characteristic polynomial of $D_2$ is a quadratic polynomial with the sum of its roots being (the trace) $8$. Now $4$ is clearly not a root of the latter characteristic polynomial (as $D_2-4I$ is invertible since $a\neq0$), and consequently $D_2$ cannot have a double root over $\Bbb C$ (a double root would have to be $4$, half the trace), so the matrix is diagonalisable over $\Bbb C$.</p>
<p>Finally, since the question was about being diagonalisable over $\Bbb R$, one does have to know the discriminant of the characteristic polynomial of $D_2$. Here I do have to admit some computation: the discriminant is $64-4\times\det D_2 =16 a$, so the eigenvalues of $D_2$ are real (and the original matrix diagonalisable over $\Bbb R$) if and only if $a>0$.</p>
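<p>The sign condition can be spot-checked numerically (a sketch assuming numpy): for $a&gt;0$ the eigenvalues come out real, while for $a&lt;0$ a complex-conjugate pair appears.</p>

```python
import numpy as np

def matrix(a):
    return np.array([[4.0, 0.0, 0.0],
                     [4.0, 4.0, a],
                     [4.0, 4.0, 4.0]])

eig_pos = np.linalg.eigvals(matrix(1.0))    # a > 0: expect all real
eig_neg = np.linalg.eigvals(matrix(-1.0))   # a < 0: expect a complex pair

all_real_pos = np.allclose(np.imag(eig_pos), 0.0)
all_real_neg = np.allclose(np.imag(eig_neg), 0.0)
print(all_real_pos, all_real_neg)  # True False
```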
| 52
|
linear-algebra
|
$\mathbb{R}_{\le3}[X]$ is not a subspace of $\mathbb{R}_{\le4}[X]$ (polynomials in linear algebra)
|
https://math.stackexchange.com/questions/1904387/mathbbr-le3x-is-not-a-subspace-of-mathbbr-le4x-polynomials
|
<p>I'm sorry that this is probably a stupid question for this page, but I have no one to ask. I'm currently studying linear algebra by myself and I'm confused by this answer:</p>
<blockquote>
<p><span class="math-container">$V$</span> is not a subspace of <span class="math-container">$\mathbb{R}_{\le4}[X]$</span>, because <span class="math-container">$V$</span> is not a subset of <span class="math-container">$\mathbb{R}_{\le4}[X]$</span>,</p>
<p>where <span class="math-container">$V := \{ax^3 + bx^2+ cx + d \in \mathbb{R}_{\le3}[X] \quad|\quad b = 0\}$</span></p>
</blockquote>
<p>I understand that if it's not a subset it can't be a subspace, but why isn't it a subset?</p>
<p>Thank you in advance.</p>
|
<p>If, as lisyarus asked, $\mathbb R_{\le n}[x]$ represents polynomials of at most degree $n$ then I somewhat disagree with that solution (and why subscripts as powers??). But here is why I think they <em>might</em> claim it. As vector spaces,</p>
<p>$$\mathbb R_{\le 4}[x] \simeq \mathbb R^5$$</p>
<p>Meaning they are isomorphic, or essentially the same space. The polynomial (as a vector) $ax^4+bx^3+cx^2+dx+e$ is "essentially the same" as the vector $(a,b,c,d,e)\in \mathbb R^5$. Since there is no mentioned coefficient of $x^4$ I assume they mean it is a fundamentally different thing, just as $(a,b,c) \notin \{(a,b,c,d) \ | a,b,c,d\in\mathbb{R}\}$</p>
<p>However, I interpret $\mathbb{R}_{\le 3}[x]$ as a subspace of $\mathbb{R}_{\le 4}[x]$ in the following sense $$\{ax^4+bx^3+cx^2+dx+e \ |\ a=0, b,c,d,e\in\mathbb R\}$$</p>
<p>And similarly for $\mathbb R_{\le n}[x]$ for any $n\ge 3$.</p>
<p>Second, since</p>
<p>$$\{ax^3+bx^2+cx+d \ |\ b=0\}$$</p>
<p>Is certainly a subspace of $\mathbb R_{\le 3}[x]$, I would also say its a subset of $\mathbb R_{\le 4}[x]$, however my guess at their interpretation from before still stands. But without your explicit definitions and lecture materials I cannot guess any better.</p>
| 53
|
linear-algebra
|
Parallel vectors and rank of matrix
|
https://math.stackexchange.com/questions/2129639/parallel-vectors-and-rank-of-matrix
|
<blockquote>
<p>Suppose <span class="math-container">$v_1, v_2, v_3$</span> are (row) vectors in <span class="math-container">$\mathbb{R}^3$</span>, and they are parallel, then what you can say about the rank of the matrix:</p>
<p><span class="math-container">\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}</span></p>
</blockquote>
<p><strong>Note:</strong> So this is a <span class="math-container">$3 \times 3$</span> matrix with rows <span class="math-container">$v_1, v_2, v_3$</span>.</p>
<p>The book points out <span class="math-container">$\text{rank } > 1$</span>, but why is this true?</p>
|
<p>If the vectors are parallel (that is, each $v_i$ is a constant multiple of each $v_j$) then the rank of the matrix is actually $\leq 1$ because the dimension of the row space (the span of the rows) is $\leq 1$. If some $v_i$ is non-zero then the rank will be one. If $v_1 = v_2 = v_3 = 0$ then the rank will be zero.</p>
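<p>Both cases are easy to confirm with numpy's rank computation (a sketch; the row vector below is an arbitrary example):</p>

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
A = np.vstack([v, 2 * v, -5 * v])    # three mutually parallel rows
print(np.linalg.matrix_rank(A))       # 1

Z = np.zeros((3, 3))                  # all rows zero
print(np.linalg.matrix_rank(Z))       # 0
```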
| 54
|
linear-algebra
|
Rank of matrix containing NON parallel vectors
|
https://math.stackexchange.com/questions/2131053/rank-of-matrix-containing-non-parallel-vectors
|
<blockquote>
<blockquote>
<p>Suppose <span class="math-container">$v_1, v_2, v_3$</span> are (row) vectors in <span class="math-container">$\mathbb{R}^3$</span>, and they are <strong>not</strong> parallel, then what you can say about the rank of the matrix:</p>
</blockquote>
<p><span class="math-container">\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}</span></p>
</blockquote>
<p>The answer says <span class="math-container">$\text{rank } A > 1$</span>, where <span class="math-container">$A$</span> is the matrix.</p>
<p>But why is this true?</p>
<p><strong>From the definition of rank</strong></p>
<blockquote>
<p><strong>Rank:</strong> <em>The maximum number of linearly independent row vectors in the matrix. Both definitions are equivalent.</em></p>
</blockquote>
<p>Clearly, if <span class="math-container">$v_1, v_2, v_3$</span> are not parallel, then the set <span class="math-container">$\{v_1, v_2, v_3\}$</span> is a linearly independent set, so <span class="math-container">$\text{rank }A = 3$</span> should hold shouldn't it?</p>
| 55
|
|
linear-algebra
|
Dot product of projection and vector?
|
https://math.stackexchange.com/questions/2131202/dot-product-of-projection-and-vector
|
<blockquote>
<p>Suppose <span class="math-container">$P$</span> is a plane and <span class="math-container">$x$</span> is a vector (both in <span class="math-container">$\mathbb{R^3}$</span>), can we say that</p>
<p><span class="math-container">$$x \cdot \text{proj} _{P}x = 0$$</span></p>
</blockquote>
<p>For the dot product, must it always be <span class="math-container">$0$</span>?</p>
|
<p>Think about it. If $p$ is some non-zero vector, the projection of $u$ onto $p$ is given by $$\frac{u\cdot{p}}{|p|^2}p$$ then,
$$
u\cdot{\frac{u\cdot{p}}{|p|^2}p}
$$
But we can see that $\frac{u\cdot{p}}{|p|^2}$is just a scalar that we can factor out of the dot product, thus we are left with
$$
\frac{u\cdot{p}}{|p|^2}u\cdot{p}=\frac{(u\cdot{p})^2}{|p|^2}
$$
Which is zero if and only if $u\perp p$.</p>
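<p>A quick numeric illustration (a sketch assuming numpy) with a vector $u$ not perpendicular to $p$, so the dot product is the non-zero value $(u\cdot p)^2/|p|^2$:</p>

```python
import numpy as np

u = np.array([3.0, 1.0])
p = np.array([1.0, 0.0])

proj = (u @ p) / (p @ p) * p    # projection of u onto p
lhs = u @ proj                  # u . proj_p(u)
rhs = (u @ p) ** 2 / (p @ p)    # the formula from the answer

print(lhs, rhs)  # 9.0 9.0 -- non-zero, since u is not perpendicular to p
```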
| 56
|
linear-algebra
|
A question about a proof that has to do with diagonal matrices
|
https://math.stackexchange.com/questions/2139992/a-question-about-a-proof-that-has-to-do-with-diagonal-matrices
|
<blockquote>
<p>Show that if <span class="math-container">$A$</span> is a diagonal matrix then orthogonal diagonalising matrix <span class="math-container">$Q = \text {Identity}.$</span></p>
<p>Proof: Let <span class="math-container">$A$</span> be a diagonal matrix and if <span class="math-container">$Q = I,$</span> then <span class="math-container">$Q^{-1}AQ = I^{-1}AI = IAI = A.$</span> Therefore the identity matrix <span class="math-container">$I$</span> orthogonally diagonalizes <span class="math-container">$A$</span>. Thus our orthogonal matrix <span class="math-container">$Q = I$</span>.</p>
</blockquote>
<p>I don't get the proof here. It looks like they have proved any matrix is orthogonally similar to itself. Can someone elaborate? Maybe I am missing something important?</p>
|
<p>A matrix $A$ is diagonalizable if there exist an invertible matrix $P$ and a diagonal matrix $D$ such that $A=PDP^{-1}$. Now for $A$ diagonal you can take $D=A$ and $P=I$, and this proves that $A$ is diagonalizable (as trivially expected).</p>
<p>Moreover orthogonally diagonalizable is when the matrix $P$ is orthogonal. For diagonal matrices $A$ the matrix of change of basis $P$ is the identity since $A$ is already in diagonal form. So it says that a diagonal matrix is orthogonally diagonalizable since $Q=I$ satisfies $A=QDQ^{-1}$ with $D=A$ diagonal and $Q$ orthogonal.</p>
| 57
|
linear-algebra
|
Structure of a mapping comes from the Codomain?
|
https://math.stackexchange.com/questions/2014698/structure-of-a-mapping-comes-from-the-codomain
|
<blockquote>
<p>Show that:</p>
<p>If <span class="math-container">$A$</span> is a non empty set and <span class="math-container">$R$</span> a ring, then <span class="math-container">$\operatorname{map}(A,R)$</span>, is a ring too, with the following operations:</p>
<p><span class="math-container">$f+g$</span> is defined by: <span class="math-container">$(f+g)(x):=f(x)+g(x)$</span> for all <span class="math-container">$x\in A$</span></p>
<p><span class="math-container">$f\cdot g$</span> is defined by: <span class="math-container">$(f\cdot g)(x):= f(x)\cdot g(x)$</span> for all <span class="math-container">$x\in A$</span></p>
</blockquote>
<p>I am stuck at the very first step, showing that <span class="math-container">$f(x)+g(x)$</span> is an element of <span class="math-container">$\operatorname{map}(A,R)$</span>.</p>
<p>Could you give me the tools, apart from the definitions, to prove this on my own?</p>
<p>Also is there a more elegant way than "axiom"-checking?</p>
<p><span class="math-container">$\operatorname{map}(A,R)$</span> is the set of all functions from <span class="math-container">$A$</span> to <span class="math-container">$R$</span>.</p>
|
<p>What you must show is not that $f(x)+g(x)$ lies in $\operatorname{map}(A,R)$, but that the function $h=f+g$ does, where $h:A\rightarrow R$ is defined by $h(x)=f(x)+g(x)$; for each $x\in A$ this value lies in $R$ because $R$ is closed under addition.</p>
| 58
|
linear-algebra
|
Prove that $X\oplus Y$ is a Dedekind cut
|
https://math.stackexchange.com/questions/2021755/prove-that-x-oplus-y-is-a-dedekind-cut
|
<p>Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be a Dedekind cut. Now let <span class="math-container">$X\oplus Y=\{x+y\mid x\in X,y\in Y\}$</span>. Show that <span class="math-container">$X\oplus Y$</span> is again a Dedekind cut. i.e. it must fulfill the following conditions:</p>
<blockquote>
<p>i) <span class="math-container">$X\oplus Y\not=\emptyset\;$</span> and <span class="math-container">$\mathbb{Q}\setminus\left(X\oplus Y\right)\not=\emptyset$</span></p>
<p>ii) Let <span class="math-container">$x\in X\oplus Y$</span> and <span class="math-container">$r\in\mathbb{Q}$</span> such that <span class="math-container">$r<x$</span>, then <span class="math-container">$r\in X\oplus Y\;$</span> must follow.</p>
<p>iii) <span class="math-container">$X\oplus Y$</span> has no maximum.</p>
</blockquote>
<hr />
<p>I was able to prove <span class="math-container">$X\oplus Y\not=\emptyset$</span> by contradiction, however, I'm kind of stuck with anything else. For example, if I try to contradict <span class="math-container">$\mathbb{Q}\setminus\left(X\oplus Y\right)\not=\emptyset$</span> I end up with this:</p>
<p>Proof by contradiction. Let <span class="math-container">$\mathbb{Q}\setminus\left(X\oplus Y\right)=\emptyset$</span>.
<span class="math-container">$\Rightarrow\mathbb{Q}\setminus\{x+y\mid x\in X,y\in Y\}=\emptyset\\\Rightarrow\{x+y\mid x\in X,y\in Y\}=\mathbb{Q}$</span></p>
<p>To prove equality, we must show <span class="math-container">$\mathbb{Q}\subseteq \left(X\oplus Y\right)$</span> and <span class="math-container">$\left(X\oplus Y\right)\subseteq \mathbb{Q}$</span>.</p>
<ol>
<li><span class="math-container">$\left(X\oplus Y\right)\subseteq \mathbb{Q}$</span>: We know <span class="math-container">$X\subset Q$</span> and <span class="math-container">$Y\subset Q$</span>. (what now??)</li>
</ol>
|
<p>HINT: To show that $\Bbb Q\setminus(X\oplus Y)\ne\varnothing$, show that there are $r\in\Bbb Q\setminus X$ and $s\in\Bbb Q\setminus Y$ such that $r,s\ge 0$, and then show that $x+y<r+s$ for each $x\in X$ and $y\in Y$.</p>
<p>For (ii), suppose that $r\in\Bbb Q$ and $r<z\in X\oplus Y$. By definition there are $x\in X$ and $y\in Y$ such that $z=x+y$; let $d=z-r$.</p>
<ul>
<li>Is $d\in\Bbb Q$? Is $y-d\in Y$? How can you make use of this to show that $r\in X\oplus Y$?</li>
</ul>
<p>For (iii), suppose that $X\oplus Y$ has a maximum element $z$; there are $x\in X$ and $y\in Y$ such that $z=x+y$. Get a contradiction by showing that $x$ must be the maximum element of $X$.</p>
| 59
|
linear-algebra
|
TRUE or FALSE? Eliminating z from x + 2y + 3z = 2, 3x + 2y + 3z = 6 and 2x + 3y = 5 gives x + 2y = 2 .
|
https://math.stackexchange.com/questions/2202276/true-or-false-eliminating-z-from-x-2y-3z-2-3x-2y-3z-6-and-2x-3y
|
<blockquote>
<p>Is the following statement true or false?</p>
<p>Eliminating <span class="math-container">$z$</span> from:<br />
<span class="math-container">$x + 2y + 3z = 2$</span>,<br />
<span class="math-container">$3x + 2y + 3z = 6$</span> and,<br />
<span class="math-container">$2x + 3y = 5$</span>;<br />
gives <span class="math-container">$x + 2y = 2$</span>.</p>
</blockquote>
<p>I just subtracted the first equation from the second, which gives $2x = 4$, i.e. $x = 2$; I don't know how to proceed thereafter. Please help. Thanks.</p>
|
<p>Having found the value of $x$, substitute it into the third equation,</p>
<p>$2x + 3y = 5$</p>
<p>Putting $x=2$ in the above equation,</p>
<p>$4+3y = 5$</p>
<p>$3y = 1$</p>
<p>$y=\frac 13$</p>
<p>Then put the values of $x$ and $y$ into the claimed equation $x+2y=2$: since $2+\frac 23=\frac 83\neq 2$, they don't satisfy it, so the statement is false.</p>
| 60
|
linear-algebra
|
To find factor of a polynomial equation
|
https://math.stackexchange.com/questions/2205323/to-find-factor-of-a-polynomial-equation
|
<blockquote>
<p>One of the factors of <span class="math-container">$4x^2+y^2+14x-7y-4xy+12$</span> is equal to</p>
<ol>
<li><p><span class="math-container">$2x-y+4$</span></p>
</li>
<li><p><span class="math-container">$2x-y-3$</span></p>
</li>
<li><p><span class="math-container">$2x+y-4$</span></p>
</li>
<li><p><span class="math-container">$2x-y+3$</span></p>
</li>
</ol>
</blockquote>
<p>Step <span class="math-container">$1$</span>:
<span class="math-container">$4x^2+y^2-4xy$</span> can be simplified as <span class="math-container">$(2x-y)^2$</span></p>
<p>Step <span class="math-container">$2$</span>:
<span class="math-container">$14x-7y$</span> can be simplified as <span class="math-container">$7(2x-y)$</span></p>
<p>and finally</p>
<p><span class="math-container">$(2x-y) (2x-y+7) + 12$</span></p>
<p>I was only able to factor to this extent; I can't arrive at the answer.</p>
<p>The answer is given in the book: it states that <span class="math-container">$4x^2+y^2+14x-7y-4xy+12$</span> is the product of <span class="math-container">$(2x-y+3)$</span> and <span class="math-container">$(2x-y+4)$</span>. I need the steps.</p>
|
<p>You can do $2x-y=k$ and then</p>
<p>$$k^2+7k+12=(k+3)(k+4)$$</p>
<p>and then you get</p>
<p>$$(2x-y+3)(2x-y+4)$$</p>
| 61
|
linear-algebra
|
How do I show that an equation has a solution orthogonal to the nullspace?
|
https://math.stackexchange.com/questions/2041229/how-do-i-show-that-an-equation-has-a-solution-orthogonal-to-the-nullspace
|
<p>This was a question on a recent linear algebra midterm, and I had no idea where to start.</p>
<blockquote>
<p>Fix an <span class="math-container">$m\times n$</span> matrix <span class="math-container">$A$</span> and a column vector <span class="math-container">$\mathbf{b}$</span> of size <span class="math-container">$m$</span>. Assume that <span class="math-container">$A\mathbf{x}=\mathbf{b}$</span> is consistent. Show that <span class="math-container">$A\mathbf{x}=\mathbf{b}$</span> has a solution <span class="math-container">$\mathbf{x}_0$</span> that is orthogonal to the nullspace <span class="math-container">$\mathbf{N}(A)$</span>.</p>
<p>Hint: start with any solution and modify it to get one orthogonal to <span class="math-container">$\mathbf{N}(A)$</span>.</p>
</blockquote>
<p>I thought that I should be doing something with the row space of the matrix since it's perpendicular to the nullspace, but I didn't know where to go from there, so I'm not sure if that was the right way to go. Can anyone point me in the right direction?</p>
|
<p>Start with a solution $x_0$, then write $x_0$ as
$$
x_0 = \hat{x_0} + z,
$$
where $\hat{x_0} = \mathrm{Proj}_{Nul A} (x_0)$ is the orthogonal projection of $x_0$ to $Nul A$. </p>
<p>Then for any $y\in Nul A$,
$$
y \cdot z = 0,
$$
and since $\hat{x_0} \in Nul A$ we also have $Az = A x_0 - A\hat{x_0} = b - 0 = b$.
The vector $z$ is the solution you want. </p>
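<p>A small numeric illustration of this recipe (the matrix, right-hand side, and particular solution are assumed examples, not from the question):</p>

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])
b = np.array([1., 2.])

x0 = np.array([1., 2., 0.])      # some particular solution: A @ x0 == b
n = np.array([1., 1., -1.])      # spans Nul A for this particular A
n = n / np.linalg.norm(n)

z = x0 - (x0 @ n) * n            # subtract the projection of x0 onto Nul A
assert np.allclose(A @ z, b)     # z is still a solution
assert abs(z @ n) < 1e-12        # and it is orthogonal to the nullspace
```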
| 62
|
linear-algebra
|
Hermitian operators $\langle Av,v\rangle=0$ for all $v\in V$ then $A=0$ proof
|
https://math.stackexchange.com/questions/2383614/hermitian-operators-langle-av-v-rangle-0-for-all-v-in-v-then-a-0-proof
|
<blockquote>
<p>Theorem: Let <span class="math-container">$V$</span> be as before. If <span class="math-container">$A$</span> is an operator such that <span class="math-container">$\langle Av,v\rangle=0$</span> for all <span class="math-container">$v\in V$</span> then <span class="math-container">$A=0$</span>.</p>
</blockquote>
<p>Proof: The left hand side of the polarization identity is equal to <span class="math-container">$0$</span> for all <span class="math-container">$v,w\in V$</span>. Hence we obtain</p>
<p><span class="math-container">$\langle Aw,v\rangle+\langle Av,w\rangle=0$</span></p>
<p>for all <span class="math-container">$v,w\in V$</span>.Replace <span class="math-container">$v$</span> by <span class="math-container">$iv$</span>. Then by the rules for the hermitian product, we obtain</p>
<p><span class="math-container">$-i\langle Aw,v\rangle+i\langle Av,w\rangle=0$</span></p>
<p>whence</p>
<p><span class="math-container">$-\langle Aw,v\rangle+\langle Av,w\rangle=0$</span></p>
<p>Adding this to the first relation obtained above yields</p>
<p><span class="math-container">$2\langle Av,w\rangle=0$</span></p>
<p>whence <span class="math-container">$\langle Av,w\rangle=0$</span>. Hence <span class="math-container">$A=0$</span>,as was to be shown.<span class="math-container">$\blacksquare $</span> <em><strong>Linear Algebra, by Serge Lang.</strong></em></p>
<blockquote>
<p>Polarization identity:</p>
<p><span class="math-container">$\langle A(v+w),v+w\rangle-\langle A(v-w),v-w\rangle=2[\langle Av,w\rangle+\langle Aw,v\rangle]$</span></p>
<p>for all <span class="math-container">$v,w\in V$</span>, or also</p>
<p><span class="math-container">$\langle A(v+w),v+w\rangle- \langle A(v),v\rangle-\langle A(w),w\rangle=\langle Av,w\rangle+\langle Aw,v\rangle$</span></p>
</blockquote>
<p>I am not understanding this proof.</p>
<p><strong>Questions:</strong></p>
<p><strong>1)</strong> <span class="math-container">$\langle Av,v\rangle=0$</span> Why is not implicit that <span class="math-container">$A=0$</span>? Why is not straightforward that <span class="math-container">$\langle Av,w\rangle=0$</span>?</p>
<p><strong>2)</strong> "Adding this to the first relation obtained above yields <span class="math-container">$2\langle Av,w\rangle=0$</span>" How does the author delivers this conclusion? What relation is the author referring to?</p>
<p>Thanks in advance!</p>
|
<p>(1) In matrix terms, and working
over the reals, we have the hypothesis $v^t A v=0$ for all vectors
$v$. This does <strong>not</strong> imply $v^t Aw=0$ for all $v$ and $w$. Consider
$\pmatrix{0&1\\-1&0}$. To get this implication we need symmetry. Here the
analogue of symmetry is the Hermitian condition.</p>
<p>(2)
$$2\left<Av,w\right>
=(\left<Aw,v\right>+\left<Av,w\right>)
+(-\left<Aw,v\right>+\left<Av,w\right>).$$</p>
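<p>The polarization identity used in the proof holds for any operator $A$ (only sesquilinearity of the product is needed); a random numeric check (a sketch, with the product taken linear in the first slot as in Lang):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

ip = lambda x, y: np.vdot(y, x)    # Hermitian product, linear in the first slot

lhs = ip(A @ (v + w), v + w) - ip(A @ (v - w), v - w)
rhs = 2 * (ip(A @ v, w) + ip(A @ w, v))
assert np.isclose(lhs, rhs)        # polarization identity for an arbitrary A
```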
| 63
|
linear-algebra
|
Direct Sum Of Two Subspace Of $\mathbb{R}^{2\times 2}$
|
https://math.stackexchange.com/questions/2567858/direct-sum-of-two-subspace-of-mathbbr2-times-2
|
<blockquote>
<p>Let <span class="math-container">$V=\mathbb{R}^{2\times 2}$</span> and define the subspaces</p>
<p><span class="math-container">$$U=\left\{\begin{pmatrix} a&0\\ 0&d \end{pmatrix}: a,d\in \mathbb{R}\right\}$$</span></p>
<p><span class="math-container">$$W=\left\{\begin{pmatrix} a&b\\ c&d \end{pmatrix}: a+c=0\:{\rm and}\:b+d=0\right\}$$</span></p>
<p>Prove that <span class="math-container">$V=U\oplus W$</span></p>
</blockquote>
<p>I got everything covered it boil down to finding a matrix <span class="math-container">$A\in V$</span> such that <span class="math-container">$A=U+W$</span> but I can not find one, it tried be</p>
<p><span class="math-container">$$\begin{pmatrix} 0&0\\ 0&0 \end{pmatrix}+\begin{pmatrix} a&b\\ -a&-b \end{pmatrix}=\begin{pmatrix} a&b\\ c&d \end{pmatrix}$$</span></p>
|
<p>I'm not sure what you mean by "everything covered" - what exactly have you done?</p>
<p>Note that an element of $W$ is of the form (using $a+c=0$ and $b+d=0$):
$$\begin{pmatrix} -c &b\\ c& -b \end{pmatrix}$$
So a matrix in $U+W$ has the form:
$$\begin{pmatrix} a-c &b\\ c& d-b \end{pmatrix}$$
From this it should be clear that:
$$\color{blue}{\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}}=\begin{pmatrix} a-c &b\\ c& d-b \end{pmatrix}$$
where the blue matrix is an arbitrary element of $V$, always has a <em>unique solution</em> for $a,b,c,d$.</p>
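<p>Reading off the unique solution: $c=\gamma$, $b=\beta$, $a=\alpha+\gamma$, $d=\delta+\beta$. A numeric sketch with an arbitrary matrix (my own example):</p>

```python
import numpy as np

M = np.array([[5., -2.],
              [7.,  3.]])                     # an arbitrary element of V

c, b = M[1, 0], M[0, 1]                       # forced by the shape of W
a, d = M[0, 0] + c, M[1, 1] + b               # forced by the shape of U
U_part = np.diag([a, d])
W_part = np.array([[-c, b], [c, -b]])

assert np.allclose(U_part + W_part, M)        # M = U_part + W_part
assert np.allclose(W_part.sum(axis=0), 0)     # a+c = 0 and b+d = 0 in W_part
```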
| 64
|
linear-algebra
|
Intersection of two polynomial subspaces
|
https://math.stackexchange.com/questions/2570880/intersection-of-two-polynomial-subspaces
|
<p>I'm working through the following problem.</p>
<blockquote>
<p>Let <span class="math-container">$U = \{ p \in \mathbb{P}_3 : p(1) = 0 \}$</span> and <span class="math-container">$V = \{ p \in \mathbb{P}_3 : p(-1) = 0 \}$</span>. Here, <span class="math-container">$\mathbb{P}_3$</span> represents the space of polynomials of at most degree 3.</p>
<p>What are the dimension of <span class="math-container">$U$</span> and <span class="math-container">$V$</span> respectively?</p>
<p>Determine a basis for the subspace <span class="math-container">$U \cap V$</span>.</p>
<p>Determine <span class="math-container">$U + V$</span>.</p>
</blockquote>
<p>How does one approach this problem? I chose the basis for <span class="math-container">$\mathbb{P}_3$</span>, <span class="math-container">$\beta = \{x^3, x^2, x, 1 \}$</span>. My instinct is <span class="math-container">$dim(U) = 3$</span> and <span class="math-container">$dim(V) = 3$</span> because looking at <span class="math-container">$\beta$</span>, the constant polynomial basis vector will never have a non-zero coefficient on its own in <span class="math-container">$U$</span> or <span class="math-container">$V$</span>. But how do I show this?</p>
|
<p>Any polynomial in $\mathbb{P}_3$ is of the form $ax^3+bx^2+cx+d$ where $a,b,c,d\in\mathbb{R}$. If $p(1)=0$, then $a+b+c+d=0$, so we can write $d=-a-b-c$, and hence polynomials in $U$ are of the form $ax^3+bx^2+cx-a-b-c=a(x^3-1)+b(x^2-1)+c(x-1)$. The dimension of $U$ is therefore $3$, since $\{x^3-1,x^2-1,x-1\}$ is a basis for $U$. Showing $\dim V=3$ is similar. </p>
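<p>A quick symbolic check of this basis; the last two lines also test $\{x^3-x,\,x^2-1\}$, a natural candidate basis for $U\cap V$ (polynomials divisible by $(x-1)(x+1)$), which is my addition rather than part of the quoted answer:</p>

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
basis_U = [x**3 - 1, x**2 - 1, x - 1]
assert all(p.subs(x, 1) == 0 for p in basis_U)          # each basis element lies in U

p = a*basis_U[0] + b*basis_U[1] + c*basis_U[2]
assert sp.expand(p.subs(x, 1)) == 0                     # so does every combination

cap = [x**3 - x, x**2 - 1]                              # candidate basis for U ∩ V
assert all(q.subs(x, 1) == 0 and q.subs(x, -1) == 0 for q in cap)
```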
| 65
|
linear-algebra
|
Is $V$ a vector space over $R$ under these two operations?
|
https://math.stackexchange.com/questions/2097193/is-v-a-vector-space-over-r-under-these-two-operations
|
<p>Let <span class="math-container">$V=\{(a,b) : a,b \in \mathbb{R}\}$</span></p>
<p>Is <span class="math-container">$V$</span> a vector space over <span class="math-container">$\mathbb{R}$</span> under:</p>
<p><strong>Addition</strong>: <span class="math-container">$(a_1,a_2)+(b_1,b_2)=(a_1+b_2,a_2+b_1)$</span></p>
<p><span class="math-container">$(a_1,b_1,a_2,b_2 \in\mathbb{R})$</span></p>
<p><strong>Scalar multiplication</strong>: <span class="math-container">$t(a,b)=(ta,tb)$</span></p>
<p><span class="math-container">$(t,a,b \in\mathbb{R})$</span></p>
<p>There exists an element <span class="math-container">$0$</span>, so I think it's a vector space.</p>
<p>Am I right?</p>
| 66
|
|
linear-algebra
|
How to find The Roots of Orthogonal polynomial equation
|
https://math.stackexchange.com/questions/2102294/how-to-find-the-roots-of-orthogonal-polynomial-equation
|
<blockquote>
<p><span class="math-container">$P(z)$</span>, with roots <span class="math-container">$z_j$</span>'s for <span class="math-container">$0\leq j\leq a-1$</span>.</p>
<p><span class="math-container">$$P(z)=z^a+c_{a-1}z^{a-1}+\ldots+c_1z+c_0.$$</span></p>
</blockquote>
<p>I want to find the Roots of orthogonal polynomial equation but I don't know which method should I used.</p>
| 67
|
|
linear-algebra
|
Prove that the union of two bases in different subspaces is a basis for vector space
|
https://math.stackexchange.com/questions/2585101/prove-that-the-union-of-two-bases-in-different-subspaces-is-a-basis-for-vector-s
|
<p>This is a question from Finite-Dimensional Linear Algebra by Mark S. Gockenbach page 72 (Exercise 2.7.14). I hope to check my proof. Thank you.</p>
<blockquote>
<p>Let <span class="math-container">$V$</span> be an <span class="math-container">$n$</span>-dimensional vector space over a field <span class="math-container">$F$</span>, and suppose <span class="math-container">$S$</span> and <span class="math-container">$T$</span> are subspaces of <span class="math-container">$V$</span> satisfying <span class="math-container">$S \cap T = \{0\}$</span>. Suppose that <span class="math-container">$\{s_1,...,s_k \}$</span> is a basis for <span class="math-container">$S$</span>, <span class="math-container">$\{t_1,...,t_l \}$</span> is a basis for <span class="math-container">$T$</span>, and <span class="math-container">$k+l = n$</span>.</p>
<p>Prove that <span class="math-container">$\{s_1,...,s_k,t_1,...,t_l \}$</span> is a basis for <span class="math-container">$V$</span>.</p>
</blockquote>
<p>My proof :</p>
<p>Suppose that <span class="math-container">$\alpha_1 s_1 +...+ \alpha_k s_k + \alpha_{k+1}t_1 +...+ \alpha_{n}t_l = 0$</span> for some <span class="math-container">$\{ \alpha_i\}_{i=1}^{n} \subset F$</span></p>
<p>Then <span class="math-container">$\alpha_{k+1}t_1 + ... + \alpha_{n} t_l = -(\alpha_1 s_1 + ... + \alpha_k s_k) \in \text{span} \{ s_1,...,s_k \}= S$</span>.</p>
<p>Also, <span class="math-container">$\alpha_{k+1}t_1 + ... + \alpha_{n} t_l \in \text{span} \{ t_1,...,t_l \} = T $</span>.</p>
<p>By <a href="https://math.stackexchange.com/questions/2583902/the-union-of-two-linearly-independent-subsets?noredirect=1#comment5333618_2583902">Exercise 2.5.15</a>, <span class="math-container">$\{s_1,...,s_k,t_1,...,t_l \}$</span> is a linearly independent set with <span class="math-container">$n$</span> vectors.</p>
<p>Since <span class="math-container">$\text{dim}(V)=n$</span>, any set of <span class="math-container">$n+1$</span> vectors is linearly dependent.</p>
<p>If there exists <span class="math-container">$v \notin \text{span} \{s_1,...,s_k,t_1,...,t_l \} $</span>, then we could add that vector <span class="math-container">$v$</span> to the set to obtain a set of <span class="math-container">$n+1$</span> linearly independent vectors.</p>
<p>However, since <span class="math-container">$\text{dim}(V)=n$</span>, any set of <span class="math-container">$n+1$</span> vectors is linearly dependent.</p>
<p>This shows that every <span class="math-container">$v \in V$</span> belongs to <span class="math-container">$\text{span} \{s_1,...,s_k,t_1,...,t_l \} $</span>.</p>
<p>Therefore, <span class="math-container">$\text{span} \{s_1,...,s_k,t_1,...,t_l \} = V$</span> and is therefore a basis for <span class="math-container">$V$</span>. <span class="math-container">$\blacksquare$</span></p>
|
<p>Since <span class="math-container">$S \cap T = \{0\}$</span>, the union of the two bases is linearly independent (this is exactly the cited Exercise 2.5.15). Being <span class="math-container">$n$</span> linearly independent vectors in an <span class="math-container">$n$</span>-dimensional space, they span <span class="math-container">$V$</span> and hence form a basis for <span class="math-container">$V$</span>.</p>
| 68
|
linear-algebra
|
Nontrivial solution for Ax=0 and Ax=b determine by pivot positions
|
https://math.stackexchange.com/questions/2230419/nontrivial-solution-for-ax-0-and-ax-b-determine-by-pivot-positions
|
<blockquote>
<p>A is a 3x2 matrix with two pivot positions.</p>
<p>(a) does the equation Ax=0 have a nontrivial solution</p>
</blockquote>
<p>Since the two pivot positions will create 0 in the entire column in which they are present and 1 in its own position in reduced row echelon form and the rightmost column is all 0 therefore Ax=0 has no nontrivial solution</p>
<blockquote>
<p>(b) does the equation Ax=b have atleast one solution for every possible b?</p>
</blockquote>
<p>In the reduced row form b should have a [* * 0] form then only a unique non trivial solution exists</p>
<p>Is this correct and does it sound mathematical?</p>
|
<p>Your answer to (a) looks good. Question (b) can be asked alternatively as: can $\mathbb{R}^3$ be spanned by only two vectors in $\mathbb{R}^3$? </p>
| 69
|
linear-algebra
|
$\langle Av_1,Av_2\rangle=ac\langle v_1,v_1\rangle+bd\langle v_2,v_2\rangle$?
|
https://math.stackexchange.com/questions/2398483/langle-av-1-av-2-rangle-ac-langle-v-1-v-1-ranglebd-langle-v-2-v-2-rangle
|
<blockquote>
<p>Define a rotation of <span class="math-container">$V$</span> to be a real unitary map <span class="math-container">$A$</span> of <span class="math-container">$V$</span> whose determinant is 1. Show that the matrix of <span class="math-container">$A$</span> relative to an orthogonal basis of <span class="math-container">$V$</span> is of type</p>
<p><span class="math-container">$\begin{bmatrix}a&-b\\b&a\end{bmatrix}$</span></p>
<p>for some real numbers <span class="math-container">$a,b$</span> such that <span class="math-container">$a^2+b^2=1$</span>.</p>
</blockquote>
<p>SOLUTION. Let <span class="math-container">$\{v_1,v_2\}$</span> be an orthogonal basis for <span class="math-container">$V$</span>. Let <span class="math-container">$w_i=Av_i$</span> and</p>
<p><span class="math-container">$w_1=av_1+bv_2$</span></p>
<p><span class="math-container">$w_2=cv_1+dv_2$</span></p>
<p>The matrix representing <span class="math-container">$V$</span> in the chosen basis is</p>
<p><span class="math-container">$\begin{bmatrix}a&c\\b&d\end{bmatrix}$</span>.</p>
<p>Then, since <span class="math-container">$\langle Av_i,Av_i\rangle=\langle v_i,v_i\rangle$</span> we have</p>
<p><span class="math-container">$(a^2-1)\langle v_1,v_1\rangle + b^2\langle v_2,v_2\rangle=0$</span></p>
<p><span class="math-container">$(c^2)\langle v_1,v_1\rangle + (d^2-1)\langle v_2,v_2\rangle=0$</span></p>
<p>But <span class="math-container">$dw_1-bw_2=(ad-bc)v_1=v_1$</span>,so</p>
<p><span class="math-container">$\langle v_1,v_1\rangle=\langle A(dv_1-bv_2),A(dv_1-bv_2)\rangle=d^2\langle v_1,v_1\rangle + b^2\langle v_2,v_2\rangle$</span>,</p>
<p>thus implies <span class="math-container">$a^2=d^2$</span> and <span class="math-container">$b^2=c^2$</span>. Moreover,</p>
<p><span class="math-container">$0=\langle v_1,v_2\rangle=\langle Av_1,Av_2\rangle=ac\langle v_1,v_1\rangle+bd\langle v_2,v_2\rangle$</span>,</p>
<p>so <span class="math-container">$ac$</span> and <span class="math-container">$bd$</span> are of opposite signs and therefore the matrix <span class="math-container">$A$</span> has the desired form.<em><strong>Solutions Manual for Lang's Linear Algebra, Rami Shakarchi</strong></em></p>
<p>I have spent a lot of time on this exercise and for some reason it does not look intuitive to me.I have a bounty on another question on this exercise, if you want to check out here is the link <a href="https://math.stackexchange.com/questions/2392619/orthogonal-basis-transformation-matrix-type">Orthogonal basis transformation matrix type</a></p>
<p><strong>Question</strong>:</p>
<p>How can I derive this expression <span class="math-container">$\langle Av_1,Av_2\rangle=ac\langle v_1,v_1\rangle+bd\langle v_2,v_2\rangle$</span>? What step did the author give? I tried to multiply the matrix <span class="math-container">$A^tA$</span> by the basis <span class="math-container">$\{v_1,v_2\}$</span> and unsuccessfully I got nothing that looks like the expression given.</p>
<p>Thanks in advance</p>
|
<p>Note that the scalar product is bilinear and:
$$\langle Av_1,Av_2\rangle =\langle w_1,w_2 \rangle =\langle av_1+bv_2,cv_1+dv_2 \rangle =\langle av_1,cv_1+dv_2\rangle+\langle bv_2,cv_1+dv_2\rangle = ac\langle v_1,v_1\rangle +ad\langle v_1,v_2\rangle+bc\langle v_2,v_1\rangle +bd\langle v_2,v_2\rangle.$$
Since $\langle v_1,v_2\rangle=\langle v_2,v_1\rangle=0$, the result follows.</p>
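<p>A numeric sanity check of this expansion, with an assumed orthogonal (not orthonormal) basis and arbitrary coefficients:</p>

```python
import numpy as np

v1 = np.array([2., 0.])            # an orthogonal but not orthonormal basis
v2 = np.array([0., 3.])
a, b, c, d = 0.6, 0.8, -0.8, 0.6   # arbitrary coefficients

w1 = a * v1 + b * v2
w2 = c * v1 + d * v2
lhs = w1 @ w2                      # <Av1, Av2> = <w1, w2>
rhs = a * c * (v1 @ v1) + b * d * (v2 @ v2)
assert np.isclose(lhs, rhs)        # cross terms vanish since <v1, v2> = 0
```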
| 70
|
linear-algebra
|
Determinant of a symmetric matrix a quadratic form proof
|
https://math.stackexchange.com/questions/2401585/determinant-of-a-symmetric-matrix-a-quadratic-form-proof
|
<blockquote>
<p>Let <span class="math-container">$V$</span> be the vector space over <span class="math-container">$\mathbb{R}$</span> of <span class="math-container">$2\times 2$</span> real symmetric matrices.
Show that the function <span class="math-container">$f$</span> on <span class="math-container">$V$</span> such that <span class="math-container">$f(A)=\det(A)$</span> is a quadratic form.</p>
</blockquote>
<p>I tried to solve this question in the following way:</p>
<p>A quadratic form is defined as <span class="math-container">$g(v,v)=f(v)$</span> in which <span class="math-container">$g$</span> is a bilinear form.
<span class="math-container">$f:V\to \mathbb{R}$</span></p>
<p>I tried to use the bilinear form definition:</p>
<blockquote>
<p>Let K be a field and <span class="math-container">$V,W$</span> vector spaces over <span class="math-container">$K$</span>. A map <span class="math-container">$g:V\times W\to K$</span> is said to be bilinear if it satisfies the following properties:</p>
<p>BI 1. For all <span class="math-container">$v_1,v_2\in V$</span> and <span class="math-container">$w\in W$</span> we have</p>
<p><span class="math-container">$g(v_1+v_2,w)=g(v_1,w)+g(v_2,w)$</span></p>
<p>and for all <span class="math-container">$v\in V$</span> and <span class="math-container">$w_1,w_2\in W$</span> we have</p>
<p><span class="math-container">$g(v,w_1+w_2)=g(v,w_1)+g(v,w_2)$</span></p>
<p>BI 2. For all <span class="math-container">$c\in K$</span>,<span class="math-container">$v\in V$</span> and <span class="math-container">$w\in W$</span>,</p>
<p><span class="math-container">$g(cv,w)=cg(v,w)=g(v,cw)$</span>.</p>
</blockquote>
<p>Taking the example <span class="math-container">$A=\begin{bmatrix}x&y\\y&z\end{bmatrix}$</span></p>
<p><span class="math-container">$f(cA)=cxcz-cycy=c^2(xz-y^2)=c^2f(A)\neq cf(A)$</span></p>
<p><strong>Question</strong>:</p>
<p><strong>1)</strong> What is wrong with my attempt of proof?</p>
<p><strong>2)</strong> How can I prove <span class="math-container">$\det(A)$</span> is a quadratic form?</p>
|
<p>If you have a quadratic form over the real numbers, in finite dimension, the matrix is <strong>half the Hessian matrix of second partial derivatives</strong>. This time</p>
<p>$$
\left(
\begin{array}{rrr}
0 & 0 & \frac{1}{2} \\
0 & -1 & 0 \\
\frac{1}{2} & 0 & 0
\end{array}
\right)
$$</p>
<p>You should (carefully) do the multiplication</p>
<p>$$
\left(
\begin{array}{rrr}
x & y & z \\
\end{array}
\right)
\left(
\begin{array}{rrr}
0 & 0 & \frac{1}{2} \\
0 & -1 & 0 \\
\frac{1}{2} & 0 & 0
\end{array}
\right)
\left(
\begin{array}{r}
x \\ y \\ z
\end{array}
\right)
$$</p>
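<p>The suggested multiplication can be carried out symbolically (a sympy sketch of the check, using the coordinates $(x,y,z)$ of a symmetric matrix):</p>

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Matrix([[x, y], [y, z]]).det()        # x*z - y**2

v = sp.Matrix([x, y, z])
M = sp.Matrix([[0, 0, sp.Rational(1, 2)],
               [0, -1, 0],
               [sp.Rational(1, 2), 0, 0]])

assert sp.expand((v.T * M * v)[0] - f) == 0  # v^T M v reproduces det
assert 2 * M == sp.hessian(f, (x, y, z))     # M is half the Hessian of f
```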
| 71
|
linear-algebra
|
Showing there is a unique basis $\{p_1, p_2, p_3\}$ of $P_2(R)$ with certain properties
|
https://math.stackexchange.com/questions/1949874/showing-there-is-a-unique-basis-p-1-p-2-p-3-of-p-2r-with-certain-pro
|
<blockquote>
<p><span class="math-container">$P_2(R)$</span> is the set of polynomials of degree two or lower.</p>
<p>Show that there is a unique basis <span class="math-container">$\{p_1, p_2, p_3\}$</span> of <span class="math-container">$P_2(R)$</span> with the property that <span class="math-container">$p_1(0) = 1, p_1(1) = p_1(2) = 0, p_2(1) = 1, p_2(0) = p_2(2) = 0$</span> and <span class="math-container">$p_3(2) = 1, p_3(0) = p_3(1) = 0$</span>.</p>
</blockquote>
<p>Stuck here. I know a proof might look something like "Suppose there were two different bases..." and then showing they must be the same, but I'm actually stuck on part of finding what one basis would be in the first place.</p>
|
<p>You are asked to prove two things:</p>
<ul>
<li>Existence: there is such a basis, and</li>
<li>Uniqueness: there is at most one such basis.</li>
</ul>
<p>The proof you describe is only for proving uniqueness. Uniqueness is almost always easier to prove than existence.</p>
<p>To prove such a basis exists, you need to show</p>
<ol>
<li>That there are $3$ polynomials $p_1, p_2, p_3$ in $P_2(\Bbb R)$ such that $$\begin{array} {ccc} p_1(0) = 1 & p_1(1) = 0 & p_1(2) = 0\\p_2(0) = 0 & p_2(1) = 1 & p_2(2) = 0\\p_3(0) = 0 & p_3(1) = 0 & p_3(2) = 1\end{array}$$</li>
<li>That $\{p_1, p_2, p_3\}$ is linearly independent</li>
<li>That the span of $\{p_1, p_2, p_3\}$ is all of $P_2(\Bbb R)$.</li>
</ol>
<p>So start with this: if $p \in P_2(\Bbb R)$, then $p(x) = ax^2 + bx + c$ for some $a, b, c$. If $p(0) = r, p(1) = s, p(2) = t$, then we have the following linear system of equations in the three unknowns $a, b, c$:
$$p(0) = a\cdot 0 + b \cdot 0 + c = r\\p(1) = a\cdot 1 + b \cdot 1 + c = s\\p(2) = a\cdot 4 + b \cdot 2 + c = t$$
This system is non-degenerate and so has a unique solution for any given $r, s, t$, which you should be able to easily produce.</p>
<p>This gives you directly the existence of the three polynomials $p_1, p_2, p_3$. It also shows that every polynomial in $P_2(\Bbb R)$ is in their span, for if $q \in P_2(\Bbb R)$, then $q$ is the unique polynomial $p$ above when $r = q(0), s = q(1), t = q(2)$. But note that $q(0)p_1 + q(1)p_2 + q(2)p_3$ is also a polynomial in $P_2(\Bbb R)$ with the same values at $x = 0,1,2$. Hence it must be $q$.</p>
<p>That $p_1, p_2, p_3$ are linearly independent is obvious, as any linear combination of $p_2$ and $p_3$ must be $0$ at $x = 0$ and so cannot be $p_1$. Similarly $x = 1$ and $x = 2$ show that $p_2$ and $p_3$ cannot be written as linear combinations of the other two either.</p>
<p>Lastly, if $q_1, q_2, q_3$ were another such basis, then note that the polynomials $q_1 - p_1, q_2 - p_2, q_3 - p_3$ each have the property that their values at $0, 1,$ and $2$ are all $0$. Apply the result above again to see that each of these polynomials must be identically $0$, which proves the uniqueness.</p>
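<p>The non-degenerate system above can be solved numerically to produce the three polynomials at once (a sketch; the $3\times 3$ matrix encodes evaluation at the nodes $0,1,2$):</p>

```python
import numpy as np

# rows encode p(0), p(1), p(2) for coefficients (a, b, c) of a x^2 + b x + c
V = np.array([[0., 0., 1.],
              [1., 1., 1.],
              [4., 2., 1.]])
coeffs = np.linalg.solve(V, np.eye(3))   # column i holds the coefficients of p_{i+1}

nodes = np.array([0., 1., 2.])
for i in range(3):
    p = np.poly1d(coeffs[:, i])
    assert np.allclose(p(nodes), np.eye(3)[i])   # p_i is 1 at its node, 0 at the others
```

For instance, the first column recovers $p_1(x) = \tfrac12(x-1)(x-2) = \tfrac12 x^2 - \tfrac32 x + 1$.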
| 72
|
linear-algebra
|
Eigenvalues of Linear Operator given by Conjugation by an Invertible Matrix
|
https://math.stackexchange.com/questions/1955403/eigenvalues-of-linear-operator-given-by-conjugation-by-an-invertible-matrix
|
<p>I am working on a review problem for comp/qual studying and I cannot figure it out. The hint provided seems to give some intuition, but I don't see how it generalizes.</p>
<blockquote>
<p>Let <span class="math-container">$A \in GL(n,\mathbb{C})$</span> be an <span class="math-container">$n \times n$</span> invertible matrix with distinct eigenvalues <span class="math-container">$\lambda_1,...,\lambda_n$</span>. Let <span class="math-container">$V$</span> be the <span class="math-container">$n^2$</span>-dimensional vectorspace of <span class="math-container">$n \times n$</span> matrices over <span class="math-container">$\mathbb{C}$</span> and consider the linear operator <span class="math-container">$T : V \to V$</span> given by <span class="math-container">$T(M) = A^{-1}MA$</span>. Find the eigenvalues of <span class="math-container">$T$</span>.</p>
<p>(Hint: reduce to the case where <span class="math-container">$A$</span> is a diagonal matrix).</p>
</blockquote>
<p>I believe that in the diagonal case, an eigenvalue is <span class="math-container">$1$</span>, but I don't see anymore. Furthermore, I don't know how to generalize that. I know that <span class="math-container">$A$</span> is diagonalizable because it has <span class="math-container">$n$</span> distinct eigenvalues and thus, <span class="math-container">$n$</span> distinct eigenvectors so that if <span class="math-container">$U$</span> is a matrix with all of the eigenvectors of <span class="math-container">$A$</span> as its columns, then <span class="math-container">$A = UDU^{-1}$</span> where <span class="math-container">$D$</span> is a diagonal matrix with the eigenvalues of <span class="math-container">$A$</span> along the diagonal. Then <span class="math-container">$$T(M) = A^{-1}MA = (UDU^{-1})^{-1} M UDU^{-1} = UD^{-1}U^{-1} M UDU^{-1}.$$</span> I am not quite sure where to go from there though, or if this is even a good approach at all. Can anyone point me in a good direction?</p>
|
<p><strong>Hint:</strong> Let $E_{ij}$ be the matrix whose only non-zero entry is $e_{ij} = 1$. Then $E_{ij}$ is an "eigenvector" of $T$, and there are $n^2$ linearly independent such matrices.</p>
<p>Now, let $T_A(M) = A^{-1}MA$. Note that if $A = SDS^{-1}$, then
$$
T_A = (T_S)^{-1} \circ T_D \circ T_S
$$</p>
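<p>The conclusion this hint leads to, namely that the eigenvalues of $T$ are the ratios $\lambda_i/\lambda_j$, can be checked numerically by vectorizing $T$ with a Kronecker product (a sketch; the identity assumes column-stacking $\operatorname{vec}$, and the random matrix is almost surely invertible with distinct eigenvalues):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
lam = np.linalg.eigvals(A)

# column-stacking vec gives vec(A^{-1} M A) = (A^T kron A^{-1}) vec(M)
T = np.kron(A.T, np.linalg.inv(A))
eig_T = np.linalg.eigvals(T)

ratios = [li / lj for li in lam for lj in lam]
for r in ratios:                   # every ratio lambda_i / lambda_j is an eigenvalue of T
    assert np.min(np.abs(eig_T - r)) < 1e-6
```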
| 73
|
linear-algebra
|
U ∪W is a subspace => U is a subset of W or W is a subset of U (given that U and W are subspaces)
|
https://math.stackexchange.com/questions/2114139/u-%e2%88%aaw-is-a-subspace-u-is-a-subset-of-w-or-w-is-a-subset-of-u-given-that-u-and
|
<p>I wrote a proof of this (and yes, I proved the opposite direction, but I don't have a question about that portion), and I just want to get confirmation that I am not missing anything -- or advice on how to clean it up if it needs that.</p>
<p>Here's the proof:</p>
<blockquote>
<p>Suppose W is not a subset of U.</p>
<p>Then there exists vector w belonging to W such that vector w does not ∈ U.</p>
<p>Since U∪W is a subspace, there exists vector u belonging to U such that (vector u + vector w ) ∈ U∪W.</p>
<p>then since W and U are each subspaces, (vector u + vector w)∈ W, and (vector u + vector w)∈ U.</p>
<p>Therefore U is a subset of W.</p>
</blockquote>
<p>Can I just end it there?</p>
| 74
|
|
linear-algebra
|
Minimum value of function $f(x, y)$ if $x$ and $y$ are real numbers and no other conditions are given
|
https://math.stackexchange.com/questions/2249488/minimum-value-of-function-fx-y-if-x-and-y-are-real-numbers-and-no-other
|
<p>So, I got this problem that's been bugging me. For a quick info, I'm a 12th grader in Indonesia. The problem was given by my teacher to evaluate my understanding on inequalities. Here is the problem:</p>
<blockquote>
<p>If <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are both real numbers, what is the minimum value of the following function?</p>
<p><span class="math-container">$$\sqrt {4+y^2} + \sqrt {(x-2)^2 +(2-y)^2} + \sqrt{(4-x)^2 + 1}$$</span></p>
</blockquote>
<p>My teacher told me to use either the Schwarz inequality or the RMS-AM-GM-HM inequality. I think that you're supposed to take the inequality to find the combination of <span class="math-container">$(x, y)$</span> where the inequality becomes equality. But other than that, I have no idea to proceed.</p>
<p>Thanks for anyone who has reserved their time to clarify this problem.</p>
<p>EDIT :
I've solved this problem.</p>
|
<p>Referring to the vector form of Minkowski's inequality (the triangle inequality for vectors), I find that:
$$\sqrt {4+y^2} + \sqrt {(x-2)^2 +(2-y)^2} + \sqrt{(4-x)^2 + 1}$$
can be broken down to :
$$\sqrt {2^2+y^2} + \sqrt {(x-2)^2 +(2-y)^2} + \sqrt{(4-x)^2 + 1^2} \ge{} \sqrt{(2 + x - 2 + 4 -x)^2 + (y + 2 - y +1)^2}$$ which can then be simplified to : $$\sqrt{4^2 + 3^2}$$ which is equal to 5.</p>
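<p>One way to see where equality holds (this geometric reading is my addition): the three radicals are the lengths of the path $(0,0)\to(2,y)\to(x,2)\to(4,3)$, so the minimum is the straight-line distance $\sqrt{4^2+3^2}=5$, attained when all four points are collinear, i.e. at $(x,y)=(8/3,\,3/2)$. A numeric sketch:</p>

```python
import math

def f(x, y):
    return (math.hypot(2, y)            # (0,0) -> (2,y)
            + math.hypot(x - 2, 2 - y)  # (2,y) -> (x,2)
            + math.hypot(4 - x, 1))     # (x,2) -> (4,3)

assert math.isclose(f(8/3, 3/2), 5.0)   # equality case on the straight line

# nearby points do no better than the straight-line distance 5
vals = [f(8/3 + dx, 3/2 + dy)
        for dx in (-0.1, 0.0, 0.1) for dy in (-0.1, 0.0, 0.1)]
assert all(v >= 5.0 - 1e-12 for v in vals)
```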
| 75
|
linear-algebra
|
Proof for vectors added to a subspace are equal iff the difference is in the subspace
|
https://math.stackexchange.com/questions/2662417/proof-for-vectors-added-to-a-subspace-are-equal-iff-the-difference-is-in-the-sub
|
<blockquote>
<p>For any vector space <span class="math-container">$V$</span>, subset <span class="math-container">$S \subseteq V $</span>, and vector <span class="math-container">$\vec{v} \in V$</span>, we define the set
<span class="math-container">$$\vec{v}+S = \{\vec{v} + \vec{x} : \vec{x}\in S\} $$</span>
Prove that <span class="math-container">$\vec{a}+W=\vec{b}+W$</span> iff <span class="math-container">$\vec{a} -\vec{b}\in W$</span>.</p>
</blockquote>
<p>What I have so far is:</p>
<blockquote>
<p>Assume <span class="math-container">$\vec{a}+W=\vec{b}+W$</span>. The a bases of the two are <span class="math-container">$\{\vec{a},\vec{a}+\vec{w_1}, ...,\vec{a}+\vec{w_n}\}$</span> and <span class="math-container">$\{\vec{b},\vec{b}+\vec{w_1}, ...,\vec{b}+\vec{w_n}\}$</span> respectively.</p>
<p>Let <span class="math-container">$\vec{w}\in \vec{a}+W$</span>. It can be written as a linear combination of the bases such that
<span class="math-container">$$ c_0(\vec{a}) + c_1(\vec{a}+\vec{w_1})+...c_n(\vec{a}+\vec{w_n}) = d_0(\vec{b}) + d_1(\vec{b}+\vec{w_1})+...d_n(\vec{b}+\vec{w_n})$$</span>
Then by subtracting the sides
<span class="math-container">$$ (c_0 + ... +c_n)\vec{a} - (d_0 + ... +d_n)\vec{b} + (c_1 - d_1)\vec{w_1}+...+(c_n-d_n)\vec{w_n}=0$$</span>
Then isolating the first two terms
<span class="math-container">$$ (c_0 + ... +c_n)\vec{a} - (d_0 + ... +d_n)\vec{b} =(d_1 - c_1)\vec{w_1}+...+(d_n-c_n)\vec{w_n}$$</span></p>
</blockquote>
<p>So clearly, some linear combination of <span class="math-container">$\vec{a}$</span> and <span class="math-container">$\vec{b}$</span> are in <span class="math-container">$W$</span>, but how can I conclude that <span class="math-container">$\vec{a}-\vec{b}$</span> is also in <span class="math-container">$W$</span> since the two terms have different coefficients so I can't factor it out?</p>
<p>Follow up to Martin's comment:</p>
<p>If <span class="math-container">$\vec{a} \in \vec{b} +W$</span>, then for any <span class="math-container">$\vec{x} \in \vec{a}+W$</span>, it can be written as <span class="math-container">$\vec{x} = \vec{a} + \vec{w}, \vec{w} \in W$</span> by definition of adding a vector to a subspace. Since <span class="math-container">$\vec{a}, \vec{w} \in \vec{b}+W$</span>, <span class="math-container">$\vec{x} \in \vec{b}+W$</span>.</p>
|
<p>You are making it too complicated. If $a+W=b +W$, then there exist $u,v\in W$ such that $a+u=b+v$. So $a-b=v-u\in W$.</p>
<p>Conversely, if $a-b\in W$, there exists $w\in W$ with $a-b=w$. Then $a=b+w\in b+W$, so $a+W\subseteq b+W$; the reverse inclusion follows by reversing roles.</p>
| 76
|
linear-algebra
|
Triangulation of a matrix and the eigenvalues, right?
|
https://math.stackexchange.com/questions/2415203/triangulation-of-a-matrix-and-the-eigenvalues-right
|
<blockquote>
<p>Find the characteristic polynomial,eigenvalues, and bases for the eigenspaces of the following matrices.</p>
<p><span class="math-container">$\begin{bmatrix}4&0&1\\-2&1&0\\-2&0&1\end{bmatrix}$</span></p>
</blockquote>
<p>We know that <span class="math-container">$\det(tI-A)=0$</span>, if A is a matrix.</p>
<p><span class="math-container">$A=\begin{bmatrix}4&0&1\\-2&1&0\\-2&0&1\end{bmatrix}=\begin{bmatrix}6&0&0\\-2&1&0\\-2&0&1\end{bmatrix}$</span> by subtracting the last row to the first.</p>
<p>Since we are dealing with a triangular matrix</p>
<p><span class="math-container">$tI-A=\begin{bmatrix}t-6&0&0\\-2&t-1&0\\-2&0&t-1\end{bmatrix}$</span></p>
<p><span class="math-container">$\det(tI-A)=(t-6)(t-1)(t-1)$</span></p>
<p>According to the Solution of <em><strong>Solution Manual for Lang's Linear Algebra, Rami Shakarchi:</strong></em></p>
<p>we have</p>
<blockquote>
<p><span class="math-container">$\det(tI-A)=(t-4)(t-1)^2+2(t-1)=(t-1)[t^2-5t+6]$</span></p>
<p>so the eigenvalues are <span class="math-container">$1,2$</span> and <span class="math-container">$3$</span>.</p>
</blockquote>
<p><strong>Question:</strong></p>
<p>Assuming that the solution is right: what am I doing wrong? Can't I reduce the matrix to triangular form first and read the eigenvalues off the diagonal? Is that valid?</p>
<p>Thanks in advance!</p>
|
<p>The eigenvalues of a matrix are not invariant under elementary row operations. They are however invariant under similarity transformation. </p>
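<p>A quick numpy illustration (using the matrix from the question, with an arbitrary invertible <span class="math-container">$P$</span> chosen for the similarity check): the row operation changes the spectrum, while a similarity transform does not.</p>

```python
import numpy as np

A = np.array([[4., 0., 1.],
              [-2., 1., 0.],
              [-2., 0., 1.]])

# The row operation R1 <- R1 - R3 from the question:
B = A.copy()
B[0] -= B[2]          # B = [[6,0,0],[-2,1,0],[-2,0,1]]

print(np.sort(np.linalg.eigvals(B).real))   # [1. 1. 6.], changed!
print(np.sort(np.linalg.eigvals(A).real))   # ~ [1. 2. 3.]

# A similarity transform P^{-1} A P does preserve the eigenvalues:
P = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
C = np.linalg.inv(P) @ A @ P
print(np.sort(np.linalg.eigvals(C).real))   # ~ [1. 2. 3.]
```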
| 77
|
linear-algebra
|
Distributing Basis Coordinates
|
https://math.stackexchange.com/questions/2671045/distributing-basis-coordinates
|
<blockquote>
<p>Let <span class="math-container">$V$</span> be a finite-dimensional vector space with (ordered) basis <span class="math-container">$\beta=(b_1,...,b_n)$</span>, and let <span class="math-container">$T:V\rightarrow V$</span> be a linear transformation. Let <span class="math-container">$B=[T]_\beta$</span> by the <span class="math-container">$\beta$</span>-matrix of <span class="math-container">$T$</span></p>
<p>Prove that for all <span class="math-container">$\vec{v}\in V, v\in$</span> ker<span class="math-container">$(T)$</span> iff <span class="math-container">$[\vec{v}]_\beta \in$</span> ker <span class="math-container">$(B)$</span>.</p>
</blockquote>
<p>The proof I came up with includes a step that I'm not sure I'm allowed to do</p>
<blockquote>
<p><span class="math-container">$T\vec{v}=\vec{0}$</span></p>
<p><span class="math-container">$[T\vec{v}]_\beta=[\vec{0}]_\beta$</span></p>
<p><span class="math-container">$[T]_\beta\ [\vec{v}]_\beta=\vec{0}$</span></p>
</blockquote>
<p>I'm unsure if I can distribute the <span class="math-container">$[\ ]_\beta$</span> between the two. I feel like I can since it would perform the same operation, but is there a proof that this distribution property applies?</p>
|
<p>Yes $[Tv]_\beta = [T]_\beta[v]_\beta$. Note that for $v = a_1b_1 + \cdots + a_nb_n \in V$ we have:
$$
\begin{align}
[Tv]_\beta &= [a_1T(b_1) + \cdots + a_nT(b_n)]_\beta \\ &= a_1[T(b_1)]_\beta + \cdots + a_n[T(b_n)]_\beta
\end{align}
$$</p>
<p>The last equality follows because choosing a basis defines an isomorphism. In particular, the map $[\cdot]_\beta: V \to \mathbb{R}^n$ which sends $v = a_1b_1 + \cdots + a_nb_n$ to $[v]_\beta = (a_1, \ldots , a_n)^T$ is $\mathbb{R}$-linear.</p>
<p>On the other hand, we have:</p>
<p>$$
\begin{align}
[T]_\beta[v]_\beta &= ([T(b_1)]_\beta, \ldots , [T(b_n)]_\beta)\cdot (a_1, \ldots , a_n)^T \\ & =a_1[T(b_1)]_\beta + \cdots + a_n[T(b_n)]_\beta
\end{align}
$$</p>
<p>As you can see, both give the same expression.</p>
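<p>A small numerical illustration of the identity (the basis and map below are arbitrary choices, just for concreteness): the coordinate map is computed by solving against the matrix whose columns are the basis vectors, and both sides agree.</p>

```python
import numpy as np

B = np.array([[1., 1.],
              [1., -1.]])          # columns: basis vectors b1, b2
S = np.array([[2., 1.],
              [0., 3.]])           # T in standard coordinates

coords = lambda v: np.linalg.solve(B, v)   # the coordinate map [.]_beta
T_beta = coords(S @ B)                     # columns are [T(b_i)]_beta

v = np.array([3., 5.])
print(coords(S @ v))          # [T v]_beta        -> [13. -2.]
print(T_beta @ coords(v))     # [T]_beta [v]_beta -> [13. -2.]
```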
| 78
|
linear-algebra
|
Are the set of vectors linearly dependent?
|
https://math.stackexchange.com/questions/2761116/are-the-set-of-vectors-linearly-dependent
|
<blockquote>
<p>Are the set of vectors linearly dependent?</p>
<ol>
<li><p><span class="math-container">$ \{ e^{x}, e^{-x}\} $</span> in <span class="math-container">$\mathcal{F} (\mathbb{R} ,\mathbb{R} )$</span></p>
</li>
<li><p><span class="math-container">$ \{ \frac{1}{x-1}, \frac{1}{x + 1} \} $</span> in <span class="math-container">$ \mathcal{F} (]-1,1[,\mathbb{R})$</span></p>
</li>
</ol>
</blockquote>
<hr />
<ol>
<li>For <span class="math-container">$a,b \in \mathbb{R}$</span>,</li>
</ol>
<p><span class="math-container">$ae^{x} + be^{-x} = 0$</span></p>
<p>We know that: <span class="math-container">$e^x > 0 $</span> for all <span class="math-container">$x \in \mathbb{R} $</span></p>
<p>I am not sure how to conclude that <span class="math-container">$a = b = 0$</span>.</p>
<ol start="2">
<li>For <span class="math-container">$a,b \in \mathbb{R}$</span></li>
</ol>
<p><span class="math-container">$\frac{a}{x-1} + \frac{b}{x + 1} = 0 \implies x(a+b) + a - b =0$</span></p>
<p>Which does not tell anything about <span class="math-container">$a$</span> and <span class="math-container">$b$</span>.</p>
|
<p>Guide:</p>
<p>Try substituting particular values of $x$. For example, in the first case, letting $x=0$ gives $a+b=0$; letting $x$ equal another value gives a second condition on $a$ and $b$, and then you can solve for $a$ and $b$.</p>
<blockquote class="spoiler">
<p> yes, they are linearly independent.</p>
</blockquote>
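<p>The two substitutions can also be checked numerically (a small numpy sketch): the coefficient matrix of the resulting <span class="math-container">$2\times 2$</span> system in <span class="math-container">$(a,b)$</span> has nonzero determinant, so <span class="math-container">$a=b=0$</span> is the only solution.</p>

```python
import numpy as np

# a*e^x + b*e^-x = 0 evaluated at x = 0 and x = 1 gives
#   a + b = 0
#   e*a + (1/e)*b = 0
M = np.array([[1.0, 1.0],
              [np.e, 1.0 / np.e]])

print(np.linalg.det(M))              # ~ -2.35, nonzero
print(np.linalg.solve(M, [0., 0.]))  # the zero vector: only the trivial solution
```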
| 79
|
linear-algebra
|
Theorems on 1-to-1 and onto linear functions
|
https://math.stackexchange.com/questions/2442512/theorems-on-1-to-1-and-onto-linear-functions
|
<p>I'm given two theorems:</p>
<blockquote>
<p>Theorem (1)</p>
<p>Following statements are equivalent:</p>
<p><span class="math-container">$(i)$</span> <span class="math-container">$L : V \to U$</span> where <span class="math-container">$V$</span> and <span class="math-container">$U$</span> are vector spaces, is one to one</p>
<p><span class="math-container">$(ii)$</span> Null <span class="math-container">$L = 0$</span></p>
<p><span class="math-container">$(iii)$</span> Determinant of a maximal square matrix is non-zero</p>
<p>Theorem (2)</p>
<p>Following statements are equivalent:</p>
<p><span class="math-container">$(i)$</span> <span class="math-container">$L : V \to U$</span> where <span class="math-container">$V$</span> and <span class="math-container">$U$</span> are vector spaces, is onto.</p>
<p><span class="math-container">$(ii)$</span> Null <span class="math-container">$L$</span> is a graph of some of the variables of <span class="math-container">$V$</span> in others</p>
<p><span class="math-container">$(iii)$</span> Determinant of a maximal square matrix is non-zero</p>
</blockquote>
<p>What exactly is the "maximal square matrix"? I assumed it is a square submatrix of maximal size (number of rows = number of columns). However, the theorems state that finding just one such maximal square matrix with a nonzero determinant is enough, which doesn't seem right to me; I need some help understanding this.</p>
<p>This condition making the <span class="math-container">$L$</span> both onto and 1-to-1 is also not convincing to me.</p>
<p>I also didn't understand what is meant by the condition "Null <span class="math-container">$L$</span> is a graph of some of the variables of <span class="math-container">$V$</span> in others".</p>
<p>Thanks in advance.</p>
| 80
|
|
linear-algebra
|
Orthogonal projection of a point into $x+y+z=0$ plane ex.
|
https://math.stackexchange.com/questions/2326429/orthogonal-projection-of-a-point-into-xyz-0-plane-ex
|
<blockquote>
<p>Let <span class="math-container">$T:\mathbb{R}^3\to W$</span> be the orthogonal projection of <span class="math-container">$\mathbb{R}^3$</span> onto the plane <span class="math-container">$W$</span> having the equation <span class="math-container">$x+y+z=0$</span>.</p>
<p>(a)Find <span class="math-container">$T(3,8,4)$</span>.</p>
<p>(b)Find the formula for <span class="math-container">$T$</span>.</p>
</blockquote>
<p>I have been stuck on this exercise for hours... How can I solve it?</p>
<p>Thanks in advance!</p>
|
<p>Let $P(a,b,c)\in\mathbb{R}^3$. The line which passes through $P$ and is orthogonal to $W$ is </p>
<p>$$\vec{r}=(a,b,c)+t(1,1,1)=(a+t,b+t,c+t)$$</p>
<p>At the intersection of the line and $W$ (which is $T(P)$),</p>
<p>\begin{align}
a+t+b+t+c+t&=0\\
t&=\frac{-1}{3}(a+b+c)
\end{align}</p>
<p>So, $$T(a,b,c)=\left(\frac{2a-b-c}{3},\frac{-a+2b-c}{3},\frac{-a-b+2c}{3}\right)$$</p>
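<p>The same projection can be computed directly from the normal vector <span class="math-container">$n=(1,1,1)$</span> of the plane via <span class="math-container">$T(v)=v-\frac{v\cdot n}{n\cdot n}\,n$</span>, which agrees with the formula above; a short numpy sketch:</p>

```python
import numpy as np

n = np.array([1., 1., 1.])          # normal vector of the plane x+y+z=0

def T(v):
    """Orthogonal projection of v onto the plane with normal n."""
    v = np.asarray(v, dtype=float)
    return v - (v @ n) / (n @ n) * n

print(T([3, 8, 4]))                  # [-2.  3. -1.]
```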
| 81
|
linear-algebra
|
$\mathscr{M}_{\beta´}^{\beta}(id)$ in $\mathbb{R}^3$
|
https://math.stackexchange.com/questions/2335624/mathscrm-beta%c2%b4-betaid-in-mathbbr3
|
<blockquote>
<p>In each one of the following cases, find <span class="math-container">$\mathscr{M}_{\beta´}^{\beta}(id)$</span>. The vector space in each case is <span class="math-container">$\mathbb{R}^3$</span>.</p>
<p>a) <span class="math-container">$\beta=\{(1,1,0),(-1,1,1),(0,1,2)\}\\\beta'=\{(2,1,1),(0,0,1),(-1,1,1)\}$</span></p>
</blockquote>
<p><strong>Questions</strong>:</p>
<p>How can I solve this exercise? Am I supposed to find a matrix?</p>
<p>Thanks in advance!</p>
|
<p>Here's a first step:</p>
<p>Let $\beta = \{v_1,v_2,v_3\}$ and $\beta' = \{w_1,w_2,w_3\}$. We note that
$$
\operatorname{id}(v_1) = v_1 = \frac 23 w_1 + (-1)w_2 + \frac 13 w_3
$$
As such, we will find that
$$
\mathcal M^{\beta}_{\beta'}(\operatorname{id}) = \pmatrix{2/3 &?&?\\-1&?&?\\1/3&?&?}
$$</p>
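<p>A quick way to check this (or to fill in the remaining columns) numerically: the <span class="math-container">$\beta'$</span>-coordinates of each <span class="math-container">$v_i$</span> solve a linear system against the matrix whose columns are the <span class="math-container">$\beta'$</span> vectors. A numpy sketch for the first column:</p>

```python
import numpy as np

# Columns of W are the beta' basis vectors w1, w2, w3.
W = np.array([[2., 0., -1.],
              [1., 0.,  1.],
              [1., 1.,  1.]])
v1 = np.array([1., 1., 0.])   # first beta basis vector

# Its beta'-coordinates solve W c = v1; they form the first
# column of the matrix of the identity map.
c = np.linalg.solve(W, v1)
print(c)    # ~ [ 0.667 -1.     0.333], i.e. (2/3, -1, 1/3)
```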
| 82
|
linear-algebra
|
Checking For Orthogonality
|
https://math.stackexchange.com/questions/2449427/checking-for-orthogonality
|
<blockquote>
<p>Consider <span class="math-container">$C[-1,1]$</span>, the space of continuous functions <span class="math-container">$f:[-1,1]\to \mathbb{C}$</span>, with the inner product <span class="math-container">$\langle f,g\rangle=\int_{-1}^{1}f(x)\overline{g(x)}\,dx$</span>.</p>
<p>Prove: <span class="math-container">$P_{0}=1, P_{1}=x, P_{2}=1-3x^2$</span> is orthogonal in <span class="math-container">$C[-1,1]$</span></p>
</blockquote>
<p>All of <span class="math-container">$\langle P_{0},P_{1}\rangle,\langle P_{0},P_{2}\rangle,\langle P_{2},P_{1}\rangle$</span> are equal to zero, so the polynomials are pairwise orthogonal. However, the book also computes the norm of each <span class="math-container">$P_{i}$</span> to show it is nonzero. Why is that step needed? Is it to check that no <span class="math-container">$P_{i}$</span> is the zero function, since otherwise the system would only be a "trivial" orthogonal system?</p>
|
<p>Actually, it all depends on what you admit first.</p>
<p>Recall that an inner product must verify $\langle v,v \rangle = 0 \Leftrightarrow v = 0$ for all $v$ in the vector space.</p>
<p>So, if you admit the application you defined here is an inner product, then you know that for any function $f$ different from $0$, you will have $\langle f,f \rangle \not = 0$, and you do not need to verify it.</p>
<p>I find it quite weird that the book checks this afterwards. If there were any doubt, the exercise should have asked you to show that $\int_{-1}^1 f(x)\overline{g(x)}\,\mathrm{d}x$ defines an inner product. Otherwise, asking you to prove $P_0, P_1$ and $P_2$ are orthogonal makes no sense: you need an inner product to give meaning to orthogonality (strictly speaking you don't, but I don't think going into quadratic forms is the point here).</p>
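<p>For completeness, the integrals themselves are quick to verify with sympy (the <span class="math-container">$P_i$</span> are real, so the conjugate can be dropped):</p>

```python
import sympy as sp

x = sp.symbols('x')
P0, P1, P2 = sp.Integer(1), x, 1 - 3*x**2
ip = lambda f, g: sp.integrate(f * g, (x, -1, 1))   # real case: conjugate dropped

print(ip(P0, P1), ip(P0, P2), ip(P1, P2))   # 0 0 0     pairwise orthogonal
print(ip(P0, P0), ip(P1, P1), ip(P2, P2))   # 2 2/3 8/5 all norms nonzero
```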
| 83
|
linear-algebra
|
Showing Positive Definiteness In Inner Product
|
https://math.stackexchange.com/questions/2449525/showing-positive-definiteness-in-inner-product
|
<blockquote>
<p>Let <span class="math-container">$\mathcal{P}_2$</span> be the space of all polynomials of degree less or equal to <span class="math-container">$2$</span> for all <span class="math-container">$f,g\in \mathcal{P}_2$</span> we define:</p>
<p><span class="math-container">$$\langle f,g \rangle=\int_0^\infty f(x)g(x)e^{-x} \,dx$$</span></p>
<p>Prove it is a inner product on <span class="math-container">$\mathcal{P}_2$</span></p>
</blockquote>
<p>It is easy to show that <span class="math-container">$\langle f,g \rangle=\langle g,f \rangle$</span> and <span class="math-container">$\langle \alpha f+\beta g,h \rangle=\alpha\langle f,h \rangle+\beta\langle g,h \rangle$</span></p>
<p>To show positive definiteness we take <span class="math-container">$\langle f,f \rangle=\int_0^\infty [f(x)]^2 e^{-x} \, dx$</span> but how can we be sure that <span class="math-container">$\langle f,f \rangle\geq 0$</span> and <span class="math-container">$\langle f,f \rangle=0\iff f=0$</span>?</p>
|
<p>We know that $e^{-x}> 0$ and $(f(x))^2\geq 0$, so $\displaystyle\int_{0}^{\infty} (f(x))^2 e^{-x}\,dx\geq 0$. Moreover, the integrand is a product of continuous functions and is therefore continuous and nonnegative. Hence $\displaystyle\int_{0}^{\infty} (f(x))^2 e^{-x}\,dx=0$ if and only if $(f(x))^2 e^{-x}=0$ for all $x$, if and only if $(f(x))^2=0$ for all $x$, if and only if $f=0$.</p>
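<p>One can also see this concretely with sympy (a sketch, writing a general element of <span class="math-container">$\mathcal{P}_2$</span> as <span class="math-container">$a+bx+cx^2$</span>): the inner product <span class="math-container">$\langle f,f\rangle$</span> becomes an explicit quadratic form in <span class="math-container">$(a,b,c)$</span> whose leading principal minors are positive, so it is positive definite.</p>

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c', real=True)
f = a + b*x + c*x**2

# <f, f> for a general element of P_2 (uses integral of x^n e^-x = n!):
ip = sp.integrate(f**2 * sp.exp(-x), (x, 0, sp.oo))
print(sp.expand(ip))   # a^2 + 2ab + 2b^2 + 4ac + 12bc + 24c^2

# The matrix of this quadratic form is positive definite
# (all leading principal minors are positive):
Q = sp.Matrix([[1, 1, 2], [1, 2, 6], [2, 6, 24]])
print(all(Q[:k, :k].det() > 0 for k in (1, 2, 3)))   # True
```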
| 84
|
linear-algebra
|
Is this map linear?
|
https://math.stackexchange.com/questions/2486437/is-this-map-linear
|
<p>Is this map linear?</p>
<h2><span class="math-container">$T(x_1,x_2)=(x_1+2x_2+3,x_2+2x_1,3x_1)$</span></h2>
<p>Thank you very much! I thought it is not linear because there is a constant, which causes <span class="math-container">$T(v)+T(u)$</span> not to equal to <span class="math-container">$T(v+u)$</span>.</p>
|
<p>Let $u=(x_1,y_1)$ and $v=(x_2,y_2)$.</p>
<p>For a transformation to be linear, two conditions must hold: $T(u+v)=T(u)+T(v)$ and $T(cu)=cT(u)$ for $c \in \mathbb{R}$ (a quicker, one step test is to check $T(c_1u+c_2v)=c_1T(u)+c_2T(v)$)</p>
<p>Consider $T(cu)$</p>
<p>$T(cu)=T((cx_1,cy_1))=(cx_1+2cy_1+3,cy_1+2cx_1,3cx_1)$<br>
$cT(u)=cT((x_1,y_1))=(cx_1+2cy_1+3c,cy_1+2cx_1,3cx_1)$</p>
<p>These differ in the first component, therefore the transformation isn’t linear.</p>
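<p>A quick numerical illustration of the failure (a numpy sketch with an arbitrary choice of <span class="math-container">$u$</span> and <span class="math-container">$c$</span>):</p>

```python
import numpy as np

def T(x):
    x1, x2 = x
    return np.array([x1 + 2*x2 + 3, x2 + 2*x1, 3*x1])

u = np.array([1., 1.])
c = 2.0
print(T(c*u))      # [ 9.  6.  6.]
print(c*T(u))      # [12.  6.  6.]  first components differ, so T is not linear
```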
| 85
|
linear-algebra
|
$ix+y-z=0\\iy+z=0$ basis
|
https://math.stackexchange.com/questions/2361760/ixy-z-0-iyz-0-basis
|
<blockquote>
<p>Find the dimension over <span class="math-container">$\mathbb{C}$</span> of the space of solutions of the following systems of equations. Also find a basis for this space of solutions.</p>
<p><span class="math-container">$ix+y-z=0\\iy+z=0$</span></p>
</blockquote>
<p>Using the formula <span class="math-container">$\text{row rank}+\dim \text{space of solutions}=n$</span>, in which n is the number of variables.</p>
<p>I figured out that <span class="math-container">$\dim \text{space of solutions}=1$</span></p>
<p>However I cannot find the basis.</p>
<p><strong>Questions</strong>:</p>
<p>What is the basis of solutions of this system of equations?</p>
|
<p>Since the solution space has dimension $1$, any nonzero element of it forms a basis: the solutions of this system form a line through the origin, and every point on that line is a scalar multiple of any fixed nonzero solution.</p>
<p><strong>Edit:</strong></p>
<p>From equation 2, $z = -iy$. Substituting this into equation 1 gives $ix + (1+i)y = 0$, that is, $x = (i-1)y$. Taking $y = 1$ gives the basis vector $(i-1,\,1,\,-i)$.</p>
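<p>As a quick numerical sanity check, numpy confirms that the vector <span class="math-container">$(i-1,\,1,\,-i)$</span> satisfies both equations and hence spans the one-dimensional solution space:</p>

```python
import numpy as np

A = np.array([[1j, 1, -1],
              [0, 1j,  1]])          # coefficient matrix of the system
v = np.array([1j - 1, 1, -1j])       # candidate basis vector

print(A @ v)        # [0.+0.j 0.+0.j]
```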
| 86
|
linear-algebra
|
Derivatives $f:\mathbb{R}^n\to\mathbb{R}$ quadratic form?
|
https://math.stackexchange.com/questions/2374055/derivatives-f-mathbbrn-to-mathbbr-quadratic-form
|
<p>Let <span class="math-container">$f:\mathbb{R}^n\to\mathbb{R}$</span> be a twice continuously differentiable function such that <span class="math-container">$f(tX)=t^2f(X)$</span> for all <span class="math-container">$X\in\mathbb{R}$</span>. Show that <span class="math-container">$f$</span> is a quadratic form.(You need some formulas of calculus in several variables to do this.)</p>
<p>Quadratic form definition:</p>
<blockquote>
<p>Let <span class="math-container">$V$</span> be a finite dimensional space over the field <span class="math-container">$K$</span>. Let <span class="math-container">$g=\langle ,\rangle$</span> be a symmetric bilinear form on <span class="math-container">$V$</span>. By the quadratic form determined by <span class="math-container">$g$</span>, we shall mean a function:</p>
<p><span class="math-container">$f:V\to K$</span></p>
<p>such that <span class="math-container">$f(v)=g(v,v)=\langle v,v\rangle$</span></p>
</blockquote>
<p>I think I need to get a bilinear form and prove <span class="math-container">$f$</span> fulfils its properties. I have the following formula <span class="math-container">$g(x,y)=\frac{1}{2}(f(x+y)-f(x)-f(y))$</span>, and it is clear <span class="math-container">$g(x,x)=f(x)$</span>.</p>
<p><strong>Question</strong>:</p>
<p>How am I supposed to prove this theorem?</p>
|
<p>$f(tx)=t^2f(x)$</p>
<p>Let's take derivatives with respect to $t$. We denote by $D_i$ the derivative with respect to $x_i$. </p>
<blockquote>
<p>On the right-hand side the derivative is $2tf(x)$ since $f(x)$ is just a constant with respect to $t$. On the left-hand side we need to apply the chain rule in several variables. $D_t(f(tx))=\sum_i D_t(tx_i)(D_if)(tx)=\sum_ix_i(D_if)(tx)$. </p>
</blockquote>
<p>$\sum_i x_i(D_if)(tx)=2tf(x)$</p>
<p>and again</p>
<p>$\sum_{i,j}x_ix_j(D_{i,j}f)(tx)=2f(x)$</p>
<p>Now put $t=0$.</p>
<p>This means that $f(x)=\sum_{i,j}C_{i,j}x_ix_j$, where $C_{i,j}=\frac{1}{2}(D_{i,j}f)(0)$.</p>
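<p>A quick symbolic check with sympy on a concrete homogeneous example (the particular function below is an arbitrary choice): the coefficients <span class="math-container">$C_{i,j}=\frac{1}{2}(D_{i,j}f)(0)$</span> are half the Hessian, and they recover the quadratic form exactly.</p>

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t', real=True)
f = sp.Lambda((x1, x2), 3*x1**2 - 2*x1*x2 + 5*x2**2)   # homogeneous of degree 2

# f(tx) = t^2 f(x):
assert sp.simplify(f(t*x1, t*x2) - t**2 * f(x1, x2)) == 0

# C = (1/2) * Hessian recovers the coefficient matrix of the form:
C = sp.hessian(f(x1, x2), (x1, x2)) / 2
print(C)    # Matrix([[3, -1], [-1, 5]])
```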
| 87
|
linear-algebra
|
Let $T : \mathbb R^m \to \mathbb R^n$ be a linear transformation, prove that...
|
https://math.stackexchange.com/questions/1144193/let-t-mathbb-rm-to-mathbb-rn-be-a-linear-transformation-prove-that
|
<blockquote>
<p>Let <span class="math-container">$T : \mathbb R^m \to \mathbb R^n$</span> be a linear transformation.</p>
<ol>
<li>Prove that <span class="math-container">$T$</span> is injective if and only if for every linearly independent set <span class="math-container">$\{\overrightarrow v_1,\ldots,\overrightarrow v_k\}$</span> in <span class="math-container">$\mathbb R^m$</span>, the set <span class="math-container">$\{T(\overrightarrow v_1),\ldots,T(\overrightarrow v_k)\}$</span> is linearly independent in <span class="math-container">$\mathbb R^n$</span>.</li>
</ol>
</blockquote>
<p>How would I even start this? Would I need to go both ways for this proof?
Would the first step be to write the definitions of an injective map and of a linearly independent set?</p>
|
<p><strong>Hint</strong>: take $v_1,\dots,v_k$ to be a set of linearly independent vectors. Suppose that $c_1,\dots,c_k$ are scalars such that
$$
c_1T(v_1) +\cdots+c_k T(v_k)= 0
$$
If $T$ is injective, show that each $c_i$ must be zero. </p>
<p>If $T$ is not injective, show that we can find a non-zero $v$ so that $T(v)=0$. The set $\{v\}$ is linearly independent.</p>
| 88
|
linear-algebra
|
Show equality of two krylov spaces
|
https://math.stackexchange.com/questions/3721106/show-equality-of-two-krylov-spaces
|
<p>A bijective function f : V -> V and an m-Krylov space K_m(f,v) = span{v, f(v), ..., f^(m-1)(v)} are given.</p>
<p>We have to show f(K_m(f,v)) = K_m(f,v), so in effect span{v, f(v), ..., f^(m-1)(v)} = span{f(v), f^2(v), ..., f^m(v)}, if I understand it correctly.
I don't quite see how I can use the bijectivity of f here.</p>
| 89
|
|
linear-algebra
|
Prove that exists a linear transformation base on kernel and image
|
https://math.stackexchange.com/questions/3724841/prove-that-exists-a-linear-transformation-base-on-kernel-and-image
|
<p>Prove that exists a linear transformation <span class="math-container">$T: (Z_5)^4 \to (Z_5)^4$</span> such that:</p>
<p><span class="math-container">\begin{align}
\operatorname{Im}T &= \operatorname{Sp}\{(1,1,-1,1),(0,3,-2,2)\},\\ \operatorname{Ker}T &= \operatorname{Sp}\{(1,1,-1,1),(3,0,4,4),(3,3,2,-2)\}
\end{align}</span></p>
<p>Any direction?</p>
<p>Thanks.</p>
|
<p>There is no such transformation because of dimensionality problems, as you pointed out yourself. Here is what you would do if there did exist a linear transformation, maybe you have copied the vectors wrong?</p>
<p>Extract a basis from the image and kernel.</p>
<p>When done, write the kernel as the span of two linearly independent vectors, and find 2 vectors which are linearly independent of those. Since elements in the kernel are mapped to 0, define <span class="math-container">$T$</span> on the spanning vectors in the kernel to be <span class="math-container">$0$</span>, and define <span class="math-container">$T$</span> on the two other vectors to be the spanning vectors of the image.</p>
<p>You now have a function defined on a basis of <span class="math-container">$(Z_5)^4$</span>, which you can linearly extend to the whole space. Per linearity of this map, it has the right kernel and image.</p>
| 90
|
linear-algebra
|
Prove a space is a solution for homogeneous system
|
https://math.stackexchange.com/questions/3725444/prove-a-space-is-a-solution-for-homogeneous-system
|
<p><strong>All the question is above <span class="math-container">$Z_7$</span>.</strong></p>
<p>I have a space:</p>
<p><span class="math-container">$$
U = \{(1,-1,1,2),(3,0,2,1)\} \subseteq (Z_7)^4
$$</span></p>
<p>I need to prove that <span class="math-container">$U$</span> is the solution space for:</p>
<p><span class="math-container">$$
*\left\{\begin{matrix}
2x-y-3z = 0 \\
x+5y-z-t = 0
\end{matrix}\right.
$$</span></p>
<p>I tried to find a general solution for <span class="math-container">$(*)$</span>, didnt get exactly the vectors in <span class="math-container">$U$</span>.</p>
<p>Is there a nicer way to prove this than very complicated calculations, in which I will probably make mistakes?</p>
<p>Thanks.</p>
|
<p>The general solution is not hard to find from the RREF of the matrix of the linear system:
<span class="math-container">\begin{align}
\begin{pmatrix}
1&-1&1&2\\3&0&2&1
\end{pmatrix}\xrightarrow{R_2 \leftarrow 2 R_2}
&\begin{pmatrix}
1&-1&1&2\\-1&0&-3&2
\end{pmatrix}&\xrightarrow{R_2 \leftarrow-(R_1+ R_2)}
\begin{pmatrix}
1&-1&1&2\\ 0&1&2&3
\end{pmatrix} \\[1ex]
{}\xrightarrow{R_1\leftarrow R_1+ R_2 }
&\begin{pmatrix}
1&0&3&-2\\ 0&1&2&3
\end{pmatrix}
\end{align}</span>
Therefore, the solutions are parameterised by
<span class="math-container">\begin{cases}
x=-3z+2t, \\y =-2z-3t
\end{cases}</span>
For well-chosen values of <span class="math-container">$z$</span> and <span class="math-container">$t$</span>, you obtain the given solutions.</p>
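<p>A quick way to avoid calculation slips is to let the computer check the containment of <span class="math-container">$U$</span> in the solution space over <span class="math-container">$\mathbb{Z}_7$</span> (a numpy sketch; a dimension count then finishes the argument, since both spaces have dimension <span class="math-container">$2$</span>):</p>

```python
import numpy as np

p = 7
A = np.array([[2, -1, -3, 0],
              [1,  5, -1, -1]]) % p          # coefficient matrix of (*)
U = np.array([[1, -1, 1, 2],
              [3,  0, 2, 1]]) % p            # the given spanning vectors

print((A @ U.T) % p)   # [[0 0]
                       #  [0 0]]  both vectors solve the system over Z_7
```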
| 91
|
linear-algebra
|
Solving a single variable equation
|
https://math.stackexchange.com/questions/3726276/solving-a-single-variable-equation
|
<p>can someone help me in seperating x in the below equation?<span class="math-container">$$\frac{x-x_l}{\sqrt{(x-x_l)^2+y_l^2}} = f*\frac{x-x_w}{\sqrt{(x-x_w)^2+y_w^2}}$$</span> <span class="math-container">$$x_l, x_w, y_l, y_w, f\ are \ constants $$</span></p>
<p>I am trying convert this into following form:</p>
<p>x = some expression</p>
| 92
|
|
linear-algebra
|
Does the inner product of two vectors in a real vector space have to be real?
|
https://math.stackexchange.com/questions/3727136/does-the-inner-product-of-two-vectors-in-a-real-vector-space-have-to-be-real
|
<p>My teacher made this statement in an email to me:</p>
<p>"For a complex LVS, the inner product between two vectors <span class="math-container">$|u\rangle$</span> and <span class="math-container">$|v\rangle$</span>,</p>
<p><span class="math-container">$\langle u|v\rangle$</span> is a complex number by definition and it satisfies:</p>
<p><span class="math-container">$\langle u|v\rangle^* = \langle v|u\rangle$</span></p>
<p><strong>For a real vector space, a similar inner product between two vectors <span class="math-container">$|u\rangle$</span> and <span class="math-container">$|v\rangle$</span>, <span class="math-container">$\langle u|v\rangle$</span>, is a real number by definition."</strong></p>
<p>This is the <a href="https://mathworld.wolfram.com/RealVectorSpace.html" rel="nofollow noreferrer">definition</a> of a real vector space from Wolfram Mathworld:</p>
<blockquote>
<p>A real vector space is a vector space whose field of scalars is the field of reals</p>
</blockquote>
<p>According to this definition I think that <span class="math-container">$\Bbb{C}_n$</span> is an LVS over real numbers. Hence I feel that inner product of two vectors can be a complex number even for a real LVS.</p>
<p>Who is correct, me or my teacher? If my teacher is correct, then why?</p>
| 93
|
|
linear-algebra
|
What is the sum of the $n^2$ terms obtained this way?
|
https://math.stackexchange.com/questions/3730158/what-is-the-sum-of-the-n2-terms-obtained-this-way
|
<p>We multiply each entry of an <span class="math-container">$n × n $</span> matrix A by the cofactor belonging to it. What is the sum of the
<span class="math-container">$n^2$</span> terms obtained this way?</p>
<p>I don't understand how a cofactor can belong to an entry. Does it mean the cofactor is the same for each entry of the corresponding row or column (as in the expansion theorem)?</p>
|
<p>By definition the determinant of a matrix is given by the sum of the entries of any row or column multiplied by their cofactors. So the result you want is just <span class="math-container">$n\cdot\det{(A)}$</span> by calculating the determinant over every row or every column and adding the results.</p>
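<p>A numerical illustration with numpy (the matrix below is an arbitrary example): summing every entry times its cofactor over all <span class="math-container">$n^2$</span> positions gives <span class="math-container">$n\cdot\det(A)$</span>.</p>

```python
import numpy as np

def cofactor_matrix(A):
    """C[i, j] = (-1)^(i+j) * det of A with row i and column j removed."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[4., 0., 1.],
              [-2., 1., 0.],
              [-2., 0., 1.]])
C = cofactor_matrix(A)
print(np.sum(A * C))                  # ~ 18.0
print(A.shape[0] * np.linalg.det(A))  # ~ 18.0, equals n * det(A)
```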
| 94
|
linear-algebra
|
Question on proof of rank nullity theorem on Wikipedia
|
https://math.stackexchange.com/questions/3732100/question-on-proof-of-rank-nullity-theorem-on-wikipedia
|
<p>In the first proof at <a href="https://en.wikipedia.org/wiki/Rank%E2%80%93nullity_theorem" rel="nofollow noreferrer">Wikipedia</a></p>
<p>In order to write <span class="math-container">$|T(S)| = n - k$</span> in the last line of the proof, I believe we need to know each of the <span class="math-container">$n-k$</span> images of <span class="math-container">$S$</span> under <span class="math-container">$T$</span> are distinct. How do we know that? Where is that demonstrated in the proof?</p>
<p>If that fact is indeed missing, how would I show it? All I can think of is that it is a consequence of the linear independence of <span class="math-container">$T(S)$</span>, but I cannot see how to proceed from there.</p>
| 95
|
|
linear-algebra
|
An $ n × n $ matrix $A$ satisfies $A^2 = 0 $. Can the rank of $A$ be $n$?
|
https://math.stackexchange.com/questions/3732811/an-n-%c3%97-n-matrix-a-satisfies-a2-0-can-the-rank-of-a-be-n
|
<p>An <span class="math-container">$ n × n $</span> matrix <span class="math-container">$A$</span> satisfies <span class="math-container">$A^2 = 0 $</span>. Can the rank of <span class="math-container">$A$</span> be <span class="math-container">$n$</span>?</p>
<p>My opinion is that if <span class="math-container">$A^2 = 0 $</span> then <span class="math-container">$A$</span> is also <span class="math-container">$0$</span>. Is that correct?</p>
|
<p>As pointed out in the comments, <span class="math-container">$A^2=0$</span> does not necessarily imply that <span class="math-container">$A=0$</span>.</p>
<p>Regarding your first question, if <span class="math-container">$A^2=0$</span> we have <span class="math-container">$0=\det(A^2)=(\det A)^2$</span>. Since the determinant of <span class="math-container">$A$</span> vanishes, <span class="math-container">$A$</span> cannot have rank <span class="math-container">$n$</span> by definition. And actually the rank of <span class="math-container">$A$</span> must be <span class="math-container">$\leq \frac n2$</span> (see the comments again).</p>
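<p>A minimal concrete counterexample to <span class="math-container">$A^2=0 \Rightarrow A=0$</span> (a numpy sketch):</p>

```python
import numpy as np

A = np.array([[0., 1.],
              [0., 0.]])           # nonzero, yet A^2 = 0

print(A @ A)                        # [[0. 0.]
                                    #  [0. 0.]]
print(np.linalg.matrix_rank(A))     # 1  (= n/2, the maximum possible here)
```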
| 96
|
linear-algebra
|
A question on multiplying matrices by scalars
|
https://math.stackexchange.com/questions/3733010/a-question-on-multiplying-matrices-by-scalars
|
<p>The product of two matrices AB is defined if and only if the number of columns <em>n</em> in A equals the number of rows <em>m</em> in B. But what if A is a 1x1 matrix (i.e. a scalar) and B is some <em>m</em> x <em>n</em> matrix where <em>m</em> > 1?</p>
| 97
|
|
linear-algebra
|
Help with Jacobi's formula
|
https://math.stackexchange.com/questions/3733059/help-with-jacobis-formula
|
<p>I want to prove that</p>
<p><span class="math-container">$$
\frac d{dt}\left( \det e^\mathbf {At}\right)= \text{Tr}\left(\mathbf A \right)\det e^\mathbf {At}\ ,
$$</span>
I have to use Jacobi's formula (the derivative of the determinant in the direction <span class="math-container">$H$</span>) <span class="math-container">$
D(\det)(\mathbf A)[H]= \text{Tr}\left( \text{adj} (\mathbf A )\, H\right)\ ,
$</span>
but I need some hints.</p>
<p>Also, is there an easy way to understand how the formula
<span class="math-container">$
\det\left( e^{\mathbf A} \right) = e^{\text{Tr}(\mathbf A)}\
$</span> works? How is this formula derived?</p>
| 98
|
|
linear-algebra
|
How to construct a linear system that has no sol
|
https://math.stackexchange.com/questions/3724488/how-to-construct-a-linear-system-that-has-no-sol
|
<p>Construct a linear system that has no solution, where the number of unknown variables is greater than the number of equations. Is such a system possible? What is needed to construct it?</p>
|
<p>Here is an example which might be useful.</p>
<p><span class="math-container">$$
\begin{bmatrix}
1 & 1 & 1\\
2 & 2 & 2\\
\end{bmatrix}
\mathbf{x}
=
\begin{bmatrix}
1 \\
0 \\
\end{bmatrix}
$$</span></p>
<p>Mainly, the rank of the matrix is 1, which is less than the dimension of the right-hand side (which is 2), so the map is not onto and some right-hand sides are not reachable. For the particular <span class="math-container">$b$</span> above, the rank of the augmented matrix is 2, which exceeds the rank of the coefficient matrix, so there is no solution vector <span class="math-container">$\mathbf{x}$</span>.</p>
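<p>This can be confirmed programmatically (a numpy sketch): the augmented matrix has larger rank than the coefficient matrix, which is exactly the standard criterion for an inconsistent system.</p>

```python
import numpy as np

A = np.array([[1., 1., 1.],
              [2., 2., 2.]])
b = np.array([1., 0.])

Ab = np.column_stack([A, b])        # augmented matrix [A | b]
print(np.linalg.matrix_rank(A))     # 1
print(np.linalg.matrix_rank(Ab))    # 2 > rank(A), so no solution exists
```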
<p>I hope this helps.</p>
| 99
|