Math 342 - Week 5 Notes

Mon, Feb 14

Today we started a review of linear algebra. We began with this warm-up problem:

  1. Suppose you have a jar full of pennies, nickels, dimes, and quarters. There are 80 coins in the jar, and the total value of the coins is $10.00. If there are twice as many dimes as quarters, then how many of each type of coin are in the jar?

You can answer this question by row-reducing the augmented matrix

\[\left( \begin{array}{cccc|c} 1 & 1 & 1 & 1 & 80 \\ 1 & 5 & 10 & 25 & 1000 \\ 0 & 0 & 1 & -2 & 0\end{array}\right)\]

which can be put into echelon form

\[\left( \begin{array}{cccc|c} 1 & 1 & 1 & 1 & 80 \\ 0 & 4 & 9 & 24 & 920 \\ 0 & 0 & 1 & -2 & 0\end{array}\right)\]

Then the variables \(p, n\), and \(d\) are pivot variables, and the last variable \(q\) is a free variable.
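If you want to double-check the row reduction, SymPy can compute the reduced row echelon form directly (a quick sketch, not something we did in class):

```python
from sympy import Matrix, Rational

# Augmented matrix for the coin problem (columns: p, n, d, q | total)
M = Matrix([[1, 1,  1,  1,   80],
            [1, 5, 10, 25, 1000],
            [0, 0,  1, -2,    0]])

R, pivots = M.rref()  # reduced row echelon form and pivot columns

# Pivots land in the p, n, d columns, so q (column 3) is free:
#   p = -150 + (15/2) q,   n = 230 - (21/2) q,   d = 2 q
print(R, pivots)
```

Requiring the coin counts to be nonnegative whole numbers then forces \(q = 20\), giving 0 pennies, 20 nickels, 40 dimes, and 20 quarters.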


Major Linear Algebra Concepts and Terminology

A subspace of a vector space (like \(\mathbb{R}^n\)) is a set that is closed under addition and scaling. The span of a set is the smallest subspace that contains the set. A set \(S\) in a vector space is linearly independent if you cannot express the zero vector as a non-trivial linear combination of the vectors in \(S\). A basis is a linearly independent set that spans the whole space. The dimension of a vector space is the number of elements in a basis.
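A practical way to test linear independence numerically is to compare the rank of the matrix whose columns are the vectors against the number of vectors. Here is a sketch using NumPy (the example vectors are my own, not from class):

```python
import numpy as np

# Columns are candidate vectors in R^3; note v3 = v1 + v2
V = np.column_stack([[1, 0, 1],
                     [0, 1, 1],
                     [1, 1, 2]])

rank = np.linalg.matrix_rank(V)
# rank == 2 < 3 columns, so the set is linearly dependent;
# the first two columns alone have rank 2, so they are independent
print(rank)
```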

A matrix \(A \in \mathbb{R}^{m \times n}\) defines a linear transformation with domain \(\mathbb{R}^n\), and its range is a subspace of \(\mathbb{R}^m\) (\(\mathbb{R}^m\) is the codomain, but it might not be the range). The matrix \(A\) has four fundamental subspaces: the range (column space) \(\operatorname{range}(A) \subseteq \mathbb{R}^m\), the nullspace \(\operatorname{nullspace}(A) \subseteq \mathbb{R}^n\), the row space \(\operatorname{range}(A^T) \subseteq \mathbb{R}^n\), and the left nullspace \(\operatorname{nullspace}(A^T) \subseteq \mathbb{R}^m\).

For a subspace \(V \subseteq \mathbb{R}^n\), the orthogonal complement of \(V\) is the subspace \[V^\perp = \{ x \in \mathbb{R}^n : x^T y = 0 \text{ for all } y \in V \}.\] Any vector in \(\mathbb{R}^n\) can be expressed uniquely as the sum of a vector in \(V\) and a vector in \(V^\perp\) and \(\dim V + \dim V^\perp = n\).
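The decomposition \(x = x_V + x_{V^\perp}\) can be computed with an orthogonal projection. A sketch in NumPy, using QR factorization to get an orthonormal basis for \(V\) (the choice of \(V\) and \(x\) here is just for illustration):

```python
import numpy as np

# V = span of two vectors in R^3; the columns of Q are an orthonormal basis for V
B = np.column_stack([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 1.0]])
Q, _ = np.linalg.qr(B)

x = np.array([3.0, 1.0, 2.0])
x_V = Q @ (Q.T @ x)   # orthogonal projection of x onto V
x_perp = x - x_V      # the component in V-perp

print(x_V, x_perp)
```

By construction `x_V + x_perp` recovers `x`, and `x_perp` is orthogonal to every vector in \(V\).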

The Fundamental Theorem of Linear Algebra. For \(A \in \mathbb{R}^{m \times n}\), \[\dim \operatorname{range}(A) = \dim \operatorname{range}(A^T) = \# \text{ of pivots of } A.\] Furthermore, \[\operatorname{range}(A) = \operatorname{nullspace}(A^T)^\perp \text{ and } \operatorname{range}(A^T) = \operatorname{nullspace}(A)^\perp.\]

Note: The number of pivots of \(A\) is called the rank of \(A\).
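These rank and orthogonality statements are easy to verify numerically. A sketch with NumPy and SciPy on a small example matrix of my own (rank 2, since the rows are an arithmetic progression):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

r = np.linalg.matrix_rank(A)   # rank = number of pivots
N = null_space(A)              # orthonormal basis for nullspace(A)
M = null_space(A.T)            # orthonormal basis for nullspace(A^T)

# rank(A) == rank(A^T), and rank + nullity = n
print(r, N.shape[1], M.shape[1])
```

Here `A @ N` is (numerically) zero, which says the rows of \(A\), i.e. \(\operatorname{range}(A^T)\), are orthogonal to \(\operatorname{nullspace}(A)\); likewise `A.T @ M` is zero, so the columns of \(A\) are orthogonal to \(\operatorname{nullspace}(A^T)\).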


Wed, Feb 16

Today we introduced the LU decomposition of a matrix. Here is a YouTube video on LU decomposition. We did the following examples in class:

  1. \(\displaystyle\begin{pmatrix} 1 & 1 & 1 & 1 \\ 2 & 2 & 5 & 3 \\ -1 & -1 & 14 & 4 \end{pmatrix}\).
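The elimination for this matrix can be checked with a short script. This is a sketch (not from class) of LU without row swaps: it records each multiplier in \(L\) and skips a column when the entries below the pivot row are already zero, which is enough for this example but would fail on a matrix that genuinely needs pivoting.

```python
import numpy as np

A = np.array([[ 1.0,  1.0,  1.0, 1.0],
              [ 2.0,  2.0,  5.0, 3.0],
              [-1.0, -1.0, 14.0, 4.0]])

def lu_no_pivot(A):
    """Doolittle-style LU without row swaps: returns L (m x m, unit lower
    triangular) and U (m x n, echelon form) with A = L @ U."""
    m, n = A.shape
    L = np.eye(m)
    U = A.astype(float).copy()
    row = 0
    for col in range(n):
        # If the would-be pivot is zero we skip the column; this assumes
        # everything below it is also zero (no row swaps performed).
        if row >= m or U[row, col] == 0:
            continue
        for i in range(row + 1, m):
            mult = U[i, col] / U[row, col]
            L[i, row] = mult
            U[i, :] -= mult * U[row, :]
        row += 1
    return L, U

L, U = lu_no_pivot(A)
print(L)
print(U)
```

For this matrix the multipliers are 2, \(-1\), and then 5, and the last row of \(U\) is zero, so \(A\) has rank 2.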

Another good example is this one (which is covered in the YouTube link above):

  1. \(\displaystyle\begin{pmatrix} 1 & 0 & 1 \\ a & a & a \\ b & b & a \end{pmatrix}\).
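For this one the multipliers involve the parameters: \(R_2 \mathrel{-}= a R_1\), \(R_3 \mathrel{-}= b R_1\), then \(R_3 \mathrel{-}= \frac{b}{a} R_2\), which assumes \(a \neq 0\) so the second pivot is usable (and the third pivot \(a - b\) vanishes when \(a = b\)). The resulting factors can be verified symbolically with SymPy (a sketch, not from class):

```python
from sympy import Matrix, symbols, zeros

a, b = symbols('a b', nonzero=True)  # assume a != 0 for the second pivot

M = Matrix([[1, 0, 1],
            [a, a, a],
            [b, b, a]])

# Factors read off from the elimination steps above
L = Matrix([[1,   0,   0],
            [a,   1,   0],
            [b, b/a,   1]])
U = Matrix([[1, 0,     1],
            [0, a,     0],
            [0, 0, a - b]])

print(L * U)  # should reproduce M
```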

After those examples, we worked through an in-class lab.


Fri, Feb 18

Today we talked about what it means for a linear system to be ill-conditioned: for a system \(Ax = b\), it means that a small change in the vector \(b\) can produce a large change in the solution vector \(x\).

Consider the following matrix:

\[A = \begin{pmatrix} 1 & 1 \\ 1 & 1.001 \end{pmatrix}\]

Let \(y = \begin{pmatrix} 2 \\ 2 \end{pmatrix}\) and \(z = \begin{pmatrix} 2 \\ 2.001 \end{pmatrix}\).

  1. Solve \(Ax = y\) and \(Ax = z\). Notice that even though \(y\) and \(z\) are very close, the two solutions are not close at all. A matrix \(A\) with the property that solutions of \(Ax = b\) are very sensitive to small changes in \(b\) is called ill-conditioned.
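You can see the sensitivity directly by solving both systems numerically (a NumPy sketch, not from class):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.001]])

y = np.array([2.0, 2.0])
z = np.array([2.0, 2.001])

x_y = np.linalg.solve(A, y)  # solution of Ax = y: (2, 0)
x_z = np.linalg.solve(A, z)  # solution of Ax = z: (1, 1)

print(x_y, x_z)
```

The condition number `np.linalg.cond(A)` is roughly 4000, which quantifies how much a relative change in \(b\) can be amplified in the solution \(x\).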

Consider the matrix \(B = \begin{pmatrix} 0.001 & 1 \\ 1 & 1 \end{pmatrix}\). This matrix is not ill-conditioned itself, but you have to be careful using row reduction to solve equations with this matrix:

  1. Find the LU-decomposition for \(B\). Use the decomposition to solve \(Bx = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.\)

The LU-decomposition is \(L = \begin{pmatrix} 1 & 0 \\ 1000 & 1 \end{pmatrix}\) and \(U = \begin{pmatrix} 0.001 & 1 \\ 0 & -999 \end{pmatrix}\).

To solve the system,

  1. First, solve \(Ly = \begin{pmatrix} 1 \\ 2 \end{pmatrix}\) to get \(y = \begin{pmatrix} 1 \\ -998 \end{pmatrix}\).

  2. Then, solve \(Ux = y\). You should get \(x = \begin{pmatrix} 1.001001 \\ 0.998999 \end{pmatrix}\) by solving the system \[0.001x_1 + x_2 = 1,\] \[-999 x_2 = -998.\] When solving this system, it is easy to make a rounding mistake and get \(x_2 = 1\) instead of \(\frac{998}{999}\). If that happens, then back substitution gives \(x_1 = 0\) instead of its actual value \(\frac{1000}{999} \approx 1.001\).

So although \(B\) is not ill-conditioned, both \(L\) and \(U\) are. If you use \(L\) and \(U\) to solve \(Bx = b\) and make a rounding mistake along the way, that mistake can blow up because the intermediate matrices \(L\) and \(U\) are ill-conditioned.
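A quick numerical check (a NumPy sketch, not from class) confirms both claims: \(B\) has a small condition number while \(L\) and \(U\) have huge ones, and rounding \(x_2\) to 1 wipes out \(x_1\) entirely:

```python
import numpy as np

B = np.array([[0.001, 1.0],
              [1.0,   1.0]])
L = np.array([[1.0,    0.0],
              [1000.0, 1.0]])
U = np.array([[0.001,    1.0],
              [0.0,   -999.0]])

# B itself is well-conditioned, but its LU factors are not
cond_B, cond_L, cond_U = (np.linalg.cond(M) for M in (B, L, U))
print(cond_B, cond_L, cond_U)

# Simulate the rounding mistake from class: take x2 = 1 instead of 998/999
x2_exact = 998.0 / 999.0
x1_exact = (1.0 - x2_exact) / 0.001  # about 1.001
x1_rounded = (1.0 - 1.0) / 0.001     # exactly 0 -- the error blew up
print(x1_exact, x1_rounded)
```

Row reduction with partial pivoting (swapping the rows of \(B\) before eliminating) avoids the tiny pivot 0.001 and sidesteps this problem, which is why library routines pivot by default.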