First we give the definition of the Reduced Row Echelon Form of a matrix.
Definition. Each zero matrix is in Reduced Row Echelon Form. A nonzero matrix is in Reduced Row Echelon Form if it satisfies the following three conditions: (1) all nonzero rows are above any zero rows; (2) the leading entry (that is, the leftmost nonzero entry) of each nonzero row is $1$ and it lies in a column strictly to the right of the leading entry of the row above it; (3) each leading $1$ is the only nonzero entry in its column.
For a given matrix $A,$ the objective of the row reduction algorithm is to find the unique matrix in RREF that is row equivalent to $A.$
It is always a good idea to keep a record of the elementary row operations that have been used to achieve the RREF. Below we give a specific matrix $A$ and its Reduced Row Echelon Form (RREF):
\begin{equation} \label{rref} \tag{RREF} \require{bbox} A = \left[\! \begin{array}{rrrrrr} \bbox[yellow]{\begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array}} & \bbox[yellow]{\begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \end{array}} & \bbox[lightblue]{\begin{array}{c} 4 \\ 3 \\ 2 \\ 1 \end{array}} & \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 1 \\ 0 \end{array}} & \bbox[lightblue]{\begin{array}{c} 6 \\ 4 \\ 8 \\ 6 \end{array}} \end{array} \!\right] \sim \cdots \sim \left[\! \begin{array}{rrrrrr} \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}} & \bbox[yellow]{\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}} & \bbox[lightblue]{\begin{array}{r} -1 \\ 5 \\ 0 \\ 0 \end{array}} & \bbox[yellow]{\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}} & \bbox[lightblue]{\begin{array}{c} 1 \\ 2 \\ 3 \\ 0 \end{array}} \end{array} \!\right]. \end{equation}The columns of the RREF whose only nonzero entry is a $1,$ that is, the columns of the RREF that are also columns of the identity matrix $I_4,$ are called the pivot columns of the RREF. The corresponding columns of the given matrix $A$ are called the pivot columns of $A.$ In the above presentation the pivot columns are colored yellow.
In the above presentation, the nonpivot columns of the RREF of $A$ and the corresponding nonpivot columns of $A$ are colored light blue. When we use the RREF to solve the homogeneous equation $A\mathbf{x} = \mathbf{0},$ the light blue columns correspond to the free variables.
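For readers who want to check this reduction by machine, here is a minimal sketch using SymPy (our choice of tool here, not one prescribed by the text; SymPy works in exact rational arithmetic, so there are no rounding issues). The method `Matrix.rref` returns the RREF together with the 0-based indices of the pivot columns.

```python
# A minimal check of the reduction above using SymPy.
from sympy import Matrix

A = Matrix([[1, 1, 4, 1, 6],
            [2, 1, 3, 0, 4],
            [3, 1, 2, 1, 8],
            [4, 1, 1, 0, 6]])

F, pivots = A.rref()   # F is the RREF of A; pivots are 0-based column indices
print(F)               # matches the matrix on the right in (RREF)
print(pivots)          # (0, 1, 3): the first, second and fourth columns
```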
The reduced row echelon form of the matrix $A$ provides a treasure trove of information about the matrix $A.$
Performing elementary row operations is equivalent to multiplying $A$ on the left by elementary matrices. Call $R$ the $4\!\times\!4$ matrix which is the product of the elementary matrices that we used to get the RREF of $A$. Then \begin{equation} \label{eq:RARR} \tag{RARR} \underbrace{\left[\!\begin{array}{rrrr} -\frac{1}{2} & 0 & \frac{1}{2} & 0 \\ 1 & 1 & -1 & 0 \\ \frac{1}{2} & -1 & \frac{1}{2} & 0 \\ 1 & -1 & -1 & 1 \\ \end{array} \!\right]}_{R} \underbrace{\left[\!\begin{array}{ccccc} 1 & 1 & 4 & 1 & 6 \\ 2 & 1 & 3 & 0 & 4 \\ 3 & 1 & 2 & 1 & 8 \\ 4 & 1 & 1 & 0 & 6 \\ \end{array} \!\right]}_{A} = \underbrace{\left[\!\begin{array}{ccrcrc} 1 & 0 & -1 & 0 & 1 \\ 0 & 1 & 5 & 0 & 2 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 \\ \end{array} \!\right]}_{RREF} \end{equation}
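One does not need to record the individual row operations to find such an $R$: row reducing the augmented matrix $[A \mid I_4]$ applies the same operations to $I_4,$ turning it into a valid $R.$ Here is a sketch in SymPy; note that the $R$ obtained this way satisfies $RA = \mathrm{RREF}$ of $A$ but need not coincide with the matrix $R$ displayed above, since such an $R$ is not unique when the RREF of $A$ has a zero row.

```python
# Finding a matrix R with R*A = RREF(A) by row reducing [A | I4].
# (A sketch; this R may differ from the R shown above.)
from sympy import Matrix, eye

A = Matrix([[1, 1, 4, 1, 6],
            [2, 1, 3, 0, 4],
            [3, 1, 2, 1, 8],
            [4, 1, 1, 0, 6]])

aug = Matrix.hstack(A, eye(4)).rref()[0]
R = aug[:, 5:]                  # the block that started out as I4
assert R * A == A.rref()[0]     # R*A is indeed the RREF of A
```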
The pivot columns of $A$ are linearly independent.
Performing on the pivot columns of $A$ the identical elementary row operations as those performed on the matrix $A,$ we get the following row equivalent matrices: \begin{equation*} \require{bbox} \left[\! \begin{array}{rrr} \bbox[yellow]{\begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array}} & \bbox[yellow]{\begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \end{array}} & \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 1 \\ 0 \end{array}} \end{array} \!\right] \sim \cdots \sim \left[\! \begin{array}{rrr} \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}} & \bbox[yellow]{\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}} & \bbox[yellow]{\begin{array}{r} 0 \\ 0 \\ 1 \\ 0 \end{array}} \end{array} \!\right]. \end{equation*} The matrix on the right shows that the homogeneous vector equation whose coefficient matrix consists of the pivot columns of $A$ has only the trivial solution. Hence the pivot columns of $A$ are linearly independent.
The nonpivot columns of $A$ are linear combinations of the pivot columns of $A.$
The third column of $A$. Performing on the first three columns of $A$ the identical elementary row operations as those performed on the matrix $A,$ we get the following row equivalent matrices: \begin{equation} \label{rr1} \tag{RREF1} \require{bbox} \left[\! \begin{array}{rrr} \bbox[yellow]{\begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array}} & \bbox[yellow]{\begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \end{array}} & \bbox[lightblue]{\begin{array}{c} 4 \\ 3 \\ 2 \\ 1 \end{array}} \end{array} \!\right] \sim \cdots \sim \left[\! \begin{array}{rrr} \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}} & \bbox[yellow]{\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}} & \bbox[lightblue]{\begin{array}{r} -1 \\ 5 \\ 0 \\ 0 \end{array}} \end{array} \!\right]. \end{equation} We can view the matrix on the left as the augmented matrix of a nonhomogeneous linear vector equation. The row reduction $\eqref{rr1}$ gives us the solution of that vector equation: the third column of $A$ expressed as a linear combination of the first two columns. Notice that the coefficients in the linear combination come from the third column of the RREF of $A.$ \begin{equation*} \require{bbox} \left[\! \begin{array}{r} \bbox[lightblue]{\begin{array}{c} 4 \\ 3 \\ 2 \\ 1 \end{array}} \end{array} \!\right] = \bbox[lightblue]{(-1)} \left[\! \begin{array}{r} \bbox[yellow]{\begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array}} \end{array} \!\right] + \bbox[lightblue]{(5)} \left[\! \begin{array}{r} \bbox[yellow]{\begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \end{array}} \end{array} \!\right] \end{equation*}
The fifth column of $A$. Performing on the first, the second, the fourth and the fifth column of $A$ the identical elementary row operations as those performed on the matrix $A,$ we get the following row equivalent matrices: \begin{equation} \label{rr2} \tag{RREF2} \require{bbox} \left[\! \begin{array}{rrrr} \bbox[yellow]{\begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array}} & \bbox[yellow]{\begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \end{array}} & \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 1 \\ 0 \end{array}} & \bbox[lightblue]{\begin{array}{c} 6 \\ 4 \\ 8 \\ 6 \end{array}} \end{array} \!\right] \sim \cdots \sim \left[\! \begin{array}{rrrr} \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}} & \bbox[yellow]{\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}} & \bbox[yellow]{\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}} & \bbox[lightblue]{\begin{array}{r} 1 \\ 2 \\ 3 \\ 0 \end{array}} \end{array} \!\right]. \end{equation} We can view the matrix on the left as the augmented matrix of a nonhomogeneous linear vector equation. The row reduction $\eqref{rr2}$ gives us the solution of that vector equation: the fifth column of $A$ expressed as a linear combination of the first, the second and the fourth column of $A.$ Notice that the coefficients in the linear combination come from the fifth column of the RREF of $A.$ \begin{equation*} \require{bbox} \left[\! \begin{array}{r} \bbox[lightblue]{\begin{array}{c} 6 \\ 4 \\ 8 \\ 6 \end{array}} \end{array} \!\right] = \bbox[lightblue]{(1)} \left[\! \begin{array}{r} \bbox[yellow]{\begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array}} \end{array} \!\right] + \bbox[lightblue]{(2)} \left[\! \begin{array}{r} \bbox[yellow]{\begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \end{array}} \end{array} \!\right] + \bbox[lightblue]{(3)} \left[\! \begin{array}{r} \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 1 \\ 0 \end{array}} \end{array} \!\right] \end{equation*}
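Both of the displayed linear combinations can be checked mechanically: take the coefficients from a nonpivot column of the RREF and apply them to the pivot columns of $A.$ A short sketch, again assuming SymPy:

```python
# Each nonpivot column of A equals the linear combination of the pivot
# columns of A whose coefficients sit in the same column of the RREF.
from sympy import Matrix, zeros

A = Matrix([[1, 1, 4, 1, 6],
            [2, 1, 3, 0, 4],
            [3, 1, 2, 1, 8],
            [4, 1, 1, 0, 6]])
F, pivots = A.rref()              # pivots == (0, 1, 3)

for j in (2, 4):                  # 0-based indices of the nonpivot columns
    combo = zeros(4, 1)
    for i, p in enumerate(pivots):
        combo += F[i, j] * A.col(p)   # e.g. (-1)*a1 + 5*a2 + 0*a4 for j == 2
    assert combo == A.col(j)
```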
The pivot columns of $A$ form a basis for the column space of $A.$ Hence $\dim \operatorname{Col}(A) = 3.$ The column space of $A$ is a three-dimensional subspace of $\mathbb{R}^4.$
Let us introduce notation for the columns of $A$: \[ \require{bbox} \bbox[yellow]{\mathbf{a}_1} = \left[\! \begin{array}{r} \bbox[yellow]{\begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array}} \end{array} \!\right], \quad \bbox[yellow]{\mathbf{a}_2} = \left[\! \begin{array}{r} \bbox[yellow]{\begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \end{array}} \end{array} \!\right], \quad \bbox[lightblue]{\mathbf{a}_3} = \left[\! \begin{array}{r} \bbox[lightblue]{\begin{array}{c} 4 \\ 3 \\ 2 \\ 1 \end{array}} \end{array} \!\right], \quad \bbox[yellow]{\mathbf{a}_4} = \left[\! \begin{array}{r} \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 1 \\ 0 \end{array}} \end{array} \!\right], \quad \bbox[lightblue]{\mathbf{a}_5} = \left[\! \begin{array}{r} \bbox[lightblue]{\begin{array}{c} 6 \\ 4 \\ 8 \\ 6 \end{array}} \end{array} \!\right] \] Recall that the column space of $A$ is the span of the columns of $A$: \[ \operatorname{Col}(A) = \operatorname{Span}\bigl\{\bbox[yellow]{\mathbf{a}_1}, \bbox[yellow]{\mathbf{a}_2}, \bbox[lightblue]{\mathbf{a}_3}, \bbox[yellow]{\mathbf{a}_4}, \bbox[lightblue]{\mathbf{a}_5} \bigr\}. \]
Recall that \[ \bbox[lightblue]{\mathbf{a}_3} = (-1)\bbox[yellow]{\mathbf{a}_1} + 5 \bbox[yellow]{\mathbf{a}_2} + 0 \bbox[yellow]{\mathbf{a}_4}, \quad \bbox[lightblue]{\mathbf{a}_5} = 1\bbox[yellow]{\mathbf{a}_1} + 2 \bbox[yellow]{\mathbf{a}_2} + 3 \bbox[yellow]{\mathbf{a}_4}. \] Can you justify that the following equality of two spans is a consequence of the preceding two equalities? \[ \operatorname{Span}\bigl\{\bbox[yellow]{\mathbf{a}_1}, \bbox[yellow]{\mathbf{a}_2}, \bbox[lightblue]{\mathbf{a}_3}, \bbox[yellow]{\mathbf{a}_4}, \bbox[lightblue]{\mathbf{a}_5} \bigr\} = \operatorname{Span}\bigl\{\bbox[yellow]{\mathbf{a}_1}, \bbox[yellow]{\mathbf{a}_2}, \bbox[yellow]{\mathbf{a}_4}\bigr\}. \] Hence the pivot columns of $A$ span the column space of $A$: \[ \operatorname{Col}(A) = \operatorname{Span}\bigl\{\bbox[yellow]{\mathbf{a}_1}, \bbox[yellow]{\mathbf{a}_2}, \bbox[yellow]{\mathbf{a}_4}\bigr\}. \]
Since the pivot columns are linearly independent and the pivot columns span $\operatorname{Col}(A),$ the pivot columns of the matrix $A$ form a basis for the column space of $A.$
The linear combinations that we found in the Second Reading can be summarized as one matrix multiplication: The matrix product of the $4\!\times\!3$ matrix consisting of the pivot columns of $A$ with the $3\!\times\!5$ matrix consisting of the nonzero rows of the RREF of $A$ equals the given matrix $A.$ \begin{equation*} \require{bbox} \left[\! \begin{array}{rrr} \bbox[yellow]{\begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array}} & \bbox[yellow]{\begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \end{array}} & \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 1 \\ 0 \end{array}} \end{array} \!\right] \left[\! \begin{array}{rrrrr} \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 0 \end{array}} & \bbox[yellow]{\begin{array}{c} 0 \\ 1 \\ 0 \end{array}} & \bbox[lightblue]{\begin{array}{r} -1 \\ 5 \\ 0 \end{array}} & \bbox[yellow]{\begin{array}{r} 0 \\ 0 \\ 1 \end{array}} & \bbox[lightblue]{\begin{array}{c} 1 \\ 2 \\ 3 \end{array}}\end{array} \!\right] = \left[\! \begin{array}{rrrrr} \bbox[yellow]{\begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array}} & \bbox[yellow]{\begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \end{array}} & \bbox[lightblue]{\begin{array}{c} 4 \\ 3 \\ 2 \\ 1 \end{array}} & \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 1 \\ 0 \end{array}} & \bbox[lightblue]{\begin{array}{c} 6 \\ 4 \\ 8 \\ 6 \end{array}} \end{array} \!\right] = A. \end{equation*}
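In other words, the RREF yields a factorization of $A$ into a product of two smaller matrices, each of full rank. The sketch below (SymPy, as before) rebuilds $A$ from its pivot columns and the nonzero rows of its RREF.

```python
# A = C * B, where C holds the pivot columns of A (4x3) and B holds the
# nonzero rows of the RREF of A (3x5).
from sympy import Matrix

A = Matrix([[1, 1, 4, 1, 6],
            [2, 1, 3, 0, 4],
            [3, 1, 2, 1, 8],
            [4, 1, 1, 0, 6]])
F, pivots = A.rref()

C = A[:, list(pivots)]      # pivot columns of A
B = F[:len(pivots), :]      # nonzero rows of the RREF of A
assert C * B == A
```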
We can also read the matrix multiplication displayed above by focusing on the rows. This reading tells us that the rows of the matrix $A$ are linear combinations of the nonzero rows of the RREF of $A.$ \begin{equation*} \require{bbox} \left[\! \begin{array}{rrr} \bbox[#33FFFF]{1} & \bbox[#77FFFF]{1} & \bbox[#BBFFFF]{1} \\ \bbox[#33FFFF]{2} & \bbox[#77FFFF]{1} & \bbox[#BBFFFF]{0} \\ \bbox[#33FFFF]{3} & \bbox[#77FFFF]{1} & \bbox[#BBFFFF]{1} \\ \bbox[#33FFFF]{4} & \bbox[#77FFFF]{1} & \bbox[#BBFFFF]{0} \end{array} \!\right] \left[\! \begin{array}{c} \bbox[#33FF33]{\begin{array}{rrrrr} 1 & 0 & -1 & 0 & 1 \end{array}} \\ \bbox[#77FF77]{\begin{array}{rrrrr} 0 & 1 & \phantom{-}5 & 0 & 2 \end{array}} \\ \bbox[#BBFFBB]{\begin{array}{rrrrr} 0 & 0 & \phantom{-}0 & 1 & 3 \end{array}} \end{array} \!\right] = \left[\! \begin{array}{c} \bbox[#00FF88]{\begin{array}{ccccc} 1 & 1 & 4 & 1 & 6 \end{array}} \\ \bbox[#00FF88]{\begin{array}{ccccc} 2 & 1 & 3 & 0 & 4 \end{array}} \\ \bbox[#00FF88]{\begin{array}{ccccc} 3 & 1 & 2 & 1 & 8 \end{array}} \\ \bbox[#00FF88]{\begin{array}{ccccc} 4 & 1 & 1 & 0 & 6 \end{array}} \end{array} \!\right] = A. \end{equation*}
We illustrate the above claim by showing that the third row of $A$ is a linear combination of the nonzero rows of the RREF of $A$ with the coefficients coming from the third row of the matrix consisting of the pivot columns of $A$: \begin{align*} & \bbox[#33FFFF]{3} \bigl[ \bbox[#33FF33]{\begin{array}{rrrrr} 1 & 0 & -1 & 0 & 1 \end{array}} \bigr] \\ & \qquad\qquad + \bbox[#77FFFF]{1} \bigl[\bbox[#77FF77]{\begin{array}{rrrrr} 0 & 1 & \phantom{-}5 & 0 & 2 \end{array}}\bigr] \\ & \qquad\qquad\qquad\qquad + \bbox[#BBFFFF]{1} \bigl[ \bbox[#BBFFBB]{\begin{array}{rrrrr} 0 & 0 & \phantom{-}0 & 1 & 3 \end{array}} \bigr] = \bigl[ \bbox[#00FF88]{\begin{array}{ccccc} 3 & 1 & 2 & 1 & 8 \end{array}} \bigr] \end{align*} Notice that in the next reading we will write the rows of $A$ as vectors in $\mathbb{R}^5,$ that is, as their transposes. This is a common practice since we want the null space of $A$ and the row space of $A$ both to be in the same vector space $\mathbb{R}^5.$
The rows of $A$ are linear combinations of the nonzero rows of the RREF of $A.$
It is convenient to introduce notation for the rows of $A$ and the rows of the RREF of $A.$ As mentioned at the end of the preceding reading, we consider the rows of $A$ as vectors in $\mathbb{R}^5.$ That is, we identify rows with their transposes. We introduce the following notation for the rows of $A$ \[ \mathbf{r}_1 = \left[\! \begin{array}{c} 1 \\ 1 \\ 4 \\ 1 \\ 6 \end{array} \!\right], \quad \mathbf{r}_2 = \left[\! \begin{array}{r} 2 \\ 1 \\ 3 \\ 0 \\ 4 \end{array} \!\right], \quad \mathbf{r}_3 = \left[\! \begin{array}{r} 3 \\ 1 \\ 2 \\ 1 \\ 8 \end{array} \!\right], \quad \mathbf{r}_4 = \left[\! \begin{array}{r} 4 \\ 1 \\ 1 \\ 0 \\ 6 \end{array} \!\right], \] and the following notation for the rows of the RREF of $A$ \[ \mathbf{q}_1 = \left[\! \begin{array}{r} 1 \\ 0 \\ -1 \\ 0 \\ 1 \end{array} \!\right], \quad \mathbf{q}_2 = \left[\! \begin{array}{c} 0 \\ 1 \\ 5 \\ 0 \\ 2 \end{array} \!\right], \quad \mathbf{q}_3 = \left[\! \begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \\ 3 \end{array} \!\right]. \]
The first row of $A$. As mentioned in the previous reading, we write rows as their transposes, that is, as vectors in $\mathbb{R}^5.$ The equation below states that the first row of $A$ is a linear combination of the nonzero rows of the RREF of $A.$ \begin{equation*} \require{bbox} \left[\! \begin{array}{r} \bbox[#00FF88]{\begin{array}{c} 1 \\ 1 \\ 4 \\ 1 \\ 6 \end{array}} \end{array} \!\right] = \bbox[#33FFFF]{(1)} \left[\! \begin{array}{r} \bbox[#33FF33]{\begin{array}{r} 1 \\ 0 \\ -1 \\ 0 \\ 1 \end{array}} \end{array} \!\right] + \bbox[#77FFFF]{(1)} \left[\! \begin{array}{r} \bbox[#77FF77]{\begin{array}{c} 0 \\ 1 \\ 5 \\ 0 \\ 2 \end{array}} \end{array} \!\right] + \bbox[#BBFFFF]{(1)} \left[\! \begin{array}{r} \bbox[#BBFFBB]{\begin{array}{r} 0 \\ 0 \\ 0 \\ 1 \\ 3 \end{array}} \end{array} \!\right] \end{equation*} Or, briefly, \[ \mathbf{r}_1 = (1) \mathbf{q}_1 + (1) \mathbf{q}_2 + (1) \mathbf{q}_3. \]
The second row of $A$. \begin{equation*} \require{bbox} \left[\! \begin{array}{r} \bbox[#00FF88]{\begin{array}{c} 2 \\ 1 \\ 3 \\ 0 \\ 4 \end{array}} \end{array} \!\right] = \bbox[#33FFFF]{(2)} \left[\! \begin{array}{r} \bbox[#33FF33]{\begin{array}{r} 1 \\ 0 \\ -1 \\ 0 \\ 1 \end{array}} \end{array} \!\right] + \bbox[#77FFFF]{(1)} \left[\! \begin{array}{r} \bbox[#77FF77]{\begin{array}{c} 0 \\ 1 \\ 5 \\ 0 \\ 2 \end{array}} \end{array} \!\right] + \bbox[#BBFFFF]{(0)} \left[\! \begin{array}{r} \bbox[#BBFFBB]{\begin{array}{r} 0 \\ 0 \\ 0 \\ 1 \\ 3 \end{array}} \end{array} \!\right] \end{equation*} Or, briefly, \[ \mathbf{r}_2 = (2) \mathbf{q}_1 + (1) \mathbf{q}_2 + (0) \mathbf{q}_3. \]
The third row of $A$. We wrote this linear combination at the end of the Fourth Reading as a linear combination of rows. Keep in mind that when we write the rows separately, we write them as vectors in $\mathbb{R}^5.$ \begin{equation*} \require{bbox} \left[\! \begin{array}{r} \bbox[#00FF88]{\begin{array}{c} 3 \\ 1 \\ 2 \\ 1 \\ 8 \end{array}} \end{array} \!\right] = \bbox[#33FFFF]{(3)} \left[\! \begin{array}{r} \bbox[#33FF33]{\begin{array}{r} 1 \\ 0 \\ -1 \\ 0 \\ 1 \end{array}} \end{array} \!\right] + \bbox[#77FFFF]{(1)} \left[\! \begin{array}{r} \bbox[#77FF77]{\begin{array}{c} 0 \\ 1 \\ 5 \\ 0 \\ 2 \end{array}} \end{array} \!\right] + \bbox[#BBFFFF]{(1)} \left[\! \begin{array}{r} \bbox[#BBFFBB]{\begin{array}{r} 0 \\ 0 \\ 0 \\ 1 \\ 3 \end{array}} \end{array} \!\right] \end{equation*} Or, briefly, \[ \mathbf{r}_3 = (3) \mathbf{q}_1 + (1) \mathbf{q}_2 + (1) \mathbf{q}_3. \]
The fourth row of $A$. \begin{equation*} \require{bbox} \left[\! \begin{array}{r} \bbox[#00FF88]{\begin{array}{c} 4 \\ 1 \\ 1 \\ 0 \\ 6 \end{array}} \end{array} \!\right] = \bbox[#33FFFF]{(4)} \left[\! \begin{array}{r} \bbox[#33FF33]{\begin{array}{r} 1 \\ 0 \\ -1 \\ 0 \\ 1 \end{array}} \end{array} \!\right] + \bbox[#77FFFF]{(1)} \left[\! \begin{array}{r} \bbox[#77FF77]{\begin{array}{c} 0 \\ 1 \\ 5 \\ 0 \\ 2 \end{array}} \end{array} \!\right] + \bbox[#BBFFFF]{(0)} \left[\! \begin{array}{r} \bbox[#BBFFBB]{\begin{array}{r} 0 \\ 0 \\ 0 \\ 1 \\ 3 \end{array}} \end{array} \!\right] \end{equation*} Or, briefly, \[ \mathbf{r}_4 = (4) \mathbf{q}_1 + (1) \mathbf{q}_2 + (0) \mathbf{q}_3. \]
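All four row identities can be verified at once: the coefficients of the $i$-th row of $A$ with respect to $\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3$ form the $i$-th row of the matrix of pivot columns from the factorization above. A sketch, assuming SymPy as before:

```python
# Row i of A equals (row i of C) times B, a combination of q1, q2, q3.
from sympy import Matrix

A = Matrix([[1, 1, 4, 1, 6],
            [2, 1, 3, 0, 4],
            [3, 1, 2, 1, 8],
            [4, 1, 1, 0, 6]])
F, pivots = A.rref()
C = A[:, list(pivots)]      # coefficient rows: (1,1,1), (2,1,0), (3,1,1), (4,1,0)
B = F[:3, :]                # rows are q1, q2, q3 (as row vectors)

for i in range(4):
    assert C[i, :] * B == A[i, :]   # e.g. r2 = 2*q1 + 1*q2 + 0*q3
```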
The nonzero rows of the RREF of $A$ are linearly independent.
Here is the proof of linear independence. Let $\alpha_1, \alpha_2, \alpha_3$ be real numbers and assume that \begin{equation*} \alpha_1 \left[\! \begin{array}{r} 1 \\ 0 \\ -1 \\ 0 \\ 1 \end{array} \!\right] + \alpha_2 \left[\! \begin{array}{r} 0 \\ 1 \\ 5 \\ 0 \\ 2 \end{array} \!\right] + \alpha_3 \left[\! \begin{array}{r} 0 \\ 0 \\ 0 \\ 1 \\ 3 \end{array} \!\right] = \left[\! \begin{array}{r} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{array} \!\right]. \end{equation*} Adding the vectors on the left-hand side of the preceding vector equation we get \begin{equation*} \left[\! \begin{array}{c} \alpha_1 \\ \alpha_2 \\ -\alpha_1 + 5 \alpha_2 \\ \alpha_3 \\ \alpha_1 +2 \alpha_2 + 3 \alpha_3 \end{array} \!\right] = \left[\! \begin{array}{r} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{array} \!\right]. \end{equation*} Looking at the first, the second and the fourth component of the vectors in the preceding equality we conclude that $\alpha_1=0,$ $\alpha_2=0,$ and $\alpha_3=0.$
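The proof above can be cross-checked numerically: the vectors $\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3$ are linearly independent exactly when the $5\!\times\!3$ matrix with these columns has rank $3.$ A one-assertion SymPy sketch:

```python
# q1, q2, q3 are linearly independent iff the matrix [q1 q2 q3] has rank 3.
from sympy import Matrix

Q = Matrix([[ 1, 0, 0],
            [ 0, 1, 0],
            [-1, 5, 0],
            [ 0, 0, 1],
            [ 1, 2, 3]])
assert Q.rank() == 3
```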
Each nonzero row of the RREF of $A$ is a linear combination of the rows of $A.$
The preceding claim follows from the row reduction algorithm. How did we construct the RREF of $A$? We constructed the RREF by making linear combinations of the rows of $A.$ Thus each row of the RREF is a linear combination of the rows of $A.$
However, if we want to be specific, and answer which linear combination, then we have to keep a record of the row operations that we performed. The easiest way to keep the record is with the product of the elementary matrices that were used. We did that in the introductory paragraph. We called that equality $\eqref{eq:RARR}$. We repeat that equality below. \begin{equation*} \left[\!\begin{array}{rrrr} -\frac{1}{2} & 0 & \frac{1}{2} & 0 \\ 1 & 1 & -1 & 0 \\ \frac{1}{2} & -1 & \frac{1}{2} & 0 \\ 1 & -1 & -1 & 1 \\ \end{array} \!\right] \left[\!\begin{array}{ccccc} 1 & 1 & 4 & 1 & 6 \\ 2 & 1 & 3 & 0 & 4 \\ 3 & 1 & 2 & 1 & 8 \\ 4 & 1 & 1 & 0 & 6 \\ \end{array} \!\right] = \left[\!\begin{array}{ccrcrc} 1 & 0 & -1 & 0 & 1 \\ 0 & 1 & 5 & 0 & 2 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 \\ \end{array} \!\right] \end{equation*} Focusing on the rows of the matrices in the product, the above matrix equality can be restated as four vector equalities involving the rows of $A$ and the rows of the RREF of $A,$ with the coefficients from the $4\!\times\!4$ matrix $R.$ \begin{alignat*}{4} (-\tfrac{1}{2}) \mathbf{r}_1 &+& (0) \mathbf{r}_2 &+& (\tfrac{1}{2}) \mathbf{r}_3 &+& (0) \mathbf{r}_4 & = \mathbf{q}_1 \\ (1) \mathbf{r}_1 &+& (1) \mathbf{r}_2 &+& (-1) \mathbf{r}_3 &+& (0) \mathbf{r}_4 & = \mathbf{q}_2 \\ (\tfrac{1}{2}) \mathbf{r}_1 &+& (-1) \mathbf{r}_2 &+& (\tfrac{1}{2}) \mathbf{r}_3 &+& (0) \mathbf{r}_4 & = \mathbf{q}_3 \\ (1) \mathbf{r}_1 &+& (-1) \mathbf{r}_2 &+& (-1) \mathbf{r}_3 &+& (1) \mathbf{r}_4 & = \mathbf{0} \end{alignat*}
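These four equalities are straightforward to confirm with the specific matrix $R$ above; here is a SymPy sketch:

```python
# Row i of R times A gives row i of the RREF of A; the fourth row of R
# encodes a nontrivial combination of the rows of A that equals zero.
from sympy import Matrix, Rational

R = Matrix([[Rational(-1, 2),  0, Rational(1, 2), 0],
            [             1,  1,             -1, 0],
            [Rational(1, 2), -1, Rational(1, 2), 0],
            [             1, -1,             -1, 1]])
A = Matrix([[1, 1, 4, 1, 6],
            [2, 1, 3, 0, 4],
            [3, 1, 2, 1, 8],
            [4, 1, 1, 0, 6]])

F = A.rref()[0]
for i in range(4):
    assert R[i, :] * A == F[i, :]
```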
The nonzero rows of the RREF of $A$ form a basis for the row space of $A.$ Hence $\dim \operatorname{Row}(A) = 3.$ The row space of $A$ is a three-dimensional subspace of $\mathbb{R}^5.$
Here is a proof.
I. From the Fifth Reading we see that \[ \mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \mathbf{r}_4 \in \operatorname{Span}\bigl\{\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3 \bigr\}. \] Therefore \[ \operatorname{Row}(A) = \operatorname{Span}\bigl\{\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \mathbf{r}_4\bigr\} \subseteq \operatorname{Span}\bigl\{\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3 \bigr\}. \]
II. From the Seventh Reading we see that \[ \mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3 \in \operatorname{Span}\bigl\{\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \mathbf{r}_4 \bigr\} = \operatorname{Row}(A). \] Therefore \[ \operatorname{Span}\bigl\{\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3 \bigr\} \subseteq \operatorname{Span}\bigl\{\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \mathbf{r}_4\bigr\} = \operatorname{Row}(A). \]
III. From I and II we conclude that \[ \operatorname{Row}(A) = \operatorname{Span}\bigl\{\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3 \bigr\}. \]
IV. From III we see that the vectors $\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3$ span $\operatorname{Row}(A).$ Since by the Sixth Reading the vectors $\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3$ are linearly independent, we conclude that $\bigl\{\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3 \bigr\}$ is a basis for $\operatorname{Row}(A).$ Therefore \[ \dim \operatorname{Row}(A) = 3. \]
The dimension of the null space of $A$ equals the number of the nonpivot columns in the RREF of $A.$ That is, the dimension of the null space of $A$ equals the number of the free variables in the homogeneous linear system which corresponds to the RREF of $A.$ In this example, $\dim \operatorname{Nul}(A) = 2.$
By definition \[ \operatorname{Nul}(A) = \bigl\{ \mathbf{x} \in \mathbb{R}^5 : A \mathbf{x} = \mathbf{0} \bigr\}. \] To find a basis for $\operatorname{Nul}(A)$ we have to find all solutions (the solution set) of the homogeneous matrix equation $A \mathbf{x} = \mathbf{0}$ and write the solution set in parametric vector form and, finally, write the solution set as a span of several vectors in $\mathbb{R}^5.$
To find all solutions of $A \mathbf{x} = \mathbf{0}$ we recall that the matrix $A$ is row equivalent to its RREF. Therefore the homogeneous matrix equation $A \mathbf{x} = \mathbf{0}$ has the same solution set as the matrix equation \[ \require{bbox} \left[\! \begin{array}{rrrrrr} \bbox[yellow]{\begin{array}{c} 1 \\ 0 \\ 0 \end{array}} & \bbox[yellow]{\begin{array}{c} 0 \\ 1 \\ 0 \end{array}} & \bbox[lightblue]{\begin{array}{r} -1 \\ 5 \\ 0 \end{array}} & \bbox[yellow]{\begin{array}{c} 0 \\ 0 \\ 1 \end{array}} & \bbox[lightblue]{\begin{array}{c} 1 \\ 2 \\ 3 \end{array}} \end{array} \!\right] \left[\!\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{array}\!\right] = \left[\!\begin{array}{c} 0 \\ 0 \\ 0 \end{array}\!\right] \]
To solve the preceding homogeneous matrix equation we write the corresponding system and notice that the unknowns corresponding to nonpivot columns are free variables. There are $5-3=2$ nonpivot columns and thus two free variables. Those are $x_3$ and $x_5;$ we set $x_3 = s_1$ and $x_5 = s_2.$ Solving the homogeneous linear system \begin{alignat*}{10} &x_1& & & && &\ + \ & \bbox[lightblue]{(-1)} &x_3& & & && &\ + \ & \bbox[lightblue]{1} &x_5& &= 0 \\ & & &\phantom{+}& &x_2 & &\ + \ & \bbox[lightblue]{5}&x_3& && && &\ + \ & \bbox[lightblue]{2}&x_5& &= 0 \\ & & & & & & & & & & &\phantom{+}& &x_4& &\ + \ & \bbox[lightblue]{3}&x_5& &= 0 \end{alignat*} yields that all solutions are given by \begin{alignat*}{4} x_1 &=& \bbox[lightblue]{1} &s_1& &\ + \ & \bbox[lightblue]{(-1)} &s_2 \\ x_2 & =& \bbox[lightblue]{(-5)}&s_1& &\ + \ & \bbox[lightblue]{(-2)}&s_2 \\ x_3 & =& &s_1& & & & \\ x_4 &=& & & &\ + \ & \bbox[lightblue]{(-3)}&s_2 \\ x_5 & =& & & & & &s_2 \end{alignat*} where $s_1$ and $s_2$ are arbitrary real numbers. The preceding five equations can be written as one vector equation: \[ \mathbf{x} = \left[\!\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{array}\!\right] = s_1 \left[\!\begin{array}{r} 1 \\ -5 \\ 1 \\ 0 \\ 0 \end{array}\!\right] + s_2 \left[\!\begin{array}{r} -1 \\ -2 \\ 0 \\ -3 \\ 1 \end{array}\!\right]. \] At this point it is prudent to verify our findings. That is, verify that for $s_1=1,$ $s_2 = 0$ and for $s_1=0,$ $s_2 = 1$ we really get solutions: \[ \left[\!\begin{array}{ccccc} 1 & 1 & 4 & 1 & 6 \\ 2 & 1 & 3 & 0 & 4 \\ 3 & 1 & 2 & 1 & 8 \\ 4 & 1 & 1 & 0 & 6 \\ \end{array} \!\right] \left[\!\begin{array}{r} 1 \\ -5 \\ 1 \\ 0 \\ 0 \end{array}\!\right] = \left[\!\begin{array}{r} 0 \\ 0 \\ 0 \\ 0 \end{array}\!\right], \qquad \left[\!\begin{array}{ccccc} 1 & 1 & 4 & 1 & 6 \\ 2 & 1 & 3 & 0 & 4 \\ 3 & 1 & 2 & 1 & 8 \\ 4 & 1 & 1 & 0 & 6 \\ \end{array} \!\right] \left[\!\begin{array}{r} -1 \\ -2 \\ 0 \\ -3 \\ 1 \end{array}\!\right] = \left[\!\begin{array}{r} 0 \\ 0 \\ 0 \\ 0 \end{array}\!\right]. \] Now we can be confident that the solutions we found are correct. Let us introduce notation for the two specific vectors that we just verified to be solutions of $A \mathbf{x} = \mathbf{0}:$ \[ \mathbf{n}_1 = \left[\!\begin{array}{r} 1 \\ -5 \\ 1 \\ 0 \\ 0 \end{array}\!\right], \qquad \mathbf{n}_2 = \left[\!\begin{array}{r} -1 \\ -2 \\ 0 \\ -3 \\ 1 \end{array}\!\right]. \] We have proved that $\mathbf{x} \in \mathbb{R}^5$ is a solution of the homogeneous matrix equation $A \mathbf{x} = \mathbf{0}$ if and only if there exist $s_1, s_2 \in \mathbb{R}$ such that \[ \mathbf{x} = s_1 \mathbf{n}_1 + s_2 \mathbf{n}_2. \] In set notation, the preceding sentence says the same thing as the set equality \[ \operatorname{Nul}(A) = \operatorname{Span}\bigl\{ \mathbf{n}_1, \mathbf{n}_2 \bigr\}. \] Please prove that the vectors $\mathbf{n}_1$ and $\mathbf{n}_2$ are linearly independent, as I did in the Sixth Reading for the vectors $\mathbf{q}_1,$ $\mathbf{q}_2$ and $\mathbf{q}_3.$
Since the vectors $\mathbf{n}_1$ and $\mathbf{n}_2$ are linearly independent and they span $\operatorname{Nul}(A),$ the set $\bigl\{ \mathbf{n}_1, \mathbf{n}_2 \bigr\}$ is a basis for $\operatorname{Nul}(A).$ Hence \[ \dim \operatorname{Nul}(A) = 2. \] In other words, $\operatorname{Nul}(A)$ is a two-dimensional subspace of $\mathbb{R}^5.$
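SymPy can also produce a null space basis directly, one basis vector per free variable; for this matrix its answer agrees with $\mathbf{n}_1$ and $\mathbf{n}_2$ above. A sketch:

```python
# A basis for Nul(A): one vector per free variable (x3 and x5 here).
from sympy import Matrix, zeros

A = Matrix([[1, 1, 4, 1, 6],
            [2, 1, 3, 0, 4],
            [3, 1, 2, 1, 8],
            [4, 1, 1, 0, 6]])

basis = A.nullspace()
assert len(basis) == 2              # dim Nul(A) = 2
for n in basis:
    assert A * n == zeros(4, 1)     # each basis vector solves A x = 0
print([n.T for n in basis])         # [1, -5, 1, 0, 0] and [-1, -2, 0, -3, 1]
```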
Statements in this section hold for a matrix $A$ of any size. Here we assume that $m,n \in \mathbb{N}$ and $A$ is an $m\!\times\!n$ matrix.
The dimension of the column space of $A$ equals the dimension of the row space of $A,$ that is, \[ \dim\operatorname{Col}(A) = \dim\operatorname{Row}(A). \] This common dimension is called the rank of the matrix $A.$
The proof of the equality in the preceding paragraph comes from the RREF of $A.$ Here is the proof.
I. By the Conclusion from the First and Second Reading we have that the dimension of $\operatorname{Col}(A)$ equals the number of pivot columns in the RREF of $A.$
II. By the Conclusion from the Fifth, Sixth and Seventh Reading the dimension of $\operatorname{Row}(A)$ equals the number of the nonzero rows in the RREF of $A.$
III. The leading entry of each nonzero row of the RREF of $A$ is a $1$ which belongs to a unique pivot column of the RREF of $A.$ Conversely, the unique nonzero entry $1$ in each pivot column is the leading entry of a unique nonzero row. This proves that the RREF of $A$ has the same number of nonzero rows and pivot columns. By I, II and III the equality in the preceding paragraph is proved.
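For the running example all of these counts can be confirmed in a few lines of SymPy: the number of pivot columns, the number of nonzero rows of the RREF, the rank of $A$ and the rank of $A^T$ all equal $3.$

```python
# Number of pivot columns == number of nonzero rows of the RREF
# == rank(A) == rank(A^T).
from sympy import Matrix, zeros

A = Matrix([[1, 1, 4, 1, 6],
            [2, 1, 3, 0, 4],
            [3, 1, 2, 1, 8],
            [4, 1, 1, 0, 6]])
F, pivots = A.rref()

nonzero_rows = sum(1 for i in range(F.rows) if F[i, :] != zeros(1, F.cols))
assert len(pivots) == nonzero_rows == A.rank() == A.T.rank() == 3
```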
The sum of the dimension of the null space of $A$ and the rank of the matrix $A$ equals $n$: \[ \dim\operatorname{Col}(A) + \dim\operatorname{Nul}(A) = \dim\operatorname{Row}(A) + \dim\operatorname{Nul}(A) = n. \]
The proof of the equality in the preceding paragraph comes from the RREF of $A.$ Here is the proof.
I. By the Conclusion from the First and Second Reading we have that the dimension of $\operatorname{Col}(A)$ equals the number of pivot columns in the RREF of $A.$
II. By the Eighth Reading, the dimension of $\operatorname{Nul}(A)$ equals the number of the nonpivot columns of the RREF of $A.$
III. Each column of the RREF of $A$ is either pivot or nonpivot, and the total number of columns of $A$ is $n.$ By I, II and III we have that \[ \dim\operatorname{Col}(A) + \dim\operatorname{Nul}(A) = n. \] The claim in the preceding paragraph now follows from the equality $\dim\operatorname{Col}(A) = \dim\operatorname{Row}(A)$.
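For the running $4\!\times\!5$ example the identity reads $3 + 2 = 5;$ a final SymPy sketch:

```python
# Rank-nullity for the running example: rank(A) + dim Nul(A) == n == 5.
from sympy import Matrix

A = Matrix([[1, 1, 4, 1, 6],
            [2, 1, 3, 0, 4],
            [3, 1, 2, 1, 8],
            [4, 1, 1, 0, 6]])

assert A.rank() + len(A.nullspace()) == A.cols == 5
```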