Assume that $\alpha_1,$ $\alpha_2,$ and $\alpha_3$ are scalars in $\mathbb{R}$ such that \[ \require{bbox} \bbox[5px, #88FF88, border: 1pt solid green]{\alpha_1\cdot 1 + \alpha_2 x + \alpha_3 x^2 =0 \quad \text{for all} \quad x \in \mathbb{R}}. \] The objective here is to prove \[ \bbox[5px, #FF4444, border: 1pt solid red]{\alpha_1 = 0, \quad \alpha_2 =0, \quad \alpha_3 = 0}. \] Consider the left-hand side of the above green identity as a function of $x$ and take the derivative with respect to $x$. We obtain \[ \bbox[5px, #88FF88, border: 1pt solid green]{\alpha_2 + 2 \alpha_3 x =0 \quad \text{for all} \quad x \in \mathbb{R}}. \] Again, consider the left-hand side of the above green identity as a function of $x$ and take the derivative with respect to $x$. We obtain \[ \bbox[5px, #88FF88, border: 1pt solid green]{2 \alpha_3 =0 \quad \text{for all} \quad x \in \mathbb{R}}. \] Substituting $x=0$ in the first two green identities and dividing the third green identity by $2$, we obtain \[ \bbox[5px, #88FF88, border: 1pt solid green]{\alpha_1 = 0, \quad \alpha_2 =0, \quad \alpha_3 = 0}. \] In this way we have greenified the red statement. That is, we proved it.
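As a quick computational companion to the differentiation argument above, here is a minimal Python/SymPy sketch (the symbol names are chosen only for this illustration). It differentiates the left-hand side twice and then recovers the three scalars exactly as in the proof:

```python
import sympy as sp

# Symbols for the scalars and the variable (names chosen for this illustration)
a1, a2, a3, x = sp.symbols('alpha1 alpha2 alpha3 x')

p = a1 + a2*x + a3*x**2      # left-hand side of the first green identity
p1 = sp.diff(p, x)           # first derivative:  alpha2 + 2*alpha3*x
p2 = sp.diff(p, x, 2)        # second derivative: 2*alpha3

# Substituting x = 0 into p and p1, and halving p2, isolates the scalars:
print(p.subs(x, 0))          # alpha1
print(p1.subs(x, 0))         # alpha2
print(p2 / 2)                # alpha3
```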
Assume that $\alpha_1,$ $\alpha_2,$ and $\alpha_3$ are scalars in $\mathbb{R}$ such that \[ \require{bbox} \bbox[5px, #88FF88, border: 1pt solid green]{\alpha_1\cdot 1 + \alpha_2 x + \alpha_3 x^2 =0 \quad \text{for all} \quad x \in \mathbb{R}}. \] The objective here is to prove \[ \bbox[5px, #FF4444, border: 1pt solid red]{\alpha_1 = 0, \quad \alpha_2 =0, \quad \alpha_3 = 0}. \] The above green identity holds for all $x\in\mathbb{R}.$ In particular, it holds for the specific values $x=-1,$ $x=0,$ and $x=1.$ That is, we have \[ \bbox[5px, #88FF88, border: 1pt solid green]{ \begin{array}{lr} \alpha_1 - \alpha_2 +\alpha_3 &=0 \\ \alpha_1 &=0 \\ \alpha_1 + \alpha_2 +\alpha_3 &=0 \\ \end{array} } \] The last green box contains a homogeneous system of linear equations which can be written in matrix form as \[ \bbox[5px, #88FF88, border: 1pt solid green]{ \left[\!\begin{array}{rrr} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{array}\!\right] \left[\!\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{array}\!\right] = \left[\!\begin{array}{c} 0 \\ 0 \\ 0 \end{array}\!\right] } \] Since the determinant of the above $3\!\times\!3$ matrix is $2$, the above homogeneous equation has only the trivial solution. That is, \[ \bbox[5px, #88FF88, border: 1pt solid green]{\alpha_1 = 0, \quad \alpha_2 =0, \quad \alpha_3 = 0}. \] In this way we have greenified the red statement. That is, we proved it.
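As a numerical check of the determinant argument above, here is a minimal NumPy sketch (variable names are illustrative). It confirms that the coefficient matrix is invertible and that the homogeneous system has only the trivial solution:

```python
import numpy as np

# Coefficient matrix obtained by evaluating the identity at x = -1, 0, 1
M = np.array([[1, -1, 1],
              [1,  0, 0],
              [1,  1, 1]], dtype=float)

print(np.linalg.det(M))      # approximately 2, so M is invertible

# The homogeneous system M @ alpha = 0 therefore has only the trivial solution
alpha = np.linalg.solve(M, np.zeros(3))
print(alpha)                 # [0. 0. 0.]
```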
Definition. A nonempty set $\mathcal{V}$ is said to be a vector space over $\mathbb R$ if it satisfies the following ten axioms.
Explanation of the abbreviations: AE--addition exists, AA--addition is associative, AC--addition is commutative, AZ--addition has zero, AO--addition has opposites, SE--scaling exists, SA--scaling is associative, SD--scaling distributes over addition of real numbers, SD--scaling distributes over addition of vectors, SO--scaling with one.
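As an informal numerical illustration (not a proof), the following Python sketch checks several of the abbreviated properties for $\mathbb{R}^2$ with the standard operations; the particular vectors and scalars are arbitrary choices made here:

```python
import numpy as np

# Vectors in R^2 with the standard operations; the particular values are arbitrary
u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])
w = np.array([0.5, 4.0])
a, b = 2.0, -3.0

print(np.allclose(u + v, v + u))               # AC: addition is commutative
print(np.allclose((u + v) + w, u + (v + w)))   # AA: addition is associative
print(np.allclose((a + b) * u, a*u + b*u))     # SD: scaling distributes over addition of reals
print(np.allclose(a * (u + v), a*u + a*v))     # SD: scaling distributes over addition of vectors
print(np.allclose(1.0 * u, u))                 # SO: scaling with one
```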
Theorem. Let $n \in \mathbb{N}$ and let $A$ be an $n\!\times\!n$ matrix. The matrix $A$ is diagonalizable if and only if there exists a basis of $\mathbb{R}^n$ which consists of eigenvectors of $A.$
Theorem. Let $n \in \mathbb{N}$ and let $A$ be an $n\!\times\!n$ matrix. The following two statements are equivalent:
(a) There exist an invertible $n\!\times\!n$ matrix $P$ and a diagonal $n\!\times\!n$ matrix $D$ such that $A= PDP^{-1}.$
(b) There exist linearly independent vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ in $\mathbb{R}^n$ and real numbers $\lambda_1, \lambda_2,\ldots,\lambda_n$ such that $A \mathbf{v}_k = \lambda_k \mathbf{v}_k$ for all $k\in \{1,\ldots,n\}.$
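To illustrate the equivalence of (a) and (b), here is a minimal NumPy sketch; the sample matrix is an assumption chosen for illustration (any diagonalizable matrix would do):

```python
import numpy as np

# A sample 2x2 matrix chosen for illustration; it has two distinct real eigenvalues
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are eigenvectors of A; D is diagonal with the corresponding eigenvalues
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Statement (a): A = P D P^{-1}
print(np.allclose(A, P @ D @ np.linalg.inv(P)))                 # True

# Statement (b): A v_k = lambda_k v_k for each eigenvector column of P
for k in range(A.shape[0]):
    print(np.allclose(A @ P[:, k], eigenvalues[k] * P[:, k]))   # True, True
```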
All entries left blank in the determinant below are zeros.
I came, I thought, I wrote, I taught, I thought more, I rewrote.
The above words are inspired by the Latin phrase "veni, vidi, vici," which in the list of Latin phrases appears very close to another Latin phrase celebrating writing:
Verba volant, scripta manent.
(Spoken words fly away, written words stay.)
Step | Row operation | Elementary matrix | Inverse of the elementary matrix |
---|---|---|---|
1st | The third row is replaced by the sum of the first row and the third row | $E_1 = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right]$ | $E_1^{-1} = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{array}\right]$ |
2nd | The third row and the second row are interchanged | $E_2 = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right]$ | $E_2^{-1} = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right]$ |
3rd | The third row is replaced by the sum of the third row and $(-2)$ times the second row | $E_3 = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -2 & 1 \end{array}\right]$ | $E_3^{-1} = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 2 & 1 \end{array}\right]$ |
4th | The first row is replaced by the sum of the first row and the third row | $E_4 = \left[\!\begin{array}{rrr} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right]$ | $E_4^{-1} = \left[\!\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right]$ |
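A minimal NumPy check that each matrix in the last column of the table is indeed the inverse of the corresponding elementary matrix:

```python
import numpy as np

# Elementary matrices and their claimed inverses, copied from the table above
E1 = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1]])
E2 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])
E3 = np.array([[1, 0, 0], [0, 1, 0], [0, -2, 1]])
E4 = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1]])

E1_inv = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 1]])
E2_inv = E2                                   # a row swap is its own inverse
E3_inv = np.array([[1, 0, 0], [0, 1, 0], [0, 2, 1]])
E4_inv = np.array([[1, 0, -1], [0, 1, 0], [0, 0, 1]])

I = np.eye(3, dtype=int)
for E, E_inv in [(E1, E1_inv), (E2, E2_inv), (E3, E3_inv), (E4, E4_inv)]:
    print(np.array_equal(E @ E_inv, I))       # True for each pair
```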
If the RREF of $A$ is $I_3$, then $A$ is invertible.
This implication is proved in Theorem 7 in Section 2.2. This proof is important!
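As an illustration of this implication (not of the proof in Theorem 7), here is a minimal SymPy sketch with a sample matrix chosen for this example only: its RREF is $I_3$, and SymPy confirms invertibility by computing the inverse.

```python
import sympy as sp

# A sample 3x3 matrix chosen for illustration (not taken from the text)
A = sp.Matrix([[1, 2, 0],
               [0, 1, 1],
               [1, 0, 1]])

R, pivots = A.rref()          # reduced row echelon form and pivot columns
print(R == sp.eye(3))         # True: the RREF of A is I_3
print(A.inv())                # so A is invertible; SymPy computes A^{-1}
```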