An illustration of a Reflection across the green line
Let $A$ be a $3\times 3$ matrix. Over the last few days we studied the following matrix initial value problem:
It is a fact from differential equations that the solution of this initial value problem is unique. The unique solution of the above boxed initial value problem is called the matrix-exponential function: \[ Y(t) = {\Large e}^{A\mkern 0.75mu t}. \]
It is important to notice that from the boxed equation evaluated at $t=0$ we obtain \[ Y'(0) = A\mkern 1mu Y(0) = A\mkern 1mu I_3 = A . \]

The exponential notation introduced in the previous item is analogous to what we learned in a calculus class. Let $a$ be a real number. The unique solution $y(t)$ of the initial value problem \[ y'(t) = a\mkern 0.75mu y(t) \quad \text{and} \quad y(0) = 1 \] is the scalar exponential function \[ y(t) = e^{a\mkern 0.75mu t}. \]
Yesterday I posted about the matrix-valued exponential function $t\mapsto e^{At}$. Here $A$ is a $3\times 3$ matrix. We defined the matrix-valued exponential function $t\mapsto e^{At}$ to be the unique solution $Y(t)$ of the following problem:
The goal of Problem 5 on Assignment 1 is to explore the analogous problem in which the real number $a$ is replaced by a diagonalizable $3\times 3$ matrix $A$.
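One fact worth keeping in mind while exploring (stated here without derivation, so that it does not spoil the problem): if $A$ is diagonalizable, say $A = PDP^{-1}$ with $D$ diagonal, then \[ e^{A\mkern 0.75mu t} = P\mkern 2mu e^{D\mkern 0.75mu t} P^{-1} = P \left[\!\begin{array}{ccc} e^{\lambda_1 t} & 0 & 0 \\ 0 & e^{\lambda_2 t} & 0 \\ 0 & 0 & e^{\lambda_3 t} \end{array}\right] P^{-1}, \] where $\lambda_1, \lambda_2, \lambda_3$ are the diagonal entries of $D$. One can check directly that this $Y(t)$ satisfies $Y'(t) = A\mkern 1mu Y(t)$ and $Y(0) = I_3$.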
In analogy with the scalar case, we define the matrix-valued exponential function $t\mapsto e^{At}$ to be the unique solution $Y(t)$ of the following problem:
(n-by-n matrix M) (k-th column of the n-by-n identity matrix) = (k-th column of the n-by-n matrix M).
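For example, with $n = 3$ and $k = 2$: \[ \left[\!\begin{array}{rrr} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{array}\right] \left[\!\begin{array}{r} 0 \\ 1 \\ 0 \end{array}\right] = \left[\!\begin{array}{r} 2 \\ 5 \\ 8 \end{array}\right], \] which is the second column of the $3\times 3$ matrix.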
Place the cursor over the image to start the animation.
Below is a proof that the monomials $1, x, x^2, x^3$ are linearly independent in the vector space ${\mathbb P}_3$. First we need to be specific about what we need to prove.
Let $\alpha_1,$ $\alpha_2,$ $\alpha_3,$ and $\alpha_4$ be scalars in $\mathbb{R}.$ We need to prove the following implication: If \[ \require{bbox} \bbox[5px, #88FF88, border: 1pt solid green]{\alpha_1\cdot 1 + \alpha_2 x + \alpha_3 x^2 + \alpha_4 x^3 =0 \quad \text{for all} \quad x \in \mathbb{R}}, \] then \[ \bbox[5px, #FF4444, border: 1pt solid red]{\alpha_1 = 0, \quad \alpha_2 =0, \quad \alpha_3 = 0, \quad \alpha_4 = 0}. \] Proof.
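One standard argument: since the identity in the green box holds for all real $x$, we may set $x = 0$, and we may also differentiate the identity repeatedly and set $x = 0$ again: \[ \begin{aligned} \text{at } x = 0: &\quad \alpha_1 = 0, \\ \text{differentiate once, set } x = 0: &\quad \alpha_2 = 0, \\ \text{differentiate twice, set } x = 0: &\quad 2\mkern 1mu \alpha_3 = 0, \\ \text{differentiate three times, set } x = 0: &\quad 6\mkern 1mu \alpha_4 = 0. \end{aligned} \] Hence $\alpha_1 = \alpha_2 = \alpha_3 = \alpha_4 = 0$, which is exactly the conclusion in the red box. (This is one possible argument; evaluating the green identity at four distinct values of $x$ works as well.)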
Definition. A function $f$ from $A$ to $B$, $f:A\to B$, is called a surjection if it satisfies the following condition:
Definition. A function $f$ from $A$ to $B$, $f:A\to B$, is called an injection if it satisfies the following condition:
An equivalent formulation of the preceding condition is:
Definition. A function $f:A\to B$ is called a bijection if it satisfies the following two conditions:
In other words, a function $f:A\to B$ is a bijection if it is both an injection and a surjection.
Definition. Let $\mathcal V$ and $\mathcal W$ be vector spaces. A linear bijection $T: \mathcal V \to \mathcal W$ is said to be an isomorphism.
Theorem 8. Let $n \in \mathbb{N}$. Let $\mathcal{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\}$ be a basis of a vector space $\mathcal V$. The coordinate mapping \[ \mathbf{v} \mapsto [\mathbf{v}]_\mathcal{B}, \qquad \mathbf{v} \in \mathcal V, \] is a linear bijection between the vector space $\mathcal V$ and the vector space $\mathbb{R}^n.$
Theorem 8. Let $n \in \mathbb{N}$. Let $\mathcal{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\}$ be a basis of a vector space $\mathcal V$. The coordinate mapping \[ \mathbf{v} \mapsto [\mathbf{v}]_\mathcal{B}, \qquad \mathbf{v} \in \mathcal{V}, \] is an isomorphism between the vector space $\mathcal V$ and the vector space $\mathbb{R}^n.$
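For example, in the vector space $\mathbb{P}_3$ with the monomial basis $\mathcal{B} = \{1, x, x^2, x^3\}$ from the proof above, the coordinate mapping gives \[ \bigl[\, 2 - 3x + x^3 \,\bigr]_{\mathcal{B}} = \left[\!\begin{array}{r} 2 \\ -3 \\ 0 \\ 1 \end{array}\right] \in \mathbb{R}^4. \]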
Corollary 1. Let $m, n \in \mathbb{N}$. Let $\mathcal{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\}$ be a basis of a vector space $\mathcal V$. Then the following statements are equivalent:
Corollary 2. Let $m, n \in \mathbb{N}$. Let $\mathcal{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\}$ be a basis of a vector space $\mathcal V$. Then the following statements are equivalent:
The role of \(\mkern1mu \cdot\mkern1mu\) in this notation is that it stands for any allowed index; it is a wildcard. For example, in the notation for the first column, the second index is fixed at \(1\), while the first index, indicated by the wildcard \(\mkern1mu \cdot\mkern1mu\), can be any of the allowed positive integers \(\{1,2,3\}\).
The reason I use capital Roman letters for rows and columns, and lowercase letters for entries, is that rows and columns are also matrices—special matrices, but matrices nonetheless—while the entries are scalars, specifically real numbers. It is important to make a clear distinction between these objects: matrices and scalars.
Each step in a row reduction can be achieved by multiplication by a matrix.
Step | the row operations | the matrix used | the matrix inverse |
---|---|---|---|
1st | $\mkern 5mu \begin{array}{l} \sideset{_n}{_1}R \to \sideset{_o}{_1}R, \\ \sideset{_n}{_2}R \to (-2)\sideset{_o}{_1}R + \sideset{_o}{_2}R,\\ \sideset{_n}{_3}R \to (-3)\sideset{_o}{_1}R + \sideset{_o}{_3}R \end{array} $ | $E_1 = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -3 & 0 & 1 \\ \end{array}\right]$ | $(E_1)^{-1} = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & 0 & 1 \\ \end{array}\right]$ |
2nd | $\mkern 5mu \begin{array}{l} \sideset{_n}{_1}R \to \sideset{_o}{_1}R, \\ \sideset{_n}{_2}R \to \sideset{_o}{_2}R, \\ \sideset{_n}{_3}R \to (-2)\sideset{_o}{_2}R + \sideset{_o}{_3}R \end{array}$ | $E_2 = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -2 & 1 \end{array}\right]$ | $(E_2)^{-1} = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 2 & 1 \end{array}\right]$ |
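Before combining these matrices, here is a quick sanity check, written as a sketch in plain Python (independent of any software used in class), that each elementary matrix above times its claimed inverse gives $I_3$:

```python
# Sanity check: each elementary matrix times its claimed inverse equals I_3.
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

E1     = [[1, 0, 0], [-2, 1, 0], [-3, 0, 1]]
E1_inv = [[1, 0, 0], [ 2, 1, 0], [ 3, 0, 1]]
E2     = [[1, 0, 0], [0,  1, 0], [0, -2, 1]]
E2_inv = [[1, 0, 0], [0,  1, 0], [0,  2, 1]]
I3     = [[1, 0, 0], [0,  1, 0], [0,  0, 1]]

assert matmul(E1, E1_inv) == I3
assert matmul(E2, E2_inv) == I3
```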
The importance of carefully keeping track of each matrix is that we can calculate the single matrix that performs the above row reduction:
\[ M = E_2 E_1 = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -2 & 1 \end{array}\right] \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -3 & 0 & 1 \\ \end{array}\right] = \left[ \begin{array}{rrr} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & -2 & 1 \\ \end{array} \right] \] You can verify that \[ MA = \left[ \begin{array}{rrr} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & -2 & 1 \\ \end{array} \right] \left[\!\begin{array}{rrrrr} 1 & 2 & 0 & 1 & 2 \\ 2 & 4 & 1 & 0 & 5 \\ 3 & 6 & 2 & -1 & 8 \end{array}\right] = \left[\!\begin{array}{rrrrr} 1 & 2 & 0 & 1 & 2 \\ 0 & 0 & 1 & -2 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right] = B. \]

Here is a second example; again, each step in the row reduction is achieved by multiplication by a matrix.
Step | the row operations | the matrix used | the matrix inverse |
---|---|---|---|
1st | $ \mkern 5mu \begin{array}{l} \sideset{_n}{_1}R \to \sideset{_o}{_3}R, \\ \sideset{_n}{_2}R\to\frac{1}{2}\sideset{_o}{_2}R,\\ \sideset{_n}{_3}R \to \frac{1}{3} \sideset{_o}{_1}R \end{array}$ | $E_1 = \left[\!\begin{array}{rrr} 0 & 0 & 1 \\ 0 & \frac{1}{2} & 0 \\ \frac{1}{3} & 0 & 0 \\ \end{array}\right]$ | $(E_1)^{-1} = \left[\!\begin{array}{rrr} 0 & 0 & 3 \\ 0 & 2 & 0 \\ 1 & 0 & 0 \\ \end{array}\right]$ |
2nd | $\mkern 5mu \begin{array}{l} \sideset{_n}{_1}R \to \sideset{_o}{_1}R, \\ \sideset{_n}{_2}R \to (-1)\sideset{_o}{_1}R+\sideset{_o}{_2}R, \\ \sideset{_n}{_3}R \to (-1)\sideset{_o}{_1}R + \sideset{_o}{_3}R \end{array}$ | $E_2 = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{array}\right]$ | $(E_2)^{-1} = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right]$ |
3rd | $\mkern 5mu \begin{array}{l} \sideset{_n}{_1}R \to \sideset{_o}{_1}R + 2 \sideset{_o}{_2}R, \\ \sideset{_n}{_2}R \to (-1)\sideset{_o}{_2}R, \\ \sideset{_n}{_3}R \to -\frac{5}{3} \sideset{_o}{_2}R + \sideset{_o}{_3}R \end{array} $ | $E_3 = \left[\!\begin{array}{rrr} 1 & 2 & 0 \\ 0 & -1 & 0 \\ 0 & -\frac{5}{3} & 1 \end{array}\right]$ | $(E_3)^{-1} = \left[\!\begin{array}{rrr} 1 & 2 & 0 \\ 0 & -1 & 0 \\ 0 & -\frac{5}{3} & 1 \end{array}\right]$ |
4th | $\mkern 5mu \begin{array}{l} \sideset{_n}{_1}R \to \sideset{_o}{_1}R,\\ \sideset{_n}{_2}R \to \sideset{_o}{_2}R + (-3)\sideset{_o}{_3}R, \\ \sideset{_n}{_3}R \to 6 \sideset{_o}{_3}R \end{array}$ | $E_4 = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & -3 \\ 0 & 0 & 6 \end{array}\right]$ | $(E_4)^{-1} = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & \frac{1}{2} \\ 0 & 0 & \frac{1}{6} \end{array}\right]$ |
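As with the first example, a quick check in plain Python (a sketch of my own, using exact fractions) confirms the inverse column of this table, and shows that $E_3$ happens to be its own inverse:

```python
# Sanity check with exact fractions: each E_i times its claimed inverse is I_3.
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

E1     = [[0, 0, 1], [0, F(1, 2), 0], [F(1, 3), 0, 0]]
E1_inv = [[0, 0, 3], [0, 2, 0], [1, 0, 0]]
E2     = [[1, 0, 0], [-1, 1, 0], [-1, 0, 1]]
E2_inv = [[1, 0, 0], [ 1, 1, 0], [ 1, 0, 1]]
E3     = [[1, 2, 0], [0, -1, 0], [0, F(-5, 3), 1]]
E4     = [[1, 0, 0], [0, 1, -3], [0, 0, 6]]
E4_inv = [[1, 0, 0], [0, 1, F(1, 2)], [0, 0, F(1, 6)]]
I3     = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

assert matmul(E1, E1_inv) == I3
assert matmul(E2, E2_inv) == I3
assert matmul(E3, E3) == I3      # E3 is an involution: (E3)^{-1} = E3
assert matmul(E4, E4_inv) == I3
```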
The importance of carefully keeping track of each matrix is that we can calculate the single matrix that performs the above row reduction:
\[ M = E_4 E_3 E_2 E_1 = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & -3 \\ 0 & 0 & 6 \end{array}\right]\left[\!\begin{array}{rrr} 1 & 2 & 0 \\ 0 & -1 & 0 \\ 0 & -\frac{5}{3} & 1 \end{array}\right] \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{array}\right] \left[\!\begin{array}{rrr} 0 & 0 & 1 \\ 0 & \frac{1}{2} & 0 \\ \frac{1}{3} & 0 & 0 \\ \end{array}\right] = \left[ \begin{array}{rrr} 0 & 1 & -1 \\ -1 & 2 & -1 \\ 2 & -5 & 4 \\ \end{array} \right] \] You can verify that \[ MA = \left[ \begin{array}{rrr} 0 & 1 & -1 \\ -1 & 2 & -1 \\ 2 & -5 & 4 \\ \end{array} \right] \left[\!\begin{array}{rrrrr} 3 & 1 & 5 & 1 & 2 \\ 2 & 2 & 2 & 1 & 4 \\ 1 & 2 & 0 & 1 & 5 \end{array}\right] = \left[\!\begin{array}{rrrrr} 1 & 0 & 2 & 0 & -1 \\ 0 & 1 & -1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 4 \end{array}\right] = B. \]

It is important to observe that forming the above three linear combinations is straightforward because of the positioning of the leading $1$s in the rows of the RREF. I tried to emphasize this by
I wanted to provide a colorful beginning of the New Academic Year and the Fall Quarter. So I started the class by talking about the colors in relation to linear algebra. I love the application of vectors to COLORS so much that I wrote a webpage to celebrate it: Color Cube.
It is important to point out that in the red-green-blue coloring scheme, the following eighteen colors stand out. I present them in six steps with three colors in each step.
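The vector structure behind this scheme can be illustrated in a few lines of plain Python (a sketch of my own; the particular colors below are the usual additive RGB conventions, not the class list):

```python
# Colors as vectors in [0,1]^3 with components (red, green, blue).
red   = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
blue  = (0.0, 0.0, 1.0)

def add(u, v):
    """Vector addition of two colors, component by component."""
    return tuple(a + b for a, b in zip(u, v))

# Additive mixing is vector addition of the coordinate vectors.
yellow  = add(red, green)    # (1.0, 1.0, 0.0)
magenta = add(red, blue)     # (1.0, 0.0, 1.0)
cyan    = add(green, blue)   # (0.0, 1.0, 1.0)
white   = add(yellow, blue)  # (1.0, 1.0, 1.0)
```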
You:
Can you please write a complete LaTeX file with instructions on using basic mathematical operations, like fractions, sums, integrals, basic functions, like cosine, sine, and exponential function, and how to structure a document and similar features? Please explain the difference between the inline and displayed mathematical formulas. Please include examples of different ways of formatting displayed mathematical formulas. Please include what you think would be useful to a mathematics student. Also, can you please include your favorite somewhat complicated mathematical formula as an example of the power of LaTeX? I emphasize I want a complete file that I can copy into a LaTeX compiler and compile into a pdf file. Please ensure that your document contains the code for the formulas you are writing, which displays both as code separately from compiled formulas. Also, please double-check that your code compiles correctly. Remember that I am a beginner and cannot fix the errors. Please act as a concerned teacher would do.
This is the LaTeX document that ChatGPT produced based on the above prompt. Here is the compiled PDF document.
You can ask ChatGPT for specific LaTeX advice. To get a good response, think carefully about your prompt. You can also offer ChatGPT a sample of short mathematical writing from the web or a book as a PNG file, and it will convert that writing to LaTeX. You can even try it with neat handwriting. The results will of course depend on the clarity of the file, and ChatGPT makes mistakes, but I have found it incredibly useful.
A student asked me about the technology that we will use in this class. I am open-minded about that. You can use any technology that you are familiar with. I know and use only Wolfram Mathematica. All the calculations and mathematical illustrations on my website are done using Wolfram Mathematica.
The computer algebra system Mathematica is very useful for explorations in mathematics. To get started with Mathematica, see my Mathematica page. Please watch the videos that are on my Mathematica page; watching them is very helpful for getting started with Mathematica efficiently! Mathematica is available in the computer labs in BH 215 and BH 209.