Case 1. $-L \lt h \lt 0$ In this case, there is one negative eigenvalue and countably many positive eigenvalues. For a general choice of the initial shape $f(x)$ and the initial velocity $g(x)$, the string will break. See the animation below.
Place the cursor over the image to start vibrations.
However, if we set the initial velocity $g(x)= 0$ and make a special choice of the initial shape $f(x)$ that is orthogonal to the eigenfunction corresponding to the negative eigenvalue, then the string does not break. See the animation below.
Place the cursor over the image to start vibrations.
Case 2. $h = -L$ In this case, $0$ is an eigenvalue and there are countably many positive eigenvalues. For a general choice of the initial shape $f(x)$ and no initial velocity, that is $g(x) = 0,$ the string will not break. This is illustrated in the animation below with $L = \pi$ and $h=-\pi$.
Place the cursor over the image to start vibrations.
For a general choice of the initial velocity $g(x)$ the string breaks. We did not illustrate this case.
Case 3. $h \geq 0$ or $h \lt -L$ In this case, there are no negative eigenvalues, $0$ is not an eigenvalue, and there are countably many positive eigenvalues. See the animation below with $L =\pi$ and $h=-4$.
Place the cursor over the image to start vibrations.
In the animation below we used $L =\pi$ and $h=1$.
Place the cursor over the image to start vibrations.
The book discusses various kinds of boundary conditions for the vibrating string equation in Section 4.2 and Section 4.3. Today I presented my way of explaining how the boundary conditions of the "third kind", or Robin boundary conditions, arise naturally.
I wrote notes about my approach to the boundary conditions of the "third kind", or Robin boundary conditions. These notes present a method of solving boundary eigenvalue problems with Robin boundary conditions (the boundary conditions discussed today). The relevant part of the textbook is Section 5.8.
Place the cursor over the image to start vibrations.
Notice that the red part of the string is rigid, while the orange part is governed by the vibrating string equation.
On Friday, we used Fourier's Method of Separation of Variables to find two sequences of solutions to the Vibrating String Partial Differential Equation \[ \frac{\partial^2u}{\partial t^2}(x,t) = c^2 \frac{\partial^2 u}{\partial x^2}(x,t), \quad \text{where} \quad x \in [0,L], \ \ t \geq 0, \] subject to Dirichlet Boundary Conditions at the endpoints \(0\) and \(L\): \[ u(0,t) = 0 \quad \text{and} \quad u(L,t) = 0, \quad \text{for all} \quad t \geq 0. \]
the first harmonic or fundamental
the second harmonic
the third harmonic
the fourth harmonic
the fifth harmonic
the sixth harmonic
One particular kind of trigonometric identity is the power-reduction formula. Below I list several such formulas which involve the multiple-angle sine functions in \(x\); exactly the kind that we encountered as solutions of the vibrating string equation.
In the context of Fourier series, the trigonometric identities listed below are important since they are in fact finite Fourier series (here \(L=\pi\)) for the functions appearing on the left-hand side of the identities.
\begin{align*} (\sin x)^3 & = \frac{3}{4}\sin(x) - \frac{1}{4} \sin(3 x)\\ (\sin x)^5 & = \frac{5}{8}\sin (x) - \frac{5}{16} \sin (3 x)+\frac{1}{16} \sin (5 x)\\ \bigl(\sin x\bigr)^{7} & = \frac{35}{64}\sin(x) - \frac{21}{64}\sin(3x) + \frac{7}{64}\sin(5x) - \frac{1}{64} \sin(7x), \\[6pt] (\sin x)^9 &= \frac{63}{128} \sin x - \frac{21}{64} \sin(3x) + \frac{9}{64} \sin(5x) - \frac{9}{256} \sin(7x) + \frac{1}{256} \sin(9x). \end{align*}
\begin{align*} (\sin x)(\cos x) & = \frac{1}{2} \sin (2 x) \\ (\sin x)(\cos x)^2 & = \frac{1}{4} \sin (x) + \frac{1}{4} \sin (3 x) \\ (\sin x)(\cos x)^3 & = \frac{1}{4} \sin (2 x) + \frac{1}{8} \sin (4 x) \\ (\sin x)(\cos x)^4 & = \frac{1}{8} \sin (x) + \frac{3}{16} \sin (3 x) + \frac{1}{16} \sin (5 x) \\ (\sin x)(\cos x)^5 & = \frac{5}{32} \sin (2x) + \frac{1}{8} \sin (4 x) + \frac{1}{32} \sin (6 x) \\ (\sin x)^3 & = \frac{3}{4} \sin (x) - \frac{1}{4} \sin (3 x) \\ (\sin x)^3(\cos x) & = \frac{1}{4} \sin (2 x) - \frac{1}{8} \sin (4 x) \\ (\sin x)^3 (\cos x)^2 & = \frac{1}{8} \sin (x) + \frac{1}{16} \sin (3 x) - \frac{1}{16} \sin (5 x) \\ (\sin x)^3(\cos x)^3 & = \frac{3}{32} \sin (2 x) - \frac{1}{32} \sin (6 x) \\ (\sin x)^3(\cos x)^4 & = \frac{3}{64} \sin (x) + \frac{3}{64} \sin (3 x) - \frac{1}{64} \sin (5 x) - \frac{1}{64} \sin (7 x) \\ (\sin x)^3(\cos x)^5 & = \frac{3}{64} \sin (2 x) + \frac{1}{64} \sin (4 x) - \frac{1}{64} \sin (6 x) - \frac{1}{128} \sin (8 x) \\ \end{align*}
In the context of this class, each of the above formulas gives a finite Fourier series for the function on the left-hand side of the equality sign.
Also, though it is not immediately apparent, the above formulas contain numerous integrals in disguised form. For example, \[ \int_{-\pi} ^{\pi} (\sin x)^3(\cos x)^4 \sin (3 x) dx = \frac{3 \pi}{64}, \] but also \[ \int_{-\pi} ^{\pi} (\sin x)^3(\cos x)^4 \sin (4 x) dx = 0. \] Do you see why?
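Identities like these are easy to sanity-check numerically. Below is a small Python sketch (these notes use Mathematica, but any tool works; the helper names are my own) that approximates both integrals with the trapezoidal rule, which is extremely accurate for smooth periodic integrands over a full period:

```python
import math

def trapz(f, a, b, n=4000):
    """Composite trapezoidal rule; very accurate here because the
    integrand is smooth and 2*pi-periodic on [-pi, pi]."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def integrand(m):
    # (sin x)^3 (cos x)^4 sin(m x)
    return lambda x: math.sin(x)**3 * math.cos(x)**4 * math.sin(m * x)

val3 = trapz(integrand(3), -math.pi, math.pi)  # expected: 3*pi/64
val4 = trapz(integrand(4), -math.pi, math.pi)  # expected: 0
```

The first integral picks out the coefficient of \(\sin(3x)\) in the finite Fourier series above (times \(\pi\)); the second vanishes because \(\sin(4x)\) does not appear in that finite series.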
Place the cursor over the image to start vibrations.
If the initial conditions are functions for which we do not have explicit formulas in terms of the multiple-angle sine functions, then we need to calculate the Fourier coefficients of the functions given as the initial conditions. That requires use of technology, and often the coefficients need to be calculated approximately.
For example, consider the following initial conditions: \begin{align*} u(x,0) & = \frac{1}{6} (\pi -x) x^2 \\[5pt] \frac{\partial u}{\partial t}(x,0) &= 0. \end{align*}
Since the initial velocity is \(0\), the solution has the form \[ u(x,t) = \sum_{k=1}^\infty c_k \cos(k t) \sin(k x). \] We choose the coefficients \(c_k\) such that \[ u(x,0) = \frac{1}{6} (\pi -x) x^2 = \sum_{k=1}^\infty c_k \sin(k x). \] Using the orthogonality of the multiple-angle sine functions, we calculate \[ c_k = \frac{1}{3 \pi} \int_{0}^\pi (\pi -x) x^2 \sin(k x) \operatorname{d}\mkern-2mu x. \] Wolfram Mathematica calculates the above integral and gives \[ c_k = -\frac{2 \left(2 (-1)^k+1\right)}{3 k^3}. \]
Thus, the solution given as an infinite sum is \[ u(x,t) = - \frac{2}{3} \sum_{k=1}^\infty \frac{2 (-1)^k+1}{k^3} \mkern 2mu \cos(k t) \sin(k x). \]
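The closed-form coefficients can be cross-checked numerically. Here is a Python sketch (here \(L=\pi\) and \(c=1\), so the eigenfunctions are \(\sin(kx)\); the helper names are mine) comparing Mathematica's closed form with direct quadrature:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def c_numeric(k):
    # c_k = (1/(3 pi)) * Integral_0^pi (pi - x) x^2 sin(k x) dx
    return simpson(lambda x: (math.pi - x) * x**2 * math.sin(k * x),
                   0.0, math.pi) / (3.0 * math.pi)

def c_closed(k):
    # Mathematica's closed form
    return -2.0 * (2.0 * (-1)**k + 1.0) / (3.0 * k**3)
```

For every \(k\), `c_numeric(k)` and `c_closed(k)` agree to many digits.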
Choosing the partial sum with \(50\) terms of the above infinite series, we get the following animation of the solution. Place the cursor over the image to start vibrations.
Consider a more complicated function for the initial displacement. For example, consider the following initial conditions: \begin{align*} u(x,0) & = \frac{4}{\pi ^2} x (\pi -x) e^{-5 (x-1)^2} \\[5pt] \frac{\partial u}{\partial t}(x,0) &= 0. \end{align*}
Since the initial velocity is \(0\), the solution has the form \[ u(x,t) = \sum_{k=1}^\infty c_k \cos(k t) \sin(k x). \] We choose the coefficients \(c_k\) such that \[ u(x,0) = \frac{4}{\pi ^2} x (\pi -x) e^{-5 (x-1)^2} = \sum_{k=1}^\infty c_k \sin(k x). \] Using the orthogonality of the multiple-angle sine functions, we calculate \[ c_k = \frac{2}{\pi} \int_{0}^\pi \frac{4}{\pi ^2} x (\pi -x) e^{-5 (x-1)^2} \sin(k x) \operatorname{d}\mkern-2mu x. \] Wolfram Mathematica cannot find exact formulas for these coefficients. Therefore, I calculated numerical approximations for the first \(50\) coefficients and obtained an approximation of the solution. Below is the resulting animation:
Place the cursor over the image to start vibrations.
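When no closed form is available, the coefficients can be approximated by quadrature in any system. Here is a Python sketch of the same computation (the quadrature routine and helper names are my own); it also checks that the \(50\)-term partial sum reproduces the initial shape:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def u0_exact(x):
    # the initial displacement (4/pi^2) x (pi - x) exp(-5 (x-1)^2)
    return (4.0 / math.pi**2) * x * (math.pi - x) * math.exp(-5.0 * (x - 1.0)**2)

def c(k):
    # c_k = (2/pi) * Integral_0^pi u0(x) sin(k x) dx, computed numerically
    return (2.0 / math.pi) * simpson(lambda x: u0_exact(x) * math.sin(k * x),
                                     0.0, math.pi)

coeffs = [c(k) for k in range(1, 51)]  # first 50 sine coefficients

def u0_partial(x):
    # 50-term partial sum of the sine series at t = 0
    return sum(ck * math.sin(k * x) for k, ck in enumerate(coeffs, start=1))
```

At interior points the partial sum matches the initial displacement to within plotting accuracy, which is why the animation built from \(50\) terms looks correct.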
Today we derived the partial differential equation which models vibrations of a string. This is Section 4.2 in the book. The simplest form of this equation is \[ \frac{\partial^2u}{\partial t^2}(x,t) = c^2 \frac{\partial^2 u}{\partial x^2}(x,t) \quad \text{where} \quad x \in [0,L], \ \ t \geq 0. \] This equation is called the vibrating string equation, or the one-dimensional wave equation. In this equation, we think of the string in its equilibrium position as stretched along the \(x\)-axis from \(0\) to \(L\). Then, as the string vibrates, the small value \(u(x,t)\) represents the displacement of the string at the position \(x \in [0,L]\) at time \(t\). The positive values of \(u(x,t)\) represent the displacement above the \(x\)-axis and the negative values of \(u(x,t)\) represent the displacement below the \(x\)-axis.
Natural boundary conditions are that the string is fixed at its endpoints \(0\) and \(L\): \[ u(0,t) = 0 \quad \text{and} \quad u(L,t) = 0 \quad \text{for all} \quad t \geq 0. \]
Set \(L = \pi\) and \(c=1\) in the vibrating string equation and consider the vibrating string equation \[ \frac{\partial^2 u}{\partial t^2}(x,t) = \frac{\partial^2 u}{\partial x^2}(x,t) \quad \text{where} \quad x \in [0,\pi], \ \ t \geq 0, \] subject to the boundary conditions \[ u(0,t) = 0 \quad \text{and} \quad u(\pi,t) = 0 \quad \text{for all} \quad t \geq 0. \]
One solution of this equation is \[ u(x,t) = (\cos t) (\sin x). \] To verify this claim calculate \begin{align*} \frac{\partial^2}{\partial t^2}\bigl( (\cos t) (\sin x) \bigr) & = - (\cos t) (\sin x) \\ \frac{\partial^2}{\partial x^2}\bigl( (\cos t) (\sin x) \bigr) & = - (\cos t) (\sin x), \end{align*} and \[ (\cos t) (\sin 0) = (\cos t) (\sin \pi ) = 0. \]
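To complement the symbolic verification above, here is a small numerical cross-check in Python (the step sizes and sample point are my choices): it approximates \(u_{tt}\) and \(u_{xx}\) by central differences and confirms that the wave-equation residual is tiny and the boundary values vanish.

```python
import math

def u(x, t):
    # the candidate solution u(x,t) = cos(t) sin(x)
    return math.cos(t) * math.sin(x)

def d2(f, x, t, var, h=1e-4):
    # central second difference in x or in t
    if var == "x":
        return (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h**2
    return (f(x, t + h) - 2 * f(x, t) + f(x, t - h)) / h**2

# residual of the wave equation u_tt = u_xx at a sample point
res = d2(u, 0.7, 1.3, "t") - d2(u, 0.7, 1.3, "x")
```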
Animating the function \[ (\cos t) (\sin x) \] in time we obtain the animation below, which really resembles vibrations of a string. In the animation below we let \(t\) run from \(0\) to \(2\pi\). Since \(\cos t\) is periodic, the animation appears to continue infinitely.
Another solution of the same vibrating string equation is \[ u(x,t) = \bigl(\cos (2t) \bigr) \bigl(\sin(2x) \bigr). \] To verify this claim calculate \begin{align*} \frac{\partial^2}{\partial t^2}\Bigl( \bigl(\cos (2t) \bigr) \bigl(\sin(2x) \bigr) \Bigr) & = - 4 \bigl(\cos (2t) \bigr) \bigl(\sin(2x) \bigr) \\ \frac{\partial^2}{\partial x^2} \Bigl( \bigl(\cos (2t) \bigr) \bigl(\sin(2x) \bigr) \Bigr) &= - 4 \bigl(\cos (2t) \bigr) \bigl(\sin(2x) \bigr), \end{align*} and \[ \bigl(\cos (2t) \bigr) \bigl(\sin(2\times 0) \bigr) = \bigl(\cos (2t) \bigr) \bigl(\sin(2 \pi) \bigr) = 0. \]
Animating the function \[ \bigl(\cos (2t) \bigr) \bigl(\sin(2x) \bigr) \] in time we obtain the animation below, which really resembles vibrations of a string. In the animation below we let \(t\) run from \(0\) to \(2\pi\). Since \(\cos(2t)\) is periodic, the animation appears to continue infinitely.
Applying the above theorems, one can calculate the coefficients of the Fourier series of functions whose derivatives and integrals replicate the function itself. For example, the function $e^x$ or $\cosh(x)$, or a periodic extension of $\sin(x)$ restricted to $(-\pi/2, \pi/2)$, or a similar function with a clear pattern in its derivatives.
We can apply the first differentiation theorem to calculate the coefficients of the Fourier series of the function $\exp(x)$ say on the interval $[-1,1]$. The constant coefficient is \[ a_0 = \sinh(1). \] Since $\exp(x)$ is its own derivative we have the following equalities: \begin{equation*} a_k = 2 (-1)^{k} \sinh(1) + k\pi \, b_k, \quad b_k = - k\pi \, a_k \quad \text{for all} \quad k\in\mathbb{N}. \end{equation*} Therefore, substituting the expression for $b_k$ into the first equation and solving for $a_k$ we get \[ a_k = \frac{2 (-1)^k \sinh(1)}{1+(k \pi)^2}, \quad b_k = -\frac{2 (-1)^k k \pi \sinh(1) }{1+(k \pi)^2} \quad \text{for all} \quad k\in\mathbb{N}. \] We can confirm this by plotting the Fourier periodic extension of $\exp(x)$ on $[-1,1]$ and its approximation by its Fourier series in a small Mathematica notebook.
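Besides plotting, we can confirm the formulas for \(a_k\) and \(b_k\) numerically. Here is a Python sketch (helper names are mine) comparing the closed forms with direct quadrature of the defining integrals on \([-1,1]\):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def a_num(k):
    # a_k = Integral_{-1}^{1} e^x cos(k pi x) dx   (here L = 1)
    return simpson(lambda x: math.exp(x) * math.cos(k * math.pi * x), -1.0, 1.0)

def b_num(k):
    # b_k = Integral_{-1}^{1} e^x sin(k pi x) dx
    return simpson(lambda x: math.exp(x) * math.sin(k * math.pi * x), -1.0, 1.0)

def a_closed(k):
    return 2 * (-1)**k * math.sinh(1) / (1 + (k * math.pi)**2)

def b_closed(k):
    return -2 * (-1)**k * k * math.pi * math.sinh(1) / (1 + (k * math.pi)**2)
```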
Yesterday we calculated the Fourier Series for the function \( f(x) = x\) with \(x\in [-1,1) \). The Fourier series is given by the formula below, and it converges to the Fourier periodic extension of the function \(f\): \[ \tilde{f}_{\!\operatorname{Fourier}}(x) = \frac{2}{\pi} \sum_{k=1}^\infty \frac{(-1)^{k+1}}{k} \sin(\pi k x). \] We illustrated the relationship between partial sums of this Fourier series and the Fourier periodic extension of $f$ in the plots below.
The plots below represent the Fourier periodic extension of $f$ in navy blue and different partial sums of the Fourier series in red.
Click on the image below to cycle through different versions.
The plots below represent the Fourier periodic extension of $v$ in navy blue and different partial sums of its Fourier series in red.
Click on the image below to cycle through different versions.
Definition. Let $L \gt 0$ and let $f:[-L, L] \to \mathbb R$ be a piecewise continuous function. The series \[ a_0 + \sum_{k=1}^{+\infty} \biggl( a_k \cos\Bigl(\!\frac{k \pi}{L} x\!\Bigr) + b_k \sin\Bigl(\!\frac{k \pi}{L} x\!\Bigr) \biggr) \] where \[ a_0 = \frac{1}{2L} \int_{-L}^L f(\xi) d\xi \] and, for $k \in {\mathbb N}$, \begin{align*} a_k &= \frac{1}{L}\int_{-L}^L f(\xi) \cos\Bigl(\!\frac{k \pi}{L} \xi\!\Bigr) d\xi, \\ b_k &= \frac{1}{L}\int_{-L}^L f(\xi) \sin\Bigl(\!\frac{k \pi}{L} \xi\!\Bigr) d\xi, \end{align*} is called the Fourier series of $f$.
Definition of Fourier Periodic Extension. Let $L \gt 0$ and \[ f:[-L, L] \to \mathbb R \] be a piecewise continuous function. Then the Periodic Extension of \(f:[-L,L) \to \mathbb{R}\) is the following function defined for all \(x\in \mathbb{R}\) by \[ \widetilde{\mkern3mu f\mkern 1mu}(x) = f\biggl(x - 2 L \Bigl\lfloor \frac{x+L}{2L}\Bigr\rfloor \biggr). \] The Fourier periodic extension of $f$ is the following function defined for all \(x\in \mathbb{R}\) by \[ \tilde{f}_{\!\operatorname{Fourier}}(x) = \begin{cases} \tilde{f}(x) & \text{if $\tilde{f}$ is continuous at $x$} \\[10pt] \dfrac{1}{2}\!\bigl(\tilde{f}(x^-)+\tilde{f}(x^+)\bigr) & \text{if $\tilde{f}$ is not continuous at $x$} \end{cases} \] where \[ \tilde{f}(x^-) = \lim_{\xi \uparrow x} \tilde{f}(\xi) \quad \text{left-hand limit} \] and \[ \tilde{f}(x^+) = \lim_{\xi \downarrow x} \tilde{f}(\xi) \quad \text{right-hand limit}. \]
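The formula for the periodic extension is easy to implement directly. Here is a minimal Python sketch with \(L = 1\) and the sample choice \(f(x) = x\) (both choices are mine), showing that \(\widetilde{f}\) agrees with \(f\) on \([-L, L)\) and has period \(2L\):

```python
import math

L = 1.0

def f(x):
    # a sample function on [-L, L): f(x) = x
    return x

def f_tilde(x):
    # periodic extension: f_tilde(x) = f(x - 2L * floor((x + L) / (2L)))
    return f(x - 2 * L * math.floor((x + L) / (2 * L)))
```

For example, `f_tilde(2.3)` and `f_tilde(-1.7)` both equal `f(0.3)`, since these points differ from \(0.3\) by multiples of \(2L = 2\).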
In the rest of today's post we will study the Fourier series of the following three functions:
Function: \( f(x) = x\) with \(x\in [-1,1) \) | Function: \( g(x) = |x|\) with \(x\in [-1,1) \) | Function: \( h(x) = \operatorname{ReLU}(x) \) with \(x\in [-1,1) \) |
Click on the image below to cycle through different versions.
Click on the image below to cycle through different versions.
Click on the image below to cycle through different versions.
I do understand that the last two definitions might look somewhat weird. The only reason for that is that the ceiling function and the floor function are almost completely absent from our curriculum. That is a fault of our curriculum.
The floor function is defined as follows: For $x \in \mathbb{R}$ we set \[ \lfloor x \rfloor = \max \bigl\{ k \in \mathbb{Z} : k \leq x \bigr\}. \] In words: The floor function of a real number $x$ is defined as the largest integer less than or equal to $x$. This means it rounds $x$ down to the nearest integer. For example, $\lfloor \pi \rfloor = 3$, $\lfloor -e \rfloor = -3$.
The ceiling function is defined as follows: For $x \in \mathbb{R}$ we set \[ \lceil x \rceil = \min \bigl\{ k \in \mathbb{Z} : x \leq k \bigr\}. \] In words: The ceiling function of a real number $x$ is the smallest integer greater than or equal to $x$. This means it rounds $x$ up to the nearest integer. For example, $\lceil \pi \rceil = 4$, $\lceil -e \rceil = -2$.
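In most programming languages the floor and ceiling functions are built in, so the examples above are easy to check. For instance, in Python:

```python
import math

# math.floor rounds down; math.ceil rounds up
floor_pi = math.floor(math.pi)     # 3
floor_neg_e = math.floor(-math.e)  # -3
ceil_pi = math.ceil(math.pi)       # 4
ceil_neg_e = math.ceil(-math.e)    # -2
```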
The book gives a descriptive definition in English of the concept of a periodic extension. The above formula involving the ceiling and floor functions is the only way that I was able to translate the definition from English into Mathish. The figures below illustrate the definition with some simple functions $f$. Here is the Mathematica notebook which I used to produce these figures.
In the figure below the function $f$ is the restriction of the function $x \mapsto x$ (in blue) to the interval $[1,4)$. The red function is the periodic extension.
In the figure below the function $f$ is the restriction of the function $x \mapsto x^2-2$ (in blue) to the interval $[-2,2)$. The red function is the periodic extension.
In the figure below the function $f$ is the restriction of the function $x \mapsto \cos(x)$ (in blue) to the interval $[0,\pi)$. The red function is the periodic extension.
MyOptionsD = Sequence[ImageResolution -> 600, Axes -> True, AxesLabel -> {None, None, None}, AxesOrigin -> {Automatic, Automatic, Automatic}, Boxed -> True, DisplayFunction -> Identity, FaceGrids -> None, FaceGridsStyle -> Automatic, ImageSize -> 500, Lighting -> {{"Ambient", White}}, BoundaryStyle -> None, Method -> {"DefaultGraphicsInteraction" -> {"Version" -> 1.2`, "TrackMousePosition" -> {True, False}, "Effects" -> {"Highlight" -> {"ratio" -> 2}, "HighlightPoint" -> {"ratio" -> 2}, "Droplines" -> {"freeformCursorMode" -> True, "placement" -> {"x" -> "All", "y" -> "None"}}}}, "RotationControl" -> "Globe"}, PlotRangePadding -> {Scaled[0.02`], Scaled[0.02`], Scaled[0.02`]}, Ticks -> {Automatic, Automatic, Automatic}, ViewPoint -> {-1.0311155006393165`, 2.874130096265292`, 1.4581416303238175`}, ViewVertical -> {0.15265090977010184`, -0.40303458278204485`, 0.9023640201316004`}]; fbth[\[Theta]_] = 2 Cos[\[Theta]/2]^6; gtfbth = ParametricPlot3D[{Cos[\[Theta]], Sin[\[Theta]], fbth[\[Theta]]}, {\[Theta], -Pi, Pi}, PlotStyle -> {RGBColor[0, 0.6, 0], Thickness[0.01]}, PlotPoints -> {100}]; disks = Graphics3D[ { {FaceForm[RGBColor[0.75, 0.75, 0.75]], Opacity[0.25], Polygon[{Cos[#], Sin[#], 0} & /@ Range[-Pi, Pi, Pi/64]]}, {GrayLevel[0.5], Thickness[0.003], Line[{Cos[#], Sin[#], 0} & /@ Range[-Pi, Pi, Pi/64]]} }, Lighting -> {{"Ambient", White}} ]; EqSolDiskBC = Show[{gtfbth, disks}, PlotRange -> {{-1.05, 1.05}, {-1.05, 1.05}, {-.05, 2.05}}, BoxRatios -> {1, 1, 1}, MyOptionsD]

The Mathematica code for the right picture above (it uses some definitions in the preceding command):
Clear[wwSd]; wwSd[r_, \[Theta]_] := 5/8 + 15 /16 r Cos[\[Theta]] + 3/8 r^2 Cos[2 \[Theta]] + 1/16 r^3 Cos[3 \[Theta]]; soluD = ParametricPlot3D[{r Cos[\[Theta]], r Sin[\[Theta]], wwSd[r, \[Theta]]}, {r, 0, 1}, {\[Theta], -Pi, Pi}, PlotStyle -> {Opacity[0.7]}, PlotPoints -> {30, 100}, Mesh -> {10, 50}, MeshStyle -> {{Thickness[0.002], RGBColor[0, 0.5, 0.5], Opacity[0.5]}, {Thickness[0.002], RGBColor[0, 0.5, 0.5], Opacity[0.5]}}, BoundaryStyle -> Directive[{Thickness[0.002], RGBColor[0, 0.5, 0.5], Opacity[0.5]}]]; EqSolDisk = Show[{soluD, gtfbth, disks}, PlotRange -> {{-1.05, 1.05}, {-1.05, 1.05}, {-.05, 2.05}}, BoxRatios -> {1, 1, 1}, MyOptionsD]
Let $P$ and $Q$ be continuous real valued functions defined on $\mathbb{R}$. Let $Y_1 : \mathbb{R} \rightarrow \mathbb{R}$ and $Y_2: \mathbb{R} \rightarrow \mathbb{R}$ be linearly independent solutions of the homogeneous linear equation (HLE) \begin{equation*} Y^{\prime\prime}(x)+P(x) Y^{\prime}(x)+Q(x)Y(x)= 0, \ x \in \mathbb{R}. \end{equation*} Then all solutions of the HLE are given by the formula \[ Y(x) = c_1 Y_1(x) + c_2 Y_2(x), \ \ x \in \mathbb{R}, \] where $c_1$ and $c_2$ are arbitrary constants.
The solution \[ Y(x) = c_1 Y_1(x) + c_2 Y_2(x), \ \ x \in \mathbb{R}, \] is called the general solution of the HLE.
A pair of linearly independent solutions of the HLE is called a fundamental set of solutions of the HLE.
Below, we always have $x \in \mathbb{R}$.
Therefore, $\cos(x)$ and $\sin(x)$ satisfy the following linear homogeneous second-order differential equation with constant coefficients \[ Y''(x) + Y(x) = 0. \] Since the preceding differential equation is a linear homogeneous equation, any linear combination of $\cos(x)$ and $\sin(x)$ is also a solution. In fact, all solutions of the preceding differential equation are given by the following expression \[ c_1 \cos(x) + c_2 \sin(x), \quad \text{where} \quad c_1, c_2 \in \mathbb{R}. \] The preceding expression is called the general solution of $Y''(x) + Y(x) = 0.$
There is something special about $\cos(x)$ and $\sin(x)$ that I must mention here. Look at the values of $\cos(x)$ and $\sin(x)$ and their derivatives at $0$: \[ \begin{array}{c} \Big.\bigl(\cos(x)\bigr)\Big|_{x=0} = 1 \\ \Big.\bigl(\cos(x)\bigr)'\Big|_{x=0} = 0 \end{array} \qquad \text{and} \qquad \begin{array}{c} \Big.\bigl(\sin(x)\bigr)\Big|_{x=0} = 0 \\ \Big.\bigl(\sin(x)\bigr)'\Big|_{x=0} = 1 \end{array} \]
Therefore, $\cos(\mu x)$ and $\sin(\mu x)$ satisfy the following linear homogeneous second-order differential equation with constant coefficients \[ Y''(x) + \mu^2 Y(x) = 0. \] This equation can be written as an "eigenvalue equation" \[ -Y''(x) = \mu^2 Y(x). \] Since the preceding differential equation is a linear homogeneous equation, any linear combination of $\cos(\mu x)$ and $\sin(\mu x)$ is also a solution. In fact, all solutions of the preceding differential equation are given by the following expression \[ c_1 \cos(\mu x) + c_2 \sin(\mu x), \quad \text{where} \quad c_1, c_2 \in \mathbb{R}. \] The preceding expression is called the general solution of $-Y''(x) = \mu^2 Y(x).$
Therefore, $\cosh(\mu x)$ and $\sinh(\mu x)$ satisfy the following linear homogeneous second-order differential equation with constant coefficients \[ Y''(x) - \mu^2 Y(x) = 0. \] This equation can be written as an "eigenvalue equation" \[ Y''(x) = \mu^2 Y(x). \] Since the preceding differential equation is a linear homogeneous equation, any linear combination of $\cosh(\mu x)$ and $\sinh(\mu x)$ is also a solution. In fact, all solutions of the preceding differential equation are given by the following expression \[ c_1 \cosh(\mu x) + c_2 \sinh(\mu x), \quad \text{where} \quad c_1, c_2 \in \mathbb{R}. \] The preceding expression is called the general solution of $Y''(x) = \mu^2 Y(x).$
This equation is a second order linear homogeneous equation that you studied in an Ordinary Differential Equations class. In fact, this is the so-called mass-spring equation; see Wikipedia's page Simple Harmonic Motion and the snippet below.
Thus, the general solution of the second-order linear homogeneous equation in the last orange box is \[ A(x) = \color{#FF0000}{c_1} \cos(\color{#FF0000}{\mu}\mkern 1mu x) + \color{#FF0000}{c_2} \sin(\color{#FF0000}{\mu}\mkern 1mu x). \] The remarkable feature of the above equation for \(A(x)\) is that the redness of \(A(x)\) has been replaced by the redness of the coefficients \(\color{#FF0000}{c_1}\) and \(\color{#FF0000}{c_2}\) and the redness of \(\color{#FF0000}{\mu}\); all being real numbers, this is much better than seeking an unknown function.
Although we did not carry the process through to the end, I will state the final solutions for $A(x)$ and $B(t)$: \begin{align*} B(t) & = \exp\mkern-4mu\left(\!-\frac{m^2 \pi^2}{L^2} \kappa\, t \right), \\ A(x) & = \sin\mkern-4mu\left(\frac{m \pi}{L} x\right), \end{align*} where $\kappa$ and $L$ are given positive real numbers and $m$ is any positive integer.
Thus, we have obtained the sequence of solutions for $u(x,t)$ given by \[ u(x,t) = \exp\mkern-4mu\left(\!-\frac{m^2 \pi^2}{L^2} \kappa\, t \right) \sin\mkern-4mu\left(\frac{m \pi}{L} x\right) \quad \text{where} \quad m \in \mathbb{N}. \] We use $\mathbb{N}$ to denote the set of all positive integers. A good exercise is to verify that the preceding formula for $u(x,t)$ indeed solves the heat equation with the Dirichlet boundary conditions, the problem stated above.
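The suggested exercise can also be checked numerically. The Python sketch below evaluates the residual \(u_t - \kappa\, u_{xx}\) by central finite differences for sample values of \(\kappa\), \(L\), and \(m\) (the specific values and step sizes are my choices; any positive \(\kappa\), \(L\) and positive integer \(m\) behave the same):

```python
import math

KAPPA, L_LEN, M = 0.5, 2.0, 3   # sample parameter values

def u(x, t):
    # u(x,t) = exp(-(m^2 pi^2 / L^2) kappa t) sin((m pi / L) x)
    return (math.exp(-(M**2 * math.pi**2 / L_LEN**2) * KAPPA * t)
            * math.sin(M * math.pi / L_LEN * x))

def u_t(x, t, h=1e-6):
    return (u(x, t + h) - u(x, t - h)) / (2 * h)

def u_xx(x, t, h=1e-4):
    return (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2

# residual of the heat equation u_t = kappa u_xx at a sample point
res = u_t(0.7, 0.3) - KAPPA * u_xx(0.7, 0.3)
```

The residual is at the level of discretization error, and the Dirichlet boundary values \(u(0,t)\) and \(u(L,t)\) vanish.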
The webpage The Laplacian in Polar Coordinates contains a derivation of the Laplacian in polar coordinates: \[ \bigl(\nabla^2 w \bigr) (r,\theta) = \frac{1}{r^2} \frac{\partial^2 w}{\partial \theta^2}(r,\theta) + \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial w}{\partial r}(r,\theta) \right). \] Here $w(r,\theta)$ is a function of $r \in (0,+\infty)$ and $\theta \in [0, 2\pi)$.
For example, consider the following function in the orthogonal coordinate system $xy$, \[ u(x,y) = x y\bigl(x^2 + y^2\bigr), \quad x,y \in \mathbb{R}. \] One can think of this function as giving a temperature at each point in the $xy$-plane. So, at the point $P$ whose coordinates in $xy$ orthogonal coordinate system are $(2,2)$ the temperature is $32.$ The same temperature in polar coordinates is given by the function \[ w(r,\theta) = r^4 (\cos \theta)(\sin \theta), \quad r \in [0,+\infty), \quad \theta \in [0, 2 \pi). \] The same point $P$ from above has coordinates $r = 2\sqrt{2},$ $\theta = \pi/4.$ So, the temperature at the point $P$ is \[ w\bigl(2\sqrt{2}, \pi/4 \bigr) = 2^4 2^2 (\sqrt{2}/2) (\sqrt{2}/2) = 32. \] The Laplacian of the function $u(x,y)$ is \[ (\nabla^2 u)(x,y) = \frac{\partial^2 u}{\partial x^2}(x,y) + \frac{\partial^2 u}{\partial y^2}(x,y) = 12 x y. \] The Laplacian of the function $w(r,\theta)$ is \[ (\nabla^2 w)(r,\theta) = \frac{1}{r^2} \frac{\partial^2 w}{\partial \theta^2}(r,\theta) + \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial w}{\partial r}(r,\theta) \right) = 12 r^2 (\cos \theta)(\sin \theta). \] As expected, we have \[ 12xy \quad \text{in polar coordinates is} \quad 12 r^2 (\cos \theta)(\sin \theta). \]
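Here is a Python sketch of the same comparison (helper names are mine): it approximates the Laplacian of \(u\) at the point \(P\) by central differences and evaluates \(12 r^2 (\cos \theta)(\sin \theta)\) at the corresponding polar coordinates; both give \(12 \cdot 2 \cdot 2 = 48\).

```python
import math

def u(x, y):
    # u(x,y) = x y (x^2 + y^2)
    return x * y * (x**2 + y**2)

def laplacian(f, x, y, h=1e-4):
    # central-difference approximation of f_xx + f_yy
    return ((f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
            + (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2)

x0, y0 = 2.0, 2.0
lap_cartesian = laplacian(u, x0, y0)               # 12 x y = 48 at P

r0, th0 = math.hypot(x0, y0), math.atan2(y0, x0)   # P in polar coordinates
lap_polar = 12 * r0**2 * math.cos(th0) * math.sin(th0)
```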
Part ii. The internal sources in the annulus are given by $Q(r) = r$ where $1 \leq r \leq b$. The units of this function are $\displaystyle \frac{\operatorname{cal}}{\operatorname{cm}^3 \operatorname{sec}}.$ To get the total heat energy generated we need to integrate the function $Q(r)$ over the annulus. Since the height of the annulus is $1$, the total heat energy generated in the annulus is \[ \int_{0}^{2\pi} \int_1^b r \mkern 1mu r \mkern 1mu dr \mkern 1mu d\theta = \frac{2}{3}\bigl(b^3 - 1\bigr)\mkern 1mu \pi. \]
The given boundary conditions tell us about the heat energy that leaves the annulus. Based on Fourier's Law the flux along the inner circle is $-5$ in units $\frac{\operatorname{cal}}{\operatorname{cm}^2 \operatorname{sec}}.$ To get the total heat that leaves the annulus along the inner circle we multiply $5$ with the surface area $2\pi$, getting $10\pi$. The flux along the outer circle is $4$ in units $\frac{\operatorname{cal}}{\operatorname{cm}^2 \operatorname{sec}}.$ To get the total heat that leaves the annulus along the outer circle we multiply $4$ with the surface area $2b\pi$, getting $8 b \pi$. Thus the total amount of the heat energy that leaves the annulus is $10\pi+8 b \pi$.
For the annulus to have the equilibrium temperature we must have the heat energy generated in the annulus equal to the heat energy leaving the annulus. That is: \[ \frac{2}{3}\bigl(b^3 - 1\bigr)\mkern 1mu \pi = 10\pi+8 b \pi. \] The last equation simplifies to \[ b^3 - 12 b - 16 = 0, \] the same equation we obtained in i.
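The cubic $b^3 - 12 b - 16 = 0$ factors as $(b-4)(b+2)^2 = 0$, so its only positive root is $b = 4$. A quick bisection sketch in Python confirms this:

```python
def p(b):
    # the cubic from the energy balance: b^3 - 12 b - 16
    return b**3 - 12 * b - 16

# bisection on [3, 5]: p(3) = -25 < 0 and p(5) = 49 > 0
lo, hi = 3.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if p(mid) <= 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2   # converges to b = 4
```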
I have often thought about this kind of question. This is a summary of my thinking.
In the problem, I asked you to present your best argument. Perhaps unfairly, I left the decision to you. In hindsight, I should have also asked you to evaluate the strong and weak points of your argument.
As for omitting calculations: it is often done—I do it myself. However, whenever I do, I feel that I am shortchanging both myself and my reader. So why do we omit them? Often, it's a matter of managing time and resources effectively. Still, I personally see value in presenting more rather than less, which is one of the motivations behind writing my website.
As students, I encourage you to assess our presentation of content objectively and to challenge us more when we omit too much.
In conclusion, this is a personal decision based on individual objectives and priorities.
To help you with Problem 3 from Assignment 1, I included the following image of over 130 characteristics of Burgers' equation with the assigned initial condition.
Working on the above image inspired me to turn it into a piece of art:
Click on the image below to cycle through different versions.
I have posted pictures of the solution to Problem 1 from Assignment 1 below. These images may help you identify a special function that can simplify the solution and make it more convenient to express.
I am sharing these pictures not only because they might be useful but also because I find them aesthetically pleasing and want to share their beauty with you.
Click on the image below to cycle through different versions.
In Problem 2 from Assignment 1, I ask you to illustrate solutions that you found with Wolfram Mathematica plots. Solutions of Problem 2 are piecewise defined continuous functions of two variables.
Below I present plots of two different piecewise defined continuous functions of two variables created in Wolfram Mathematica and the corresponding code.
The first function is \[ f(x, y) = \begin{cases} x^2 + y^2, & \text{if} \mkern 15mu x^2 + y^2 \leq 1, \\[5pt] \sqrt{x^2 + y^2}, & \text{if} \mkern 15mu x^2 + y^2 \gt 1. \end{cases} \]
At the points inside the unit circle in the \(xy\)-plane, the function is defined to be the rotated paraboloid \(z = x^2 + y^2\). Outside the unit circle, the function is defined to be the rotated cone \(z = \sqrt{x^2 + y^2}\). The function is continuous, but not differentiable along the unit circle at level \(z=1\). You can see that the function is not differentiable by looking at the plot, and you can prove it formally using what you learned in a multivariable calculus class.
Notice that in the above picture, I placed the \(z\)-axis to point downwards. The code that I used to produce the above picture is as follows.
Clear[ff]; ff[x_, y_] := Piecewise[{{x^2 + y^2, x^2 + y^2 <= 1}, {Sqrt[x^2 + y^2], x^2 + y^2 > 1}}]; plotSampleff = Plot3D[ff[x, y], {x, -2, 2}, {y, -2, 2}, PlotRange -> All, AxesLabel -> {"x", "y", "z"}, PlotPoints -> 200, Mesh -> None, Exclusions -> None, BoxRatios -> {1, 1, 1}, ImageSize -> 600, ViewPoint -> {-2.85169, -1.46541, -1.08185}, ViewVertical -> {0.284368, 0.14613, -0.947513}]
You can copy and paste this code directly into your Mathematica notebook.
The second function is \[ g(x, y) = \begin{cases} x^2 + y^2, & \text{if} \mkern 15mu x^2 + y^2 \leq 1, \\[5pt] 2 \sqrt{x^2 + y^2} - 1, & \text{if} \mkern 15mu x^2 + y^2 \gt 1. \end{cases} \]
At the points inside the unit circle in the \(xy\)-plane, the function is defined to be the rotated paraboloid \(z = x^2 + y^2\). Outside the unit circle, the function is defined to be the rotated cone \(z = 2\sqrt{x^2 + y^2} - 1\). I selected this cone since it is tangent to the paraboloid along the unit circle at the level \(z=1\). The function is continuous and differentiable on \(\mathbb{R}^2\). You can see from the plot that the function appears differentiable, and you can prove it formally using what you learned in a multivariable calculus class.
Notice that in the above picture, I placed the \(z\)-axis pointing downwards. The code that I used to produce the above picture is as follows.
Clear[gg]; gg[x_, y_] := Piecewise[{{x^2 + y^2, x^2 + y^2 <= 1}, {2 Sqrt[x^2 + y^2] - 1, x^2 + y^2 > 1}}]; plotSamplegg = Plot3D[gg[x, y], {x, -2, 2}, {y, -2, 2}, PlotRange -> All, AxesLabel -> {"x", "y", "z"}, PlotPoints -> 200, Mesh -> None, Exclusions -> None, BoxRatios -> {1, 1, 1}, ImageSize -> 600, ViewPoint -> {-2.8517, -1.4654, -1.0818}, ViewVertical -> {0.2844, 0.1461, -0.9475}]
You can copy and paste this code directly into your Mathematica notebook.
Since the function \(g(x,y)\) is differentiable, in the picture above one cannot see that the function is defined piecewise. Therefore, I decided to make a picture with the unit circle at the level \(z=1\) emphasized.
The code that I used to produce the above picture is as follows.
plotSampleggc = Show[plotSamplegg, ParametricPlot3D[{Cos[t], Sin[t], 1}, {t, 0, 2 Pi}, PlotStyle -> {RGBColor[0, 0.5, 0], Thickness[0.01]}, PlotPoints -> 200], AxesLabel -> {"x", "y", "z"}, BoxRatios -> {1, 1, 1}, ImageSize -> 600, ViewPoint -> {-2.8517, -1.4654, -1.0818}, ViewVertical -> {0.2844, 0.1461, -0.9475}]
You can copy and paste this code directly into your Mathematica notebook.
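For \(g(x,y)\), by contrast, the gradients of the two pieces agree on the unit circle, which is the key step in the formal differentiability proof. Here is a short SymPy sketch of that check (my own addition, not part of the original notes):

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
inner = x**2 + y**2                   # paraboloid piece
outer = 2*sp.sqrt(x**2 + y**2) - 1    # tangent cone piece

# Gradients of the two pieces
grad_inner = [sp.diff(inner, v) for v in (x, y)]
grad_outer = [sp.diff(outer, v) for v in (x, y)]

# On the unit circle x = cos(t), y = sin(t) the two gradients coincide
on_circle = {x: sp.cos(t), y: sp.sin(t)}
mismatch = [sp.simplify((gi - go).subs(on_circle))
            for gi, go in zip(grad_inner, grad_outer)]
print(mismatch)  # [0, 0]
```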
When exporting graphics, the first thing to be aware of is where on your computer the graphics will be exported. Mathematica provides several commands for checking that.
First, it is always good to know where your Mathematica notebook is located on your computer. You can find out by executing
NotebookDirectory[]
Sometimes it is even useful to know the name of your Mathematica notebook:
NotebookFileName[]
You can ask Mathematica where, by default, it exports files by executing
Directory[]
To tell Mathematica to export files to the directory where your current Mathematica notebook resides, execute
SetDirectory[NotebookDirectory[]]
Instead of NotebookDirectory[] you can put any specific directory, like
SetDirectory["C:\\Dropbox"]
Exporting two-dimensional graphics is easy. When you create a graphic in Mathematica, name it, for example, myplot. Then, to export this graphic as a PNG file, use
Export["myplot.png", myplot]
You can use different options in Export[]. I often like to control the size and the resolution of the exported picture:
Export["myplot.png", myplot, ImageSize -> 600, ImageResolution -> 600]
Instead of PNG, one can use JPEG, GIF, EPS, SVG, PDF.
When exporting three-dimensional graphics, the simple form of the Export[] command may not preserve the exact orientation and appearance of the graphics as displayed in Mathematica. Below is a reliable workaround to address this issue.
First, create and name the graphics as shown in the example below:
Clear[ff]; ff[x_, y_] := Piecewise[{{x^2 + y^2, x^2 + y^2 <= 1}, {Sqrt[x^2 + y^2], x^2 + y^2 > 1}}]; plotSampleff = Plot3D[ff[x, y], {x, -2, 2}, {y, -2, 2}, PlotRange -> All, AxesLabel -> {"x", "y", "z"}, PlotPoints -> 200, Mesh -> None, Exclusions -> None, BoxRatios -> {1, 1, 1}, ImageSize -> 600, ViewPoint -> {-2.85169, -1.46541, -1.08185}, ViewVertical -> {0.284368, 0.14613, -0.947513}]
Next, extract the Options from the created graphics and assign them a name (I like using opts) for later use, for example:
opts = Options[plotSampleff]
Finally, use the Export[] command with Show[] to ensure all options are preserved:
Export["plotSampleff.png", Show[plotSampleff, opts], ImageSize -> 600, ImageResolution -> 600]
This approach ensures that the exported graphics retain the same orientation, viewpoint, and appearance as in the Mathematica notebook.
These formulas hide the solution \(u(x,y)\).
Recall how we obtained the function \(Z(s)\), now \(Z(s,\xi)\): \[ Z(s,\xi) = u\bigl(X(s,\xi),Y(s,\xi)\bigr). \] In our case \[ u\left(\tfrac{\xi }{1-\xi s},s\right) = f(\xi). \] But we need the solution in terms of the coordinates \(x\) and \(y\). So, given a point \((x,y)\), we ask which \(s\) and \(\xi\) correspond to that point. We solve \[ x = \frac{\xi }{1-\xi s}, \quad y = s, \] for \(s\) and \(\xi\). The solution is \[ s = y, \quad \xi = \frac{x}{x y+1}. \] Hence, at the point \((x,y)\), the value of the solution is \[ u(x,y) = f\left(\frac{x}{x y+1} \right). \]
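The inversion of the characteristic change of variables can be double-checked symbolically. Here is a short SymPy sketch of that check (my own verification, not from the original post):

```python
import sympy as sp

x, y, s, xi = sp.symbols('x y s xi')

# Invert the characteristic map x = xi/(1 - xi*s), y = s
sol = sp.solve([sp.Eq(x, xi/(1 - xi*s)), sp.Eq(y, s)], [s, xi], dict=True)[0]
print(sol[s], sp.simplify(sol[xi] - x/(x*y + 1)))  # y 0

# Consistency: substituting the characteristic back into x/(x*y + 1)
# recovers the label xi, so u(x, y) = f(x/(x*y + 1)) agrees with u = f(xi)
recovered = sp.simplify((x/(x*y + 1)).subs({x: xi/(1 - xi*s), y: s}))
print(recovered)  # xi
```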
Recall the post of Saturday, January 11, 2025. We introduced the unknown function \(u(x,t)\) as follows \[ u(x,t) = \biggl({\huge e}^{^{\Large t \mkern 0.5mu \frac{d}{dx}}}\biggr) f(x), \quad \normalsize \text{where} \quad \large x, t \in \mathbb{R}. \] For this function we established that it satisfies the initial value problem for the following partial differential equation: \[ \frac{\partial u}{\partial t}(x,t) = \frac{\partial u}{\partial x}(x,t), \qquad u(x,0) = f(x). \] Here, instead of the variable \(y\), we use the variable \(t\). The reason for this is that in the context of Saturday's post this variable stands for time. Sometimes the equations studied in Saturday's post are called evolution equations, meaning evolution in time.
However, the names of variables are interchangeable. Therefore, the equation that we solved today gives us the solution of the equation that we established on Saturday.
The solution obtained today gives us the function \(u(x,t)\) that we introduced on Saturday: \[ u(x,t) = f(x+t). \]
Combining the last formula with the meaning of \(u(x,t)\) from Saturday: \[ u(x,t) = \biggl({\huge e}^{^{\Large t \mkern 0.5mu \frac{d}{dx}}}\biggr) f(x), \quad \normalsize \text{where} \quad \large x, t \in \mathbb{R}. \] we have the formula for the exponential function with \(t \frac{d}{dx}\) in the exponent: \[ \biggl({\huge e}^{^{\Large t \mkern 0.5mu \frac{d}{dx}}}\biggr) f(x) = f(x+t), \quad \text{where} \quad \large x, t \in \mathbb{R}. \] Thus, for each \(t\in\mathbb{R}\) the transformation \(\displaystyle {\huge e}^{^{\Large t \mkern 0.5mu \frac{d}{dx}}}\) acts on a function \(f(x)\) by shifting its graph horizontally. The shift is to the left if \(t \gt 0\) and to the right if \(t \lt 0\).
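One way to make the formula \(e^{t\,d/dx} f(x) = f(x+t)\) tangible is to truncate the exponential series \(\sum_{k} \frac{t^k}{k!}\frac{d^k f}{dx^k}\) and compare the result with the shifted function. A small SymPy sketch (my own illustration; the test function \(\sin\) and the sample point are arbitrary choices):

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.sin(x)  # arbitrary smooth test function

# Truncated exponential series: sum_{k=0}^{N} t^k/k! * d^k f / dx^k
N = 20
series = sum(t**k / sp.factorial(k) * sp.diff(f, x, k) for k in range(N + 1))

# Compare with the shifted function f(x + t) at a sample point
approx = float(series.subs({x: 0.7, t: 0.5}))
exact = float(sp.sin(sp.Rational(12, 10)))  # sin(0.7 + 0.5)
print(abs(approx - exact) < 1e-12)  # True
```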
I say "you can verify that". I need to disclose that this is quite a bit of work. For example, to verify \[ {\Huge e}^{^{ \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} {\Large t}}} = \begin{bmatrix} \cos(t) & -\sin(t) \\[5pt]\sin(t) & \cos(t) \end{bmatrix}, \] we need to verify that \[ \begin{bmatrix} \cos(0) & -\sin(0) \\[5pt]\sin(0) & \cos(0) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\[5pt] 0 & 1 \end{bmatrix}. \] Ok, this is straightforward from the definitions of trigonometric functions. And we need to verify \[ \frac{d}{dt} \begin{bmatrix} \cos(t) & -\sin(t) \\[5pt]\sin(t) & \cos(t) \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \cos(t) & -\sin(t) \\[5pt]\sin(t) & \cos(t) \end{bmatrix}, \] which is an exercise in trigonometric derivatives and matrix multiplication.
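The two verification steps described above for the rotation-matrix example can also be carried out symbolically. Here is a SymPy sketch of exactly that check (my addition, not part of the original post):

```python
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[0, -1], [1, 0]])
R = sp.Matrix([[sp.cos(t), -sp.sin(t)],
               [sp.sin(t),  sp.cos(t)]])

# Step 1: R(0) is the identity matrix
print(R.subs(t, 0))  # Matrix([[1, 0], [0, 1]])

# Step 2: dR/dt equals A*R (the difference simplifies to the zero matrix)
print(sp.simplify(R.diff(t) - A * R))  # Matrix([[0, 0], [0, 0]])
```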
Place the cursor over the image to see the diffusion of the dye.
You:
Can you please write a complete LaTeX file with instructions on using basic mathematical operations, like fractions, sums, integrals, basic functions, like cosine, sine, and exponential function, and how to structure a document and similar features? Please explain the difference between the inline and displayed mathematical formulas. Please include examples of different ways of formatting displayed mathematical formulas. Please include what you think would be useful to a mathematics student. Also, can you please include your favorite somewhat complicated mathematical formula as an example of the power of LaTeX? I emphasize I want a complete file that I can copy into the LaTeX compiler and compile into a pdf file. Please ensure that your document contains the code for the formulas you are writing, which displays both as code separately from compiled formulas. Also, please double-check that your code compiles correctly. Remember that I am a beginner and cannot fix the errors. Please act as a concerned teacher would do.
This is the LaTeX document that ChatGPT produced based on the above prompt. Here is the compiled PDF document.
You can ask ChatGPT for specific LaTeX advice. To get a good response, think carefully about your prompt. You can also give ChatGPT a sample of short mathematical writing from the web or a book as a PNG file, and it will convert that writing to LaTeX. You can even try it with neat handwriting. The results will, of course, depend on the clarity of the file, and ChatGPT makes mistakes, but I have found it incredibly useful.