Today we talked about how to approximate the solution of a first order differential equation \[\frac{dy}{dt} = f(t,y)\] given an initial condition \(y(a) = y_0\) and an interval \([a,b]\). One way to do this is Euler’s method.
Since \(\displaystyle \frac{dy}{dt} = f(t,y)\), we have \(\Delta y \approx f(t,y) \Delta t\). If you fix a step size \(\Delta t = h\), then you get the following algorithm: start with \(t_0 = a\) and \(w_0 = y_0\), and repeatedly set \[t_{i+1} = t_i + h, \qquad w_{i+1} = w_i + f(t_i, w_i)\,h.\]
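As a quick sketch (the helper name `euler_step` and the right-hand side \(f(t,y) = y\) are just illustrative choices, not from class), one step of this algorithm in Python looks like:

```python
def euler_step(f, t, y, h):
    # One Euler step: Delta y is approximated by f(t, y) * h.
    return t + h, y + f(t, y) * h

# Illustrative example: dy/dt = y with y(0) = 1 and step size h = 0.1.
f = lambda t, y: y
t, y = euler_step(f, 0.0, 1.0, 0.1)
print(t, y)  # t = 0.1, y = 1.1 (the true solution e^0.1 is about 1.105)
```

Repeating this step until \(t\) reaches \(b\) produces the whole approximate solution.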
We coded this algorithm in Python and used it to approximate the solution to the differential equation
We said more about the Initial Value Problem (abbreviated IVP) for first order ordinary differential equations (abbreviated ODEs), starting with this theorem:
If \(f(t,y)\) is continuous and the partial derivative \(\left|\frac{\partial f}{\partial y}\right|\) is bounded for all \(t \in [a,b]\), then the initial value problem \[\frac{dy}{dt} = f(t,y), \qquad y(a) = y_0\] has a unique solution \(y(t)\) defined on the interval \([a,b]\).
We looked at these examples:
The differential equation \(\dfrac{dy}{dt} = \dfrac{-t}{y}\) has solutions that follow circular arcs centered at the origin. The IVP \(y(-1) = 0\) has two solutions \(y = \sqrt{1-t^2}\) and \(y = -\sqrt{1-t^2}\). Why doesn’t that contradict the theorem above? Which condition of the theorem fails for this IVP?
Show that \(y = 2e^{5t}\) is the unique solution of the IVP \(y' = 5y\) with \(y(0) = 2\). How do you know that this is the only solution?
The explosion equation is the ODE \[\frac{dy}{dt} = k y^2.\] The general solution is \(y(t) = \frac{1}{C-kt}\). If \(k = 1\), then a solution to the IVP with \(y(0) = 1\) is \(y(t) = \dfrac{1}{1-t}\). But that solution is not defined when \(t=1\) (it has a vertical asymptote). Why is that consistent with the theorem above?
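A quick numerical check (illustrative, not from class): the formula \(y(t) = \frac{1}{1-t}\) really does satisfy \(y' = y^2\) for \(t < 1\), and it blows up as \(t \to 1^-\):

```python
y = lambda t: 1.0 / (1.0 - t)

h = 1e-6
for t in (0.0, 0.5, 0.9):
    deriv = (y(t + h) - y(t - h)) / (2 * h)   # central-difference derivative
    assert abs(deriv - y(t) ** 2) < 1e-3      # y' = y^2 holds numerically

assert y(0.999999) > 1e5   # the solution explodes near the asymptote t = 1
print("checks passed")
```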
For our last example, we looked at a model of a population growing, but with a seasonal limit on the population size. The model for the population \(y\) was \[\frac{dy}{dt} = \alpha y (\beta + \gamma \cos(2 \pi t) - y).\] Explain why the initial value problem has a unique solution with any initial condition.
To analyze the population example we used this Python code:
import numpy as np
import matplotlib.pyplot as plt

def EulersMethod(f, a, b, n, y0):
    # Returns two lists, one of t-values and the other of y-values.
    h = (b - a) / n
    t, y = a, y0
    tvals, yvals = [], []
    for i in range(n + 1):
        tvals.append(t)
        yvals.append(y)
        y += f(t, y) * h
        t += h
    return tvals, yvals

alpha, beta, gamma = 0.5, 1, 0.1
f = lambda t, y: alpha * y * (beta + gamma * np.cos(2 * np.pi * t) - y)

tvals, yvals = EulersMethod(f, 0, 10, 1000, 0.1)
plt.plot(tvals, yvals)
plt.show()
We started today with this question:
Once we figured that out, we looked at how the iterates \(w_i\) generated by Euler’s method differ from the actual values of the true solution. At each step we have the correct y-value \(y(t_i)\) and the Euler’s method approximation \(w_i\); the gap between the two is the error \(E_i = |y(t_i) - w_i|\). If we know \(E_i\), we can estimate how much worse the error will be after the next step.
The next step error \(E_{i+1}\) is the gap between \(y(t_{i+1})\) and \(w_{i+1}\), where \[y(t_{i+1}) = y(t_i) + y'(t_i) h + \tfrac{1}{2}y''(\xi) h^2\] and \[w_{i+1} = w_{i} + f(t_i,w_i)h + \delta_i\] where \(\delta_i\) is a term that represents any rounding error in computing \(w_{i+1}\). We will assume that all of the rounding error at each step is bounded by a single constant \(\delta\) (that is usually very small).
Estimating the gap, we get \[\begin{align*}E_{i+1} = |y(t_{i+1}) - w_{i+1}| &\le E_i + Lh E_i + \tfrac{1}{2} M h^2 + \delta \\ &= r E_i + c \end{align*}\] where \(r = 1+Lh\) and \(c = \tfrac{1}{2} M h^2 + \delta\). Here \(L\) is a bound on \(\left|\frac{\partial f}{\partial y}\right|\) and \(M\) is a bound on \(|y''|\). Now there is a nice simple pattern for how the error grows. Initially \(E_0 = 0\). Then \[E_1 \le c\] \[E_2 \le rc + c\] \[E_3 \le r^2c + rc + c\] \[\vdots\] \[E_n \le r^{n-1} c + \ldots + rc + c.\] This is a geometric sum, so there is a formula: \[\begin{align*}E_n \le \left( \frac{r^n - 1}{r-1} \right) c &= \frac{(1+L\frac{b-a}{n})^n - 1}{L} \left(\frac{Mh}{2} + \frac{\delta}{h}\right) \\ &\le \frac{e^{L(b-a)} - 1}{L} \left(\frac{Mh}{2} + \frac{\delta}{h}\right). \end{align*}\]
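To see the bound in action, here is a small check on the test problem \(y' = y\), \(y(0) = 1\) on \([0,1]\), whose true solution is \(y = e^t\). The choice of test problem and the constants \(L = 1\) (since \(|\partial f/\partial y| = 1\)), \(M = e\) (which bounds \(|y''| = e^t\) on the interval), and \(\delta = 10^{-16}\) (a rough machine-precision guess) are our own illustrative assumptions:

```python
import numpy as np

# Euler's method on y' = y, y(0) = 1, over [0, 1].
a, b, n = 0.0, 1.0, 1000
h = (b - a) / n
L, M, delta = 1.0, np.e, 1e-16   # assumed constants for this test problem

t, w = a, 1.0
for i in range(n):
    w += w * h          # Euler step for f(t, y) = y
    t += h

actual_error = abs(np.e - w)
bound = (np.exp(L * (b - a)) - 1) / L * (M * h / 2 + delta / h)
print(actual_error, bound)
assert actual_error <= bound    # the worst-case bound holds
```

With these values the actual error is roughly \(1.4 \times 10^{-3}\) while the bound is roughly \(2.3 \times 10^{-3}\), so the estimate is not far from sharp on this example.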
The main take-away from this long calculation is that there is a limit to how accurate you can make Euler’s method. Making \(h\) smaller shrinks the truncation term \(\frac{Mh}{2}\), but the accumulated round-off term \(\frac{\delta}{h}\) grows because you take so many more steps. Minimizing the sum \(\frac{Mh}{2} + \frac{\delta}{h}\) (set its derivative \(\frac{M}{2} - \frac{\delta}{h^2}\) equal to zero), the minimum worst case error occurs when \[h = \sqrt{\frac{2\delta}{M}}.\]