Math 342 - Week 13 Notes

Mon, Apr 18

Today we talked about how to approximate the solution of a first-order differential equation \[\frac{dy}{dt} = f(t,y)\] on an interval \([a,b]\), given an initial condition \(y(a) = y_0\). One way to do this is Euler's method, which is the following algorithm:

Euler’s Method

If \(\displaystyle \frac{dy}{dt} = f(t,y)\), then \(\Delta y \approx f(t,y) \Delta t\). If you fix \(\Delta t = h\), then you have the following algorithm:

Algorithm

  1. Choose a number of steps \(n\) and compute \(h = \frac{b-a}{n}\).
  2. Initialize
    1. \(t = a\) and
    2. \(y = y_0\).
  3. For \(i\) from 1 to \(n\) do:
    1. Add \(f(t,y) \cdot h\) to \(y\)
    2. Add \(h\) to \(t\)

We coded this algorithm in Python and used it to approximate the solution to the differential equation

  1. \(\displaystyle\frac{dy}{dt} = y - t^2 + 1\) on \([0,2]\) with initial condition \(y(0) = 0.5\).
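
Here is a minimal sketch of one way to code this computation (the function name euler and the choice of \(n = 20\) steps are just for illustration; the code we wrote in class may have been organized differently):

def euler(f, a, b, n, y0):
    # Approximate the solution of dy/dt = f(t, y), y(a) = y0, on [a, b]
    # using n steps of Euler's method.  Returns the final approximation of y(b).
    h = (b - a) / n              # step 1: compute the step size
    t, y = a, y0                 # step 2: initialize t and y
    for i in range(n):           # step 3: repeat n times
        y += f(t, y) * h         #   add f(t, y) * h to y
        t += h                   #   add h to t
    return y

f = lambda t, y: y - t**2 + 1
print(euler(f, 0, 2, 20, 0.5))   # approximation of y(2)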

Wed, Apr 20

We talked more about the Initial Value Problem (abbreviated IVP) for first-order ordinary differential equations (abbreviated ODEs). We discussed this theorem:

Existence & Uniqueness of Solutions

If \(f(t,y)\) is continuous and the partial derivative \(\left|\frac{\partial f}{\partial y}\right|\) is bounded for all \(t \in [a,b]\), then the initial value problem \[\frac{dy}{dt} = f(t,y), ~~~~ y(a) = y_0\] has a unique solution \(y(t)\) defined on the interval \([a,b]\).

We looked at these examples:

  1. The differential equation \(\dfrac{dy}{dt} = \dfrac{-t}{y}\) has solutions that follow circular arcs centered at the origin. The IVP with initial condition \(y(-1) = 0\) has two solutions, \(y = \sqrt{1-t^2}\) and \(y = -\sqrt{1-t^2}\). Why doesn’t that contradict the theorem above? Which condition of the theorem fails for this IVP?

  2. Show that \(y = 2e^{5t}\) is the unique solution of the IVP \(y' = 5y\) with \(y(0) = 2\). How do you know that this is the only solution?

  3. The explosion equation is the ODE \[\frac{dy}{dt} = k y^2.\] The general solution is \(y(t) = \dfrac{1}{C-kt}\) (see the separation of variables computation after this list). If \(k = 1\), then the solution of the IVP with \(y(0) = 1\) is \(y(t) = \dfrac{1}{1-t}\). But that solution is not defined when \(t=1\) (it has a vertical asymptote there). Why is that consistent with the theorem above?

  4. For our last example, we looked at a model of a population growing, but with a seasonal limit on the population size. The model for the population \(y\) was \[\frac{dy}{dt} = \alpha y (\beta + \gamma \cos(2 \pi t) - y).\] Explain why the initial value problem has a unique solution with any initial condition.
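
As an aside, here is one way to derive the solution formula in example 3, using separation of variables (writing \(C = -C_1\) for the constant of integration):

\[\begin{align*} \frac{dy}{dt} = ky^2 &\implies \int y^{-2}\, dy = \int k\, dt \\ &\implies -\frac{1}{y} = kt + C_1 \\ &\implies y = \frac{1}{-C_1 - kt} = \frac{1}{C - kt}. \end{align*}\]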

To analyze the population example we used this Python code:

import numpy as np
import matplotlib.pyplot as plt

def EulersMethod(f, a, b, n, y0):
    # Returns two lists, one of t-values and the other of y-values.
    h = (b - a) / n                  # step size
    t, y = a, y0                     # initial condition
    tvals, yvals = [], []
    for i in range(n + 1):
        tvals.append(t)
        yvals.append(y)
        y += f(t, y) * h             # add f(t, y) * h to y
        t += h                       # add h to t
    return tvals, yvals


# Population model with a seasonal limit on the population size
alpha, beta, gamma = 0.5, 1, 0.1
f = lambda t, y: alpha * y * (beta + gamma * np.cos(2 * np.pi * t) - y)

tvals, yvals = EulersMethod(f, 0, 10, 1000, 0.1)
plt.plot(tvals, yvals)
plt.show()

Fri, Apr 22

We started today with this question:

  1. Suppose \(y \in C^2[a,b]\) and \(t_i, t_{i+1} \in [a,b]\) with \(t_{i+1}-t_i = h\). Use the first degree Taylor polynomial centered at \(t_i\) to estimate \(y(t_{i+1})\) based on \(y(t_i)\) and \(y'(t_i)\). What is the remainder term for this expression?

Once we figured that out, we looked at how the iterates \(w_i\) generated by Euler’s method differ from the values of the true solution. At each step, we have the correct value \(y(t_i)\) and the Euler’s method approximation \(w_i\). The gap between the two is the error \(E_i = |y(t_i) - w_i|\). If we know \(E_i\), we can estimate how much worse the error will be after the next step.

The next step error \(E_{i+1}\) is the gap between \(y(t_{i+1})\) and \(w_{i+1}\), where \[y(t_{i+1}) = y(t_i) + y'(t_i) h + \tfrac{1}{2}y''(\xi) h^2\] for some \(\xi\) between \(t_i\) and \(t_{i+1}\), and \[w_{i+1} = w_{i} + f(t_i,w_i)h + \delta_i\] where \(\delta_i\) represents any rounding error in computing \(w_{i+1}\). We will assume that the rounding error at each step is bounded by a single constant \(\delta\) (which is usually very small).
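
Subtracting the formula for \(w_{i+1}\) from the formula for \(y(t_{i+1})\), and using the fact that the true solution satisfies \(y'(t_i) = f(t_i, y(t_i))\), the gap is \[y(t_{i+1}) - w_{i+1} = \bigl(y(t_i) - w_i\bigr) + \bigl(f(t_i, y(t_i)) - f(t_i, w_i)\bigr)h + \tfrac{1}{2} y''(\xi) h^2 - \delta_i.\]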

Estimating the gap using the triangle inequality, a Lipschitz constant \(L\) for \(f\) in the variable \(y\), and a bound \(M\) for \(|y''|\) on \([a,b]\), we get \[\begin{align*}E_{i+1} = |y(t_{i+1}) - w_{i+1}| &\le E_i + Lh E_i + \tfrac{1}{2} M h^2 + \delta \\ &= E_i r + c \end{align*}\] where \(r = 1+Lh\) and \(c = \tfrac{1}{2} M h^2 + \delta\). Now there is a nice simple pattern for how the error grows. Initially \(E_0 = 0\). Then \[E_1 \le c\] \[E_2 \le rc + c\] \[E_3 \le r^2c + rc + c\] \[\vdots\] \[E_n \le r^{n-1} c + \ldots + rc + c.\] This is a geometric sum, so there is a formula: \[\begin{align*}E_n \le \left( \frac{r^n - 1}{r-1} \right) c &= \frac{(1+L\frac{b-a}{n})^n - 1}{L} \left(\frac{Mh}{2} + \frac{\delta}{h}\right) \\ &\le \frac{e^{L(b-a)} - 1}{L} \left(\frac{Mh}{2} + \frac{\delta}{h}\right), \end{align*}\] since \((1+Lh)^n \le e^{Lhn} = e^{L(b-a)}\).

The main take-away from this long calculation is that there is a limit to how accurate you can make Euler’s method. Making \(h\) smaller reduces the error at first, but eventually the accumulated round-off error from taking so many steps takes over. Since the bound above is a constant times \(\frac{Mh}{2} + \frac{\delta}{h}\), we can find the \(h\) that minimizes the worst-case error by setting the derivative \(\frac{M}{2} - \frac{\delta}{h^2}\) equal to zero, which gives \[h = \sqrt{\frac{2\delta}{M}}.\]
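
Here is a short sketch (not code from class) that evaluates the factor \(\frac{Mh}{2} + \frac{\delta}{h}\) over a range of step sizes and compares the numerically best \(h\) with the formula \(\sqrt{2\delta/M}\). The values of M and delta below are just placeholders:

import numpy as np

# Placeholder values for the bound on |y''| and the per-step rounding error.
M = 2.0
delta = 1e-12

# Evaluate the factor M*h/2 + delta/h over a wide range of step sizes.
hvals = np.logspace(-9, -1, 400)
bound_factor = M * hvals / 2 + delta / hvals

# Step size with the smallest worst-case bound, found numerically,
# compared with the formula h = sqrt(2*delta/M).
h_best_numeric = hvals[np.argmin(bound_factor)]
h_best_formula = np.sqrt(2 * delta / M)
print(h_best_numeric, h_best_formula)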

  1. What \(h\) minimizes the worst-case error in Euler’s method when \(\delta = 10^{-12}\) and \(M = 200\)?