# General linear methods

General linear methods (GLMs) are a large class of numerical methods used to obtain numerical solutions to ordinary differential equations. They encompass multistage Runge–Kutta methods, which use intermediate stage values, as well as linear multistep methods, which save a finite time history of the solution. John C. Butcher coined the term for these methods and has written a series of review papers, a book chapter, and a textbook on the topic. His collaborator, Zdzislaw Jackiewicz, has also written an extensive textbook on the topic. The original class of methods was proposed by Butcher (1965), Gear (1965), and Gragg and Stetter (1964).

## Some definitions

Numerical methods for first-order ordinary differential equations approximate solutions to initial value problems of the form

$y'=f(t,y),\quad y(t_{0})=y_{0}.$

The result is approximations for the value of $y(t)$ at discrete times $t_{i}$:

$y_{i}\approx y(t_{i})\quad {\text{where}}\quad t_{i}=t_{0}+ih,$

where $h$ is the time step (sometimes referred to as $\Delta t$).

## A description of the method

We follow Butcher (2006), pp. 189–190, for our description, although this method can be found elsewhere.

General linear methods make use of two integers: $r$, the number of approximations carried over from step to step (the time history), and $s$, the number of internal stages. In the case of $r=1$, these methods reduce to classical Runge–Kutta methods, and in the case of $s=1$, they reduce to linear multistep methods.

Stage values $Y_{i}$ and stage derivatives $F_{i}$, $i=1,2,\dots ,s$, are computed from the approximations $y_{i}^{[n-1]}$, $i=1,\dots ,r$, carried over to time step $n$:

$y^{[n-1]}=\left[{\begin{matrix}y_{1}^{[n-1]}\\y_{2}^{[n-1]}\\\vdots \\y_{r}^{[n-1]}\\\end{matrix}}\right],\quad y^{[n]}=\left[{\begin{matrix}y_{1}^{[n]}\\y_{2}^{[n]}\\\vdots \\y_{r}^{[n]}\\\end{matrix}}\right],\quad Y=\left[{\begin{matrix}Y_{1}\\Y_{2}\\\vdots \\Y_{s}\end{matrix}}\right],\quad F=\left[{\begin{matrix}F_{1}\\F_{2}\\\vdots \\F_{s}\end{matrix}}\right].$

The stage values are defined by two matrices, $A=[a_{ij}]$ and $U=[u_{ij}]$:

$Y_{i}=\sum _{j=1}^{s}a_{ij}hF_{j}+\sum _{j=1}^{r}u_{ij}y_{j}^{[n-1]},\qquad i=1,2,\dots ,s,$

and the update to time $t_{n}$ is defined by two matrices, $B=[b_{ij}]$ and $V=[v_{ij}]$:

$y_{i}^{[n]}=\sum _{j=1}^{s}b_{ij}hF_{j}+\sum _{j=1}^{r}v_{ij}y_{j}^{[n-1]},\qquad i=1,2,\dots ,r.$

Given the four matrices $A$, $U$, $B$ and $V$, one can compactly write the analogue of a Butcher tableau as

$\left[{\begin{matrix}Y\\y^{[n]}\end{matrix}}\right]=\left[{\begin{matrix}A\otimes I&U\otimes I\\B\otimes I&V\otimes I\end{matrix}}\right]\left[{\begin{matrix}F\\y^{[n-1]}\end{matrix}}\right],$

where $\otimes$ stands for the tensor product, and $F=f(Y)$ is applied componentwise, that is, $F_{i}=f(Y_{i})$.
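As a concrete sketch of how the four matrices drive one step, the function below (a hypothetical helper, not from the cited texts) evaluates the stages and then the update for a scalar ODE $y'=f(y)$. It assumes an explicit method, i.e. a strictly lower triangular $A$, so each stage can be computed directly; implicit methods would instead require solving equations for the stages.

```python
import numpy as np

def glm_step(f, y_in, h, A, U, B, V):
    """One step of an explicit general linear method (strictly
    lower-triangular A) for a scalar ODE y' = f(y).

    y_in is the length-r vector y^{[n-1]} of quantities carried over
    from the previous step; the return value is y^{[n]}.
    """
    s = A.shape[0]
    F = np.zeros(s)
    for i in range(s):
        # Y_i = sum_j a_ij h F_j + sum_j u_ij y_j^{[n-1]}
        Y_i = h * (A[i, :i] @ F[:i]) + U[i] @ y_in
        F[i] = f(Y_i)
    # y^{[n]} = B (h F) + V y^{[n-1]}
    return h * (B @ F) + V @ y_in

# With r = s = 1 and A = [0], U = B = V = [1], the step reduces to
# forward Euler: y_n = y_{n-1} + h f(y_{n-1}).
y_next = glm_step(lambda y: y, np.array([1.0]), 0.1,
                  np.zeros((1, 1)), np.ones((1, 1)),
                  np.ones((1, 1)), np.ones((1, 1)))
```

The Euler call at the end illustrates the $r=1$, $s=1$ reduction mentioned above; for a system of equations, the scalar stages would become vectors, matching the tensor-product form of the tableau.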

## Examples

We present an example described in (Butcher, 1996). This method consists of a single 'predictor' step and a 'corrector' step that uses extra information about the time history, as well as a single intermediate stage value.

An intermediate stage value is computed in the same manner as a linear multistep method:

$y_{n-1/2}^{*}=y_{n-2}+h\left({\frac {9}{8}}f(y_{n-1})+{\frac {3}{8}}f(y_{n-2})\right).$

An initial 'predictor' $y_{n}^{*}$ uses the stage value $y_{n-1/2}^{*}$ together with two pieces of time history:

$y_{n}^{*}={\frac {28}{5}}y_{n-1}-{\frac {23}{5}}y_{n-2}+h\left({\frac {32}{15}}f(y_{n-1/2}^{*})-4f(y_{n-1})-{\frac {26}{15}}f(y_{n-2})\right),$

and the final update is given by:

$y_{n}={\frac {32}{31}}y_{n-1}-{\frac {1}{31}}y_{n-2}+h\left({\frac {5}{31}}f(y_{n}^{*})+{\frac {64}{93}}f(y_{n-1/2}^{*})+{\frac {4}{31}}f(y_{n-1})-{\frac {1}{93}}f(y_{n-2})\right).$

The concise table representation for this method is given by:

$\left[{\begin{array}{ccc|cccc}0&0&0&0&1&{\frac {9}{8}}&{\frac {3}{8}}\\{\frac {32}{15}}&0&0&{\frac {28}{5}}&-{\frac {23}{5}}&-4&-{\frac {26}{15}}\\{\frac {64}{93}}&{\frac {5}{31}}&0&{\frac {32}{31}}&-{\frac {1}{31}}&{\frac {4}{31}}&-{\frac {1}{93}}\\\hline {\frac {64}{93}}&{\frac {5}{31}}&0&{\frac {32}{31}}&-{\frac {1}{31}}&{\frac {4}{31}}&-{\frac {1}{93}}\\0&0&0&1&0&0&0\\0&0&1&0&0&0&0\\0&0&0&0&0&1&0\\\end{array}}\right].$ 