# Finite difference method

In numerical analysis, finite-difference methods (FDM) are discretizations used for solving differential equations by approximating them with difference equations in which finite differences approximate the derivatives.

FDMs convert a linear ordinary differential equation (ODE) or non-linear partial differential equation (PDE) into a system of equations that can be solved by matrix algebra techniques. The reduction of the differential equation to a system of algebraic equations makes the problem of finding the solution to a given ODE/PDE ideally suited to modern computers, hence the widespread use of FDMs in modern numerical analysis.[1] Today, FDMs are the dominant approach to numerical solutions of PDEs.[1]

## Derivation from Taylor's polynomial

First, assuming the function whose derivatives are to be approximated is properly behaved, by Taylor's theorem we can create a Taylor series expansion

${\displaystyle f(x_{0}+h)=f(x_{0})+{\frac {f'(x_{0})}{1!}}h+{\frac {f^{(2)}(x_{0})}{2!}}h^{2}+\cdots +{\frac {f^{(n)}(x_{0})}{n!}}h^{n}+R_{n}(x),}$

where n! denotes the factorial of n, and Rn(x) is a remainder term denoting the difference between the Taylor polynomial of degree n and the original function. We will derive an approximation for the first derivative of the function f by first truncating the Taylor polynomial:

${\displaystyle f(x_{0}+h)=f(x_{0})+f'(x_{0})h+R_{1}(x),}$

Setting ${\displaystyle x_{0}=a}$, we have

${\displaystyle f(a+h)=f(a)+f'(a)h+R_{1}(x),}$

Dividing across by h gives:

${\displaystyle {f(a+h) \over h}={f(a) \over h}+f'(a)+{R_{1}(x) \over h}}$

Solving for f'(a):

${\displaystyle f'(a)={f(a+h)-f(a) \over h}-{R_{1}(x) \over h}}$

Assuming that ${\displaystyle R_{1}(x)}$ is sufficiently small, the approximation of the first derivative of f is:

${\displaystyle f'(a)\approx {f(a+h)-f(a) \over h}.}$
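This forward-difference approximation is easy to try numerically; the following Python sketch (the choice of sin and the step size are arbitrary illustrations, not part of the derivation) compares it against a known derivative:

```python
import math

def forward_diff(f, a, h):
    """First-order forward-difference approximation of f'(a)."""
    return (f(a + h) - f(a)) / h

# Approximate the derivative of sin at a = 1.0; the exact value is cos(1).
approx = forward_diff(math.sin, 1.0, 1e-5)
exact = math.cos(1.0)
print(approx, exact)  # the two values agree to about 5 decimal places
```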

## Accuracy and order

The error in a method's solution is defined as the difference between the approximation and the exact analytical solution. The two sources of error in finite difference methods are round-off error, the loss of precision due to computer rounding of decimal quantities, and truncation error or discretization error, the difference between the exact solution of the original differential equation and the exact quantity assuming perfect arithmetic (that is, assuming no round-off).

To use a finite difference method to approximate the solution to a problem, one must first discretize the problem's domain. This is usually done by dividing the domain into a uniform grid. This means that finite-difference methods produce sets of discrete numerical approximations to the derivative, often in a "time-stepping" manner.

An expression of general interest is the local truncation error of a method. Typically expressed using Big-O notation, local truncation error refers to the error from a single application of a method. That is, it is the quantity ${\displaystyle f'(x_{i})-f'_{i}}$ if ${\displaystyle f'(x_{i})}$ refers to the exact value and ${\displaystyle f'_{i}}$ to the numerical approximation. The remainder term of a Taylor polynomial is convenient for analyzing the local truncation error. Using the Lagrange form of the remainder from the Taylor polynomial for ${\displaystyle f(x_{0}+h)}$, which is

${\displaystyle R_{n}(x_{0}+h)={\frac {f^{(n+1)}(\xi )}{(n+1)!}}h^{n+1}}$, where ${\displaystyle x_{0}<\xi <x_{0}+h}$,

the dominant term of the local truncation error can be discovered. For example, again using the forward-difference formula for the first derivative, knowing that ${\displaystyle f(x_{i})=f(x_{0}+ih)}$,

${\displaystyle f(x_{0}+ih)=f(x_{0})+f'(x_{0})ih+{\frac {f''(\xi )}{2!}}(ih)^{2},}$

and with some algebraic manipulation, this leads to

${\displaystyle {\frac {f(x_{0}+ih)-f(x_{0})}{ih}}=f'(x_{0})+{\frac {f''(\xi )}{2!}}ih,}$

and further noting that the quantity on the left is the approximation from the finite difference method and that the quantity on the right is the exact quantity of interest plus a remainder, clearly that remainder is the local truncation error. A final expression of this example and its order is:

${\displaystyle {\frac {f(x_{0}+ih)-f(x_{0})}{ih}}=f'(x_{0})+O(h).}$

This means that, in this case, the local truncation error is proportional to the step size. The quality and duration of a simulated FDM solution depend on the discretization equation selection and the step sizes (time and space steps). Smaller step sizes improve the data quality but significantly increase the simulation duration.[2] Therefore, a reasonable balance between data quality and simulation duration is necessary for practical usage. Large time steps are useful for increasing simulation speed in practice; however, time steps which are too large may create instabilities and degrade the data quality.[3][4]
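The O(h) behavior can be observed directly: for a first-order method, halving the step size should roughly halve the error. A short Python check (again using sin as an arbitrary test function):

```python
import math

def forward_diff(f, a, h):
    """First-order forward-difference approximation of f'(a)."""
    return (f(a + h) - f(a)) / h

a, exact = 1.0, math.cos(1.0)
errs = []
for h in (0.1, 0.05, 0.025):
    err = abs(forward_diff(math.sin, a, h) - exact)
    errs.append(err)
    print(h, err)
# Successive error ratios approach 2, i.e. the observed order
# log2(err(h) / err(h/2)) approaches 1.
```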

The von Neumann and Courant–Friedrichs–Lewy criteria are often evaluated to determine the stability of the numerical model.[3][4][5][6]

## Example: ordinary differential equation

For example, consider the ordinary differential equation

${\displaystyle u'(x)=3u(x)+2.\,}$

The Euler method for solving this equation uses the finite difference quotient

${\displaystyle {\frac {u(x+h)-u(x)}{h}}\approx u'(x)}$

to approximate the differential equation by first substituting it for u'(x) then applying a little algebra (multiplying both sides by h, and then adding u(x) to both sides) to get

${\displaystyle u(x+h)=u(x)+h(3u(x)+2).\,}$

The last equation is a finite-difference equation, and solving this equation gives an approximate solution to the differential equation.
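The time-stepping above can be sketched in a few lines of Python; the initial condition u(0) = 1 and the step size below are arbitrary choices for illustration, and the exact solution (5/3)e^{3x} − 2/3 of this linear ODE is used for comparison:

```python
import math

def euler(u0, h, steps):
    """Euler time-stepping for u'(x) = 3u(x) + 2."""
    u = u0
    for _ in range(steps):
        u = u + h * (3 * u + 2)   # the finite-difference equation
    return u

# Exact solution with u(0) = 1: u(x) = (5/3) e^{3x} - 2/3.
steps, x_end = 1000, 1.0
h = x_end / steps
numeric = euler(1.0, h, steps)
exact = (5 / 3) * math.exp(3 * x_end) - 2 / 3
print(numeric, exact)  # the approximation is within about 0.5% here
```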

## Example: The heat equation

Consider the normalized heat equation in one dimension, with homogeneous Dirichlet boundary conditions

${\displaystyle U_{t}=U_{xx}\,}$
${\displaystyle U(0,t)=U(1,t)=0\,}$ (boundary condition)
${\displaystyle U(x,0)=U_{0}(x)\,}$ (initial condition)

One way to numerically solve this equation is to approximate all the derivatives by finite differences. We partition the domain in space using a mesh ${\displaystyle x_{0},\dots ,x_{J}}$ and in time using a mesh ${\displaystyle t_{0},\dots ,t_{N}}$. We assume a uniform partition both in space and in time, so the difference between two consecutive space points will be h and between two consecutive time points will be k. The points

${\displaystyle u_{j}^{n}}$

will represent the numerical approximation of ${\displaystyle u(x_{j},t_{n}).}$

### Explicit method

Using a forward difference at time ${\displaystyle t_{n}}$ and a second-order central difference for the space derivative at position ${\displaystyle x_{j}}$ (FTCS) we get the recurrence equation:

${\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {u_{j+1}^{n}-2u_{j}^{n}+u_{j-1}^{n}}{h^{2}}}.\,}$

This is an explicit method for solving the one-dimensional heat equation.

We can obtain ${\displaystyle u_{j}^{n+1}}$ from the other values this way:

${\displaystyle u_{j}^{n+1}=(1-2r)u_{j}^{n}+ru_{j-1}^{n}+ru_{j+1}^{n}}$

where ${\displaystyle r=k/h^{2}.}$

So, with this recurrence relation, and knowing the values at time n, one can obtain the corresponding values at time n+1. ${\displaystyle u_{0}^{n}}$ and ${\displaystyle u_{J}^{n}}$ must be replaced by the boundary conditions; in this example they are both 0.

This explicit method is known to be numerically stable and convergent whenever ${\displaystyle r\leq 1/2}$.[7] The numerical errors are proportional to the time step and the square of the space step:

${\displaystyle \Delta u=O(k)+O(h^{2})\,}$
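The explicit update is straightforward to implement. The following Python sketch uses the initial condition sin(πx) (an arbitrary choice whose exact solution e^{−π²t} sin(πx) is known) and a time step satisfying the stability bound r ≤ 1/2:

```python
import math

def ftcs_step(u, r):
    """One FTCS update: interior points only; boundaries stay fixed."""
    new = u[:]  # copy; u[0] and u[-1] keep the boundary values
    for j in range(1, len(u) - 1):
        new[j] = (1 - 2 * r) * u[j] + r * (u[j - 1] + u[j + 1])
    return new

# u_t = u_xx on [0, 1], u(0, t) = u(1, t) = 0, u(x, 0) = sin(pi x).
J = 20
h = 1 / J
k = 0.4 * h * h            # r = k/h^2 = 0.4 <= 1/2, so the scheme is stable
r = k / (h * h)
u = [math.sin(math.pi * j * h) for j in range(J + 1)]
for _ in range(100):
    u = ftcs_step(u, r)
# Exact solution of this problem: e^{-pi^2 t} sin(pi x), here at t = 100 k.
t = 100 * k
exact_mid = math.exp(-math.pi ** 2 * t) * math.sin(math.pi * 0.5)
print(u[J // 2], exact_mid)  # midpoint values agree to about 3 digits
```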

### Implicit method

If we use the backward difference at time ${\displaystyle t_{n+1}}$ and a second-order central difference for the space derivative at position ${\displaystyle x_{j}}$ (the backward time, centered space method, "BTCS"), we get the recurrence equation:

${\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{h^{2}}}.\,}$

This is an implicit method for solving the one-dimensional heat equation.

We can obtain ${\displaystyle u_{j}^{n+1}}$ from solving a system of linear equations:

${\displaystyle (1+2r)u_{j}^{n+1}-ru_{j-1}^{n+1}-ru_{j+1}^{n+1}=u_{j}^{n}}$

The scheme is always numerically stable and convergent, but usually more numerically intensive than the explicit method, as it requires solving a system of linear equations at each time step. The errors are linear over the time step and quadratic over the space step:

${\displaystyle \Delta u=O(k)+O(h^{2}).\,}$
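Because the system is tridiagonal, each step can be solved efficiently with the Thomas algorithm. A Python sketch (grid sizes are arbitrary; note the scheme remains stable at r = 2, where the explicit method would be unstable):

```python
import math

def btcs_step(u, r):
    """One BTCS step: solve (1+2r)u_j - r u_{j-1} - r u_{j+1} = u_j^n
    for the interior unknowns with the Thomas (tridiagonal) algorithm.
    Boundaries are held at 0 (homogeneous Dirichlet)."""
    m = len(u) - 2                      # number of interior unknowns
    c = [0.0] * m                       # modified upper-diagonal coefficients
    d = [0.0] * m                       # modified right-hand side
    c[0] = -r / (1 + 2 * r)
    d[0] = u[1] / (1 + 2 * r)
    for i in range(1, m):               # forward sweep
        denom = (1 + 2 * r) + r * c[i - 1]
        c[i] = -r / denom
        d[i] = (u[i + 1] + r * d[i - 1]) / denom
    new = [0.0] * len(u)                # back substitution; boundaries stay 0
    new[m] = d[m - 1]
    for i in range(m - 1, 0, -1):
        new[i] = d[i - 1] - c[i - 1] * new[i + 1]
    return new

# Same test problem as for the explicit method: u(x, 0) = sin(pi x).
J = 20
h = 1 / J
k = 2.0 * h * h                        # r = 2, well beyond the explicit limit
r = k / (h * h)
u = [math.sin(math.pi * j * h) for j in range(J + 1)]
for _ in range(20):
    u = btcs_step(u, r)
t = 20 * k
print(u[J // 2], math.exp(-math.pi ** 2 * t))  # close to the exact value
```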

### Crank–Nicolson method

Finally if we use the central difference at time ${\displaystyle t_{n+1/2}}$ and a second-order central difference for the space derivative at position ${\displaystyle x_{j}}$ ("CTCS") we get the recurrence equation:

${\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {1}{2}}\left({\frac {u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{h^{2}}}+{\frac {u_{j+1}^{n}-2u_{j}^{n}+u_{j-1}^{n}}{h^{2}}}\right).\,}$

This formula is known as the Crank–Nicolson method.

We can obtain ${\displaystyle u_{j}^{n+1}}$ from solving a system of linear equations:

${\displaystyle (2+2r)u_{j}^{n+1}-ru_{j-1}^{n+1}-ru_{j+1}^{n+1}=(2-2r)u_{j}^{n}+ru_{j-1}^{n}+ru_{j+1}^{n}}$

The scheme is always numerically stable and convergent, but usually more numerically intensive, as it requires solving a system of linear equations at each time step. The errors are quadratic over both the time step and the space step:

${\displaystyle \Delta u=O(k^{2})+O(h^{2}).\,}$

Usually the Crank–Nicolson scheme is the most accurate scheme for small time steps. The explicit scheme is the least accurate and can be unstable, but it is also the easiest to implement and the least numerically intensive. The implicit scheme works best for large time steps.
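A Crank–Nicolson step also reduces to a tridiagonal solve, with the averaged explicit terms on the right-hand side. A Python sketch (arbitrary grid sizes; a deliberately large time step, r = 20, shows the unconditional stability):

```python
import math

def crank_nicolson_step(u, r):
    """One Crank-Nicolson step: solve
    (2+2r)u_j^{n+1} - r u_{j-1}^{n+1} - r u_{j+1}^{n+1}
        = (2-2r)u_j^n + r u_{j-1}^n + r u_{j+1}^n
    with the Thomas algorithm; boundaries are held at 0."""
    m = len(u) - 2
    b = 2 + 2 * r
    # Explicit right-hand side for each interior unknown
    rhs = [(2 - 2 * r) * u[i + 1] + r * (u[i] + u[i + 2]) for i in range(m)]
    c = [0.0] * m
    d = [0.0] * m
    c[0] = -r / b
    d[0] = rhs[0] / b
    for i in range(1, m):               # forward sweep
        denom = b + r * c[i - 1]
        c[i] = -r / denom
        d[i] = (rhs[i] + r * d[i - 1]) / denom
    new = [0.0] * len(u)                # back substitution
    new[m] = d[m - 1]
    for i in range(m - 1, 0, -1):
        new[i] = d[i - 1] - c[i - 1] * new[i + 1]
    return new

# Test problem: u_t = u_xx, u(x, 0) = sin(pi x), exact decay e^{-pi^2 t}.
J = 20
h = 1 / J
k = h                                  # very large time step: r = k/h^2 = 20
r = k / (h * h)
u = [math.sin(math.pi * j * h) for j in range(J + 1)]
for _ in range(10):
    u = crank_nicolson_step(u, r)
t = 10 * k
print(u[J // 2], math.exp(-math.pi ** 2 * t))  # stable and still accurate
```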

### Comparison

The figures below present the solutions given by the above methods to approximate the heat equation

${\displaystyle U_{t}=\alpha U_{xx},\quad \alpha ={\frac {1}{\pi ^{2}}},}$

with the boundary condition

${\displaystyle U(0,t)=U(1,t)=0.}$

The exact solution is

${\displaystyle U(x,t)={\frac {1}{\pi ^{2}}}e^{-t}\sin(\pi x).}$
[Figure: comparison of finite difference methods — explicit method (not stable), implicit method (stable), Crank–Nicolson method (stable).]

## Example: The Laplace operator

The (continuous) Laplace operator in ${\displaystyle n}$-dimensions is given by ${\displaystyle \Delta u(x)=\sum _{i=1}^{n}\partial _{i}^{2}u(x)}$. The discrete Laplace operator ${\displaystyle \Delta _{h}u}$ depends on the dimension ${\displaystyle n}$.

In 1D the Laplace operator is approximated as

${\displaystyle \Delta u(x)=u''(x)\approx {\frac {u(x-h)-2u(x)+u(x+h)}{h^{2}}}=:\Delta _{h}u(x)\,.}$

This approximation is usually expressed via the following stencil

${\displaystyle {\frac {1}{h^{2}}}{\begin{bmatrix}1&-2&1\end{bmatrix}}}$

which represents a symmetric tridiagonal matrix. For an equidistant grid one obtains a Toeplitz matrix.
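The matrix can be assembled directly from the stencil. A Python sketch (the grid size is arbitrary; the test function x², whose second derivative is 2, is a convenient check because the stencil is exact on quadratics):

```python
def laplacian_1d(m, h):
    """Dense (m x m) matrix for the 1D stencil [1, -2, 1] / h^2
    acting on interior points with homogeneous Dirichlet boundaries."""
    A = [[0.0] * m for _ in range(m)]
    for i in range(m):
        A[i][i] = -2.0 / h ** 2
        if i > 0:
            A[i][i - 1] = 1.0 / h ** 2
        if i < m - 1:
            A[i][i + 1] = 1.0 / h ** 2
    return A

# u(x) = x^2 has u'' = 2; the stencil reproduces this exactly.
h, m = 0.1, 5
x = [(i + 1) * h for i in range(m)]
u = [xi ** 2 for xi in x]
A = laplacian_1d(m, h)
interior = [sum(A[i][j] * u[j] for j in range(m)) for i in range(m)]
print(interior[2])  # approximately 2.0 (rows near the right boundary differ,
                    # since the omitted boundary value is nonzero there)
```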

The 2D case shows all the characteristics of the more general nD case. Each second partial derivative needs to be approximated similarly to the 1D case:

${\displaystyle {\begin{aligned}\Delta u(x,y)&=u_{xx}(x,y)+u_{yy}(x,y)\\&\approx {\frac {u(x-h,y)-2u(x,y)+u(x+h,y)}{h^{2}}}+{\frac {u(x,y-h)-2u(x,y)+u(x,y+h)}{h^{2}}}\\&={\frac {u(x-h,y)+u(x+h,y)-4u(x,y)+u(x,y-h)+u(x,y+h)}{h^{2}}}\\&=:\Delta _{h}u(x,y)\,,\end{aligned}}}$

which is usually given by the following stencil

${\displaystyle {\frac {1}{h^{2}}}{\begin{bmatrix}&1\\1&-4&1\\&1\end{bmatrix}}\,.}$

### Consistency

Consistency of the above-mentioned approximation can be shown for highly regular functions, such as ${\displaystyle u\in C^{4}(\Omega )}$. The statement is

${\displaystyle \Delta u-\Delta _{h}u={\mathcal {O}}(h^{2})\,.}$

To prove this, one needs to substitute Taylor series expansions up to order 3 into the discrete Laplace operator.
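The O(h²) consistency can also be checked numerically: halving h should divide the error by about four. A Python sketch using sin as an arbitrary smooth (C⁴) test function, for which Δu = −sin:

```python
import math

def discrete_laplacian(f, x, h):
    """1D discrete Laplacian (f(x-h) - 2 f(x) + f(x+h)) / h^2."""
    return (f(x - h) - 2 * f(x) + f(x + h)) / h ** 2

x = 0.7
exact = -math.sin(x)                       # Delta sin = -sin in 1D
e1 = abs(discrete_laplacian(math.sin, x, 0.1) - exact)
e2 = abs(discrete_laplacian(math.sin, x, 0.05) - exact)
print(e1 / e2)  # approximately 4, confirming second-order consistency
```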

### Properties

#### Subharmonic

Analogously to continuous subharmonic functions, one can define subharmonic functions for finite-difference approximations ${\displaystyle u_{h}}$:

${\displaystyle -\Delta _{h}u_{h}\leq 0\,.}$

#### Mean value

One can define a general stencil of positive type via

${\displaystyle {\begin{bmatrix}&\alpha _{N}\\\alpha _{W}&-\alpha _{C}&\alpha _{E}\\&\alpha _{S}\end{bmatrix}}\,,\quad \alpha _{i}>0\,,\quad \alpha _{C}=\sum _{i\in \{N,E,S,W\}}\alpha _{i}\,.}$

If ${\displaystyle u_{h}}$ is (discrete) subharmonic then the following mean value property holds

${\displaystyle u_{h}(x_{C})\leq {\frac {\sum _{i\in \{N,E,S,W\}}\alpha _{i}u_{h}(x_{i})}{\sum _{i\in \{N,E,S,W\}}\alpha _{i}}}\,,}$

where the approximation is evaluated on points of the grid, and the stencil is assumed to be of positive type.

A similar mean value property also holds for the continuous case.
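The mean value property can be verified on a concrete example. The sketch below uses the standard five-point stencil (α_N = α_E = α_S = α_W = 1, α_C = 4) and the subharmonic function u(x, y) = x² + y², for which −Δu = −4 ≤ 0:

```python
def satisfies_mean_value(u, i, j):
    """Check u_C <= weighted average of the N, E, S, W neighbors
    for the standard stencil with all alpha_i = 1."""
    neighbors = [u[i - 1][j], u[i + 1][j], u[i][j - 1], u[i][j + 1]]
    return u[i][j] <= sum(neighbors) / 4

# u(x, y) = x^2 + y^2 is (discrete) subharmonic: -Delta_h u = -4 <= 0.
h, n = 0.1, 6
u = [[(i * h) ** 2 + (j * h) ** 2 for j in range(n)] for i in range(n)]
ok = all(satisfies_mean_value(u, i, j)
         for i in range(1, n - 1) for j in range(1, n - 1))
print(ok)  # True: the mean value property holds at every interior point
```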

#### Maximum principle

For a (discrete) subharmonic function ${\displaystyle u_{h}}$ the following holds

${\displaystyle \max _{\Omega _{h}}u_{h}\leq \max _{\partial \Omega _{h}}u_{h}\,,}$

where ${\displaystyle \Omega _{h},\partial \Omega _{h}}$ are discretizations of the continuous domain ${\displaystyle \Omega }$, respectively the boundary ${\displaystyle \partial \Omega }$.

A similar maximum principle also holds for the continuous case.

## The SBP-SAT method

The SBP-SAT (summation by parts – simultaneous approximation term) method is a stable and accurate technique for discretizing and imposing boundary conditions of a well-posed partial differential equation using high-order finite differences.[8][9] The method is based on finite differences where the differentiation operators exhibit summation-by-parts properties. Typically, these operators consist of differentiation matrices with central-difference stencils in the interior and carefully chosen one-sided boundary stencils designed to mimic integration by parts in the discrete setting. Using the SAT technique, the boundary conditions of the PDE are imposed weakly, where the boundary values are "pulled" towards the desired conditions rather than exactly fulfilled. If the tuning parameters (inherent to the SAT technique) are chosen properly, the resulting system of ODEs will exhibit similar energy behavior as the continuous PDE, i.e. the system has no non-physical energy growth. This guarantees stability if an integration scheme with a stability region that includes parts of the imaginary axis, such as the fourth-order Runge–Kutta method, is used. This makes the SAT technique an attractive method of imposing boundary conditions for higher-order finite difference methods, in contrast to, for example, the injection method, which typically will not be stable if high-order differentiation operators are used.
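The summation-by-parts property itself can be illustrated with the standard second-order SBP first-derivative operator D = H⁻¹Q (central differences in the interior, one-sided stencils at the boundaries). This is only a sketch of the SBP half of the method; the SAT penalty terms are omitted. The property Q + Qᵀ = diag(−1, 0, …, 0, 1) is the discrete analogue of integration by parts:

```python
def sbp_d1(n, h):
    """Second-order SBP first-derivative operator D = H^{-1} Q:
    central differences in the interior, one-sided at the boundaries,
    with norm matrix H = h * diag(1/2, 1, ..., 1, 1/2)."""
    D = [[0.0] * n for _ in range(n)]
    D[0][0], D[0][1] = -1 / h, 1 / h
    D[n - 1][n - 2], D[n - 1][n - 1] = -1 / h, 1 / h
    for i in range(1, n - 1):
        D[i][i - 1], D[i][i + 1] = -1 / (2 * h), 1 / (2 * h)
    H = [h * (0.5 if i in (0, n - 1) else 1.0) for i in range(n)]
    return D, H

# Verify summation by parts: with Q = H D, Q + Q^T should equal
# diag(-1, 0, ..., 0, 1), mimicking the boundary terms of integration by parts.
n, h = 8, 0.1
D, H = sbp_d1(n, h)
Q = [[H[i] * D[i][j] for j in range(n)] for i in range(n)]
S = [[Q[i][j] + Q[j][i] for j in range(n)] for i in range(n)]
off_diag = max(abs(S[i][j]) for i in range(n) for j in range(n)
               if (i, j) not in ((0, 0), (n - 1, n - 1)))
print(S[0][0], S[n - 1][n - 1], off_diag)  # -1.0, 1.0, and (near) 0
```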

## References

1. Christian Grossmann; Hans-G. Roos; Martin Stynes (2007). Numerical Treatment of Partial Differential Equations. Springer Science & Business Media. p. 23. ISBN 978-3-540-71584-9.
2. Arieh Iserles (2008). A first course in the numerical analysis of differential equations. Cambridge University Press. p. 23. ISBN 9780521734905.
3. Hoffman JD; Frankel S (2001). Numerical methods for engineers and scientists. CRC Press, Boca Raton.
4. Jaluria Y; Atluri S (1994). "Computational heat transfer". Computational Mechanics. 14: 385–386. doi:10.1007/BF00377593.
5. Majumdar P (2005). Computational methods for heat and mass transfer (1st ed.). Taylor and Francis, New York.
6. Smith GD (1985). Numerical solution of partial differential equations: finite difference methods (3rd ed.). Oxford University Press.
7. Crank, J. The Mathematics of Diffusion. 2nd Edition, Oxford, 1975, p. 143.
8. Bo Strand (1994). Summation by Parts for Finite Difference Approximations for d/dx. Journal of Computational Physics. doi:10.1006/jcph.1994.1005.
9. Mark H. Carpenter; David I. Gottlieb; Saul S. Abarbanel (1994). Time-stable boundary conditions for finite-difference schemes solving hyperbolic systems: Methodology and application to high-order compact schemes. Journal of Computational Physics. doi:10.1006/jcph.1994.1057.