# Order of accuracy

In numerical analysis, order of accuracy quantifies the rate of convergence of a numerical approximation of a differential equation to the exact solution. Consider $u$, the exact solution to a differential equation in an appropriate normed space $(V,||\ ||)$, and a numerical approximation $u_{h}$, where $h$ is a parameter characterizing the approximation, such as the step size in a finite difference scheme or the diameter of the cells in a finite element method. The numerical solution $u_{h}$ is said to be $n$th-order accurate if the error $E(h):=||u-u_{h}||$ is proportional to the step size $h$ to the $n$th power:

$E(h)=||u-u_{h}||\leq Ch^{n},$ where the constant $C$ is independent of $h$ and usually depends on the solution $u$. Using big $O$ notation, an $n$th-order accurate numerical method is written as

$||u-u_{h}||=O(h^{n}).$ This definition depends strongly on the norm used on the space; the choice of norm is fundamental to estimating the rate of convergence and, in general, to measuring all numerical errors correctly.
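To see why the choice of norm matters, the same approximation can converge at different rates depending on how the error is measured. A minimal sketch in Python (the model problem, piecewise-linear interpolation of $\sin$ on $[0,\pi]$, and the grid sizes are illustrative assumptions, not from the original text): the error $u-u_{h}$ measured in the max-norm converges at second order, while the derivative error $u'-u_{h}'$ in the max-norm converges only at first order.

```python
import numpy as np

def interp_errors(m):
    # Piecewise-linear interpolant u_h of u = sin on [0, pi], with m cells of size h.
    x = np.linspace(0.0, np.pi, m + 1)
    h = x[1] - x[0]
    fine = np.linspace(0.0, np.pi, 50 * m + 1)          # fine grid for evaluating the norms
    e_val = np.max(np.abs(np.sin(fine) - np.interp(fine, x, np.sin(x))))
    slopes = np.diff(np.sin(x)) / h                     # derivative of u_h, constant on each cell
    cell = np.clip(np.searchsorted(x, fine) - 1, 0, m - 1)  # cell containing each fine point
    e_der = np.max(np.abs(np.cos(fine) - slopes[cell])) # max-norm error of the derivative
    return e_val, e_der

e1, d1 = interp_errors(64)
e2, d2 = interp_errors(128)
print(np.log2(e1 / e2))  # ≈ 2: second-order accurate in the max-norm of u - u_h
print(np.log2(d1 / d2))  # ≈ 1: only first-order in the max-norm of u' - u_h'
```

Halving $h$ here divides the value error by roughly four but the derivative error only by roughly two, so the quoted order of accuracy is meaningless without stating the norm.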

The error of a first-order accurate approximation is directly proportional to $h$. Numerical approximations to partial differential equations that vary in both time and space are said to be accurate to order $n$ in time and to order $m$ in space.
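The definition suggests a practical way to measure the order of a scheme: if $E(h)\leq Ch^{n}$ with the bound tight, then $E(h)/E(h/2)\approx 2^{n}$, so $n\approx\log_{2}\big(E(h)/E(h/2)\big)$. A minimal sketch in Python, using forward and central difference quotients for the derivative of $\sin$ as model approximations (the function, evaluation point, and step size are illustrative choices, not from the original text):

```python
import math

def observed_order(err_h, err_half):
    # If E(h) ≈ C h^n, then E(h)/E(h/2) ≈ 2^n, so n ≈ log2 of the error ratio.
    return math.log2(err_h / err_half)

f = math.sin
df_exact = math.cos(1.0)  # exact derivative of sin at x = 1

def forward_diff(h):
    return (f(1.0 + h) - f(1.0)) / h          # first-order accurate

def central_diff(h):
    return (f(1.0 + h) - f(1.0 - h)) / (2 * h)  # second-order accurate

h = 1e-3
e_fwd  = abs(forward_diff(h) - df_exact)
e_fwd2 = abs(forward_diff(h / 2) - df_exact)
e_cen  = abs(central_diff(h) - df_exact)
e_cen2 = abs(central_diff(h / 2) - df_exact)

print(observed_order(e_fwd, e_fwd2))  # ≈ 1: halving h halves the error
print(observed_order(e_cen, e_cen2))  # ≈ 2: halving h quarters the error
```

The step size must stay well above the level where floating-point rounding dominates the truncation error, or the observed order degrades.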