# Numerical error

In software engineering and mathematics, **numerical error** is the error that arises in numerical computation.

## Types

Numerical error can be the combined effect of two kinds of error in a calculation.

- The first, often called **round-off error**, is caused by the finite precision of computations involving floating-point or integer values.
- The second, usually called **truncation error**, is the difference between the exact mathematical solution and the approximate solution obtained when the mathematical equations are simplified to make them more amenable to calculation. The term *truncation* reflects the fact that these simplifications usually involve truncating an infinite series expansion to make the computation possible and practical, or that the least significant bits of an arithmetic operation are discarded.
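Both kinds of error can be demonstrated in a few lines. The sketch below (the function name `exp_taylor` is illustrative, not from the original text) shows round-off error in a simple floating-point sum and truncation error from cutting off a Taylor series:

```python
import math

# Round-off error: 0.1 and 0.2 have no exact binary representation,
# so their floating-point sum differs slightly from 0.3.
roundoff = abs((0.1 + 0.2) - 0.3)  # tiny but nonzero

# Truncation error: approximate e^x by keeping only the first n terms
# of its Taylor series; the discarded tail is the truncation error.
def exp_taylor(x, n):
    return sum(x**k / math.factorial(k) for k in range(n))

truncation = abs(math.exp(1.0) - exp_taylor(1.0, 5))  # ~0.0099
```

Note the different scales: the round-off error here is on the order of 10⁻¹⁷, while truncating the series after five terms leaves an error near 10⁻², which shrinks as more terms are kept.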

## Measure

Floating-point numerical error is often measured in ULP (unit in the last place).
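In Python, the size of one ULP can be queried directly with `math.ulp`. The helper `error_in_ulps` below is an illustrative sketch, not a standard-library function, assuming the error is measured relative to the ULP of the exact value:

```python
import math

# math.ulp(x) gives the magnitude of the least significant bit of x;
# for a double-precision 1.0 this is 2**-52 (the machine epsilon).
print(math.ulp(1.0))  # 2.220446049250313e-16

# Hypothetical helper: express the error of a computed result in ULPs
# of the exact value.
def error_in_ulps(computed, exact):
    return abs(computed - exact) / math.ulp(exact)

# 0.1 + 0.2 differs from 0.3 by exactly one ULP.
print(error_in_ulps(0.1 + 0.2, 0.3))  # 1.0
```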

