In the mathematical field of numerical analysis, Runge's phenomenon (German: [ˈʁʊŋə]) is a problem of oscillation at the edges of an interval that occurs when using polynomial interpolation with polynomials of high degree over a set of equispaced interpolation points. It was discovered by Carl David Tolmé Runge (1901) when exploring the behavior of errors when using polynomial interpolation to approximate certain functions. The discovery was important because it shows that going to higher degrees does not always improve accuracy. The phenomenon is similar to the Gibbs phenomenon in Fourier series approximations.
The Weierstrass approximation theorem states that for every continuous function f(x) defined on an interval [a,b], there exists a set of polynomial functions Pn(x) for n=0, 1, 2, …, each of degree at most n, that approximates f(x) with uniform convergence over [a,b] as n tends to infinity, that is,

  lim_{n→∞} ( max_{a≤x≤b} |f(x) − Pn(x)| ) = 0.
Consider the case where one desires to interpolate through n+1 equispaced points of a function f(x) using the n-degree polynomial Pn(x) that passes through those points. Naturally, one might expect from Weierstrass' theorem that using more points would lead to a more accurate reconstruction of f(x). However, this particular set of polynomial functions Pn(x) is not guaranteed to have the property of uniform convergence; the theorem only states that a set of polynomial functions exists, without providing a general method of finding one.
The Pn(x) produced in this manner may in fact diverge away from f(x) as n increases; this typically occurs in an oscillating pattern that magnifies near the ends of the interpolation points. This phenomenon is attributed to Runge.
Consider the Runge function

  f(x) = 1 / (1 + 25x²).

Runge found that if this function is interpolated at n+1 equidistant points x_i between −1 and 1,

  x_i = 2i/n − 1,  i ∈ {0, 1, …, n},

with a polynomial Pn(x) of degree ≤ n, the resulting interpolation oscillates toward the ends of the interval, i.e. close to −1 and 1. It can even be proven that the interpolation error increases (without bound) when the degree of the polynomial is increased:

  lim_{n→∞} ( max_{−1≤x≤1} |f(x) − Pn(x)| ) = ∞.
This shows that high-degree polynomial interpolation at equidistant points can be troublesome.
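The divergence described above is easy to observe numerically. The sketch below (assuming NumPy, a 2001-point evaluation grid, and the stable barycentric form of the Lagrange interpolant, whose weights at equispaced nodes are (−1)^i C(n,i)) interpolates the Runge function at equispaced nodes and reports the maximum error for increasing degree:

```python
import numpy as np
from math import comb

def runge(x):
    # Runge's function f(x) = 1 / (1 + 25 x^2)
    return 1.0 / (1.0 + 25.0 * x**2)

def equispaced_interp_error(n):
    # Interpolate runge() at the n+1 equispaced nodes x_i = 2i/n - 1 and
    # return the maximum absolute error on a fine grid, evaluating the
    # interpolant with the numerically stable barycentric formula.
    nodes = np.linspace(-1.0, 1.0, n + 1)
    fvals = runge(nodes)
    w = np.array([(-1.0) ** i * comb(n, i) for i in range(n + 1)])
    xs = np.linspace(-1.0, 1.0, 2001)
    num = np.zeros_like(xs)
    den = np.zeros_like(xs)
    hit = np.full(xs.shape, -1)      # grid points that coincide with a node
    for i in range(n + 1):
        diff = xs - nodes[i]
        exact = diff == 0.0
        hit[exact] = i
        diff[exact] = 1.0            # placeholder; corrected after the loop
        num += w[i] * fvals[i] / diff
        den += w[i] / diff
    p = num / den
    p[hit >= 0] = fvals[hit[hit >= 0]]
    return float(np.max(np.abs(runge(xs) - p)))

errors = {n: equispaced_interp_error(n) for n in (5, 10, 15, 20)}
for n, e in errors.items():
    print(n, e)
```

The maximum error grows with the degree rather than shrinking, which is the phenomenon in miniature.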
Runge's phenomenon is the consequence of two properties of this problem.
- The magnitude of the n-th order derivatives of this particular function grows quickly when n increases.
- The equidistance between points leads to a Lebesgue constant that increases quickly when n increases.
The phenomenon is graphically obvious because both properties combine to increase the magnitude of the oscillations.
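The second property can be checked directly: the Lebesgue constant is the maximum over the interval of the Lebesgue function, the sum of the absolute values of the Lagrange basis polynomials. A small sketch (assuming NumPy and a 4001-point grid, which approximates the true maximum from below) compares equispaced nodes with Chebyshev nodes at degree 20:

```python
import numpy as np

def lebesgue_constant(nodes, grid):
    # Lebesgue function Lambda(x) = sum_i |l_i(x)| over the Lagrange basis;
    # its maximum over the interval is the Lebesgue constant.
    total = np.zeros_like(grid)
    for i in range(len(nodes)):
        li = np.ones_like(grid)
        for j in range(len(nodes)):
            if j != i:
                li *= (grid - nodes[j]) / (nodes[i] - nodes[j])
        total += np.abs(li)
    return float(total.max())

grid = np.linspace(-1.0, 1.0, 4001)
n = 20
equi = np.linspace(-1.0, 1.0, n + 1)
# Chebyshev nodes of the first kind on [-1, 1]
cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
L_equi = lebesgue_constant(equi, grid)
L_cheb = lebesgue_constant(cheb, grid)
print(L_equi, L_cheb)
```

At degree 20 the equispaced Lebesgue constant is already in the thousands, while the Chebyshev one stays near 3.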
The error between the generating function and the interpolating polynomial of order n is given by

  f(x) − Pn(x) = f^(n+1)(ξ) / (n+1)! · ∏_{i=0}^{n} (x − x_i)

for some ξ in (−1, 1). Thus,

  max_{−1≤x≤1} |f(x) − Pn(x)| ≤ max_{−1≤x≤1} ∏_{i=0}^{n} |x − x_i| · max_{−1≤x≤1} |f^(n+1)(x)| / (n+1)!.

Denote by w_n the nodal function

  w_n(x) = (x − x_0)(x − x_1) ⋯ (x − x_n)

and let W_n be the maximum of the magnitude of this function:

  W_n = max_{−1≤x≤1} |w_n(x)|.

It is elementary to prove that with equidistant nodes

  W_n ≤ n! h^(n+1) / 4

where h = 2/n is the step size.

Moreover, assume that the (n+1)-th derivative of f is bounded, i.e.

  max_{−1≤x≤1} |f^(n+1)(x)| ≤ M_{n+1}.

Therefore,

  max_{−1≤x≤1} |f(x) − Pn(x)| ≤ M_{n+1} h^(n+1) / (4(n+1)).

But the magnitude of the (n+1)-th derivative of Runge's function increases when n increases, since M_{n+1} ≥ 5^(n+1) (n+1)! (for even n+1 this is the value of |f^(n+1)(0)|). The consequence is that the resulting upper bound, M_{n+1} h^(n+1) / (4(n+1)), tends to infinity when n tends to infinity.
Although often used to explain the Runge phenomenon, the divergence of this upper bound does not, of course, imply that the error itself also diverges with n.
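The elementary bound on the nodal function quoted above, W_n ≤ n! h^(n+1) / 4, can be verified numerically. A quick check (assuming NumPy and a 20001-point grid, so the computed maximum slightly underestimates the true W_n):

```python
import numpy as np
from math import factorial

def nodal_max(n):
    # W_n = max over [-1, 1] of |(x - x_0)...(x - x_n)| at equispaced nodes.
    nodes = np.linspace(-1.0, 1.0, n + 1)
    xs = np.linspace(-1.0, 1.0, 20001)
    w = np.prod(xs[:, None] - nodes[None, :], axis=1)
    return float(np.max(np.abs(w)))

checks = {}
for n in (5, 10, 15):
    h = 2.0 / n
    bound = factorial(n) * h ** (n + 1) / 4.0
    checks[n] = (nodal_max(n), bound)
    print(n, checks[n])
```

In each case the observed maximum sits below the theoretical bound, which is attained (asymptotically) near the ends of the interval.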
Mitigations to the problem
Change of interpolation points
The oscillation can be minimized by using nodes that are distributed more densely towards the edges of the interval, specifically, with asymptotic density (on the interval [−1,1]) given by the formula 1/(π√(1 − x²)). A standard example of such a set of nodes is the Chebyshev nodes, for which the maximum error in approximating the Runge function is guaranteed to diminish with increasing polynomial order. The phenomenon demonstrates that high-degree polynomials are generally unsuitable for interpolation with equidistant nodes.
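The improvement from Chebyshev nodes can be seen directly. A sketch (assuming NumPy; the fit is done in the well-conditioned Chebyshev basis, where a degree-n fit through n+1 nodes is exactly the interpolating polynomial):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

def chebyshev_interp_error(n):
    # Chebyshev nodes of the first kind on [-1, 1].
    k = np.arange(n + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))
    # With n+1 nodes, the degree-n least-squares "fit" in the Chebyshev
    # basis is the interpolating polynomial itself.
    coeffs = C.chebfit(nodes, runge(nodes), n)
    xs = np.linspace(-1.0, 1.0, 2001)
    return float(np.max(np.abs(runge(xs) - C.chebval(xs, coeffs))))

cheb_errors = {n: chebyshev_interp_error(n) for n in (10, 20, 40, 80)}
for n, e in cheb_errors.items():
    print(n, e)
```

In contrast with the equispaced case, the maximum error now decays geometrically as the degree grows.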
S-Runge algorithm without resampling
When resampling on well-behaved sets of nodes is not feasible, the S-Runge algorithm can be considered. In this approach, the original set of nodes is mapped onto a set of Chebyshev nodes, providing a stable polynomial reconstruction. The peculiarity of this method is that there is no need to resample at the mapped nodes, which are therefore also called fake nodes.
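The idea can be sketched as follows. This is not the De Marchi et al. implementation: for illustration a piecewise-linear map S stands in for the analytic map of the fake-nodes literature, and Chebyshev–Lobatto points serve as the fake nodes; only the original equispaced samples of f are ever used.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

n = 20
equi = np.linspace(-1.0, 1.0, n + 1)           # the only nodes we may sample
fake = -np.cos(np.arange(n + 1) * np.pi / n)   # Chebyshev-Lobatto "fake" nodes

def S(x):
    # Monotone map with S(equi[i]) = fake[i]; a piecewise-linear stand-in
    # for the analytic map used in the fake-nodes literature.
    return np.interp(x, equi, fake)

# Interpolate the pairs (fake[i], f(equi[i])) and evaluate at S(x): the
# interpolation effectively happens on Chebyshev nodes, so the Lebesgue
# constant stays small, without resampling f.
coeffs = C.chebfit(fake, runge(equi), n)
xs = np.linspace(-1.0, 1.0, 2001)
approx = C.chebval(S(xs), coeffs)
fake_node_err = float(np.max(np.abs(runge(xs) - approx)))
print(fake_node_err)
```

Even with this crude stand-in map, the error stays bounded and small compared with the wild oscillation of plain equispaced interpolation at the same degree.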
Use of piecewise polynomials
The problem can be avoided by using spline curves, which are piecewise polynomials. When trying to decrease the interpolation error, one can increase the number of polynomial pieces used to construct the spline instead of increasing the degree of the polynomials.
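The simplest piecewise-polynomial interpolant is the degree-1 spline, i.e. connecting the samples with straight lines, which NumPy's np.interp provides. A quick sketch (assuming NumPy and a 2001-point grid) shows the error shrinking steadily as pieces are added, with no endpoint oscillation:

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

xs = np.linspace(-1.0, 1.0, 2001)
pw_errors = {}
for n in (10, 20, 40, 80):
    nodes = np.linspace(-1.0, 1.0, n + 1)
    # np.interp connects the samples with straight lines (a degree-1
    # spline); the error shrinks like O(h^2) as the mesh is refined.
    approx = np.interp(xs, nodes, runge(nodes))
    pw_errors[n] = float(np.max(np.abs(runge(xs) - approx)))
    print(n, pw_errors[n])
```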
Constrained minimization
One can also fit a polynomial of higher degree (for instance, with n points use a polynomial of order N = n² instead of n), and fit an interpolating polynomial whose first (or second) derivative has minimal norm.
A similar approach is to minimize a constrained version of the distance between the polynomial's m-th derivative and the mean value of its m-th derivative. Explicitly, to minimize

  V_p = ∫_{x_1}^{x_n} | P_N^(m)(x) − P̄_N^(m) |^p dx − Σ_{i=1}^{n} λ_i ( P_N(x_i) − f(x_i) ),

where P̄_N^(m) = (1/(x_n − x_1)) ∫_{x_1}^{x_n} P_N^(m)(x) dx and P_N is a polynomial of degree N, with respect to the polynomial coefficients and the Lagrange multipliers, λ_i. When N = n − 1, the constraint equations generated by the Lagrange multipliers reduce P_N(x) to the minimum polynomial that passes through all n points. At the opposite end, lim_{N→∞} P_N(x) will approach a form very similar to a piecewise polynomial approximation. When m = 1, in particular, lim_{N→∞} P_N(x) approaches the linear piecewise polynomials, i.e. connecting the interpolation points with straight lines.
The role played by p in the process of minimizing V_p is to control the importance of the size of the fluctuations away from the mean value. The larger p is, the more large fluctuations are penalized compared with small ones. The greatest advantage of the Euclidean norm, p = 2, is that it allows for analytic solutions and it guarantees that V_p will have only a single minimum. When p ≠ 2 there can be multiple minima in V_p, making it difficult to ensure that a particular minimum found will be the global minimum instead of a local one.
Least squares fitting
Another method is fitting a polynomial of lower degree using the method of least squares. Generally, when using n equidistant points, if the degree N of the least-squares polynomial satisfies N < 2√n, then the least squares approximation P_N(x) is well-conditioned.
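As a sketch of this rule (assuming NumPy; the specific choices n = 100 samples and degree 19, just under 2√100 = 20, are illustrative):

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

n = 100                 # number of equidistant samples
deg = 19                # keep the degree below 2 * sqrt(n) = 20
x = np.linspace(-1.0, 1.0, n)
# Least-squares fit, not interpolation: the system is overdetermined,
# so the polynomial need not pass through every sample.
coeffs = np.polyfit(x, runge(x), deg)
xs = np.linspace(-1.0, 1.0, 2001)
ls_err = float(np.max(np.abs(runge(xs) - np.polyval(coeffs, xs))))
print(deg, ls_err)
```

The resulting fit stays close to the Runge function everywhere, unlike a degree-99 interpolant through the same equispaced samples.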
Using Bernstein polynomials, one can uniformly approximate every continuous function on a closed interval, although this method is rather computationally expensive.
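A minimal sketch of Bernstein approximation (assuming NumPy; the Runge function is shifted to [0,1] via x = 2t − 1, since Bernstein polynomials are defined there):

```python
import numpy as np
from math import comb

def bernstein(f, n, t):
    # Bernstein polynomial B_n(f)(t) = sum_k f(k/n) C(n,k) t^k (1-t)^(n-k)
    # for t in [0, 1].
    t = np.asarray(t, dtype=float)
    total = np.zeros_like(t)
    for k in range(n + 1):
        total += f(k / n) * comb(n, k) * t**k * (1.0 - t)**(n - k)
    return total

def g(t):
    # Runge's function moved to [0, 1] via x = 2t - 1.
    x = 2.0 * t - 1.0
    return 1.0 / (1.0 + 25.0 * x**2)

ts = np.linspace(0.0, 1.0, 1001)
bern_errors = {n: float(np.max(np.abs(g(ts) - bernstein(g, n, ts))))
               for n in (10, 50, 200)}
for n, e in bern_errors.items():
    print(n, e)
```

Convergence is uniform but only of order O(1/n), which illustrates why the method is considered expensive: very high degrees are needed for modest accuracy.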
Related statements from the approximation theory
For every predefined table of interpolation nodes there is a continuous function for which the sequence of interpolation polynomials on those nodes diverges. For every continuous function there is a table of nodes on which the interpolation process converges. Chebyshev interpolation (i.e., on Chebyshev nodes) converges uniformly for every absolutely continuous function.
- Runge, Carl (1901), "Über empirische Funktionen und die Interpolation zwischen äquidistanten Ordinaten", Zeitschrift für Mathematik und Physik, 46: 224–243. Available at www.archive.org.
- Epperson, James (1987). "On the Runge example". Amer. Math. Monthly. 94: 329–341. doi:10.2307/2323093.
- Berrut, Jean-Paul; Trefethen, Lloyd N. (2004), "Barycentric Lagrange interpolation", SIAM Review, 46 (3): 501–517, CiteSeerX 10.1.1.15.5097, doi:10.1137/S0036144502417715, ISSN 1095-7200
- De Marchi, Stefano; Marchetti, Francesco; Perracchione, Emma; Poggiali, Davide (2020), "Polynomial interpolation via mapped bases without resampling", J. Comput. Appl. Math., 364, doi:10.1016/j.cam.2019.112347, ISSN 0377-0427
- Dahlquist, Germund; Björck, Åke (1974), "4.3.4. Equidistant Interpolation and the Runge Phenomenon", Numerical Methods, pp. 101–103, ISBN 0-13-627315-7
- Cheney, Ward; Light, Will (2000), A Course in Approximation Theory, Brooks/Cole, p. 19, ISBN 0-534-36224-9