# Geometrical properties of polynomial roots

In mathematics, a univariate polynomial of degree n with real or complex coefficients has n complex roots, if counted with their multiplicities. They form a set of n points in the complex plane. This article concerns the geometry of these points, that is the information about their localization in the complex plane that can be deduced from the degree and the coefficients of the polynomial.

Some of these geometric properties are related to a single polynomial, such as upper bounds on the absolute values of the roots, which define a disk containing all roots, or lower bounds on the distance between two roots. Such bounds are widely used for root-finding algorithms for polynomials, either for tuning them, or for analyzing their computational complexity.

Some other properties are probabilistic, such as the expected number of real roots of a random polynomial of degree n with real coefficients, which is less than ${\displaystyle 1+{\frac {2}{\pi }}\ln(n)}$ for n sufficiently large.

In this article, the polynomial under consideration is always denoted

${\displaystyle p(x)=a_{0}+a_{1}x+\cdots +a_{n}x^{n},}$

where ${\displaystyle a_{0},\dots ,a_{n}}$ are real or complex numbers and ${\displaystyle a_{n}\neq 0}$; thus n is the degree of the polynomial.

## Continuous dependence on coefficients

The n roots of a polynomial of degree n depend continuously on the coefficients. For simple roots, this results immediately from the implicit function theorem. This is true also for multiple roots, but some care is needed for the proof.

A small change of coefficients may induce a dramatic change of the roots, including the change of a real root into a complex root with a rather large imaginary part (see Wilkinson's polynomial). A consequence is that, for classical numeric root-finding algorithms, the problem of approximating the roots given the coefficients is ill-conditioned.
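This sensitivity can be observed numerically. The following sketch (assuming NumPy, whose `roots` routine computes companion-matrix eigenvalues) reproduces Wilkinson's classic experiment of perturbing the coefficient of x^19 by 2^−23:

```python
import numpy as np

# Wilkinson's polynomial p(x) = (x - 1)(x - 2)...(x - 20), expanded into
# monomial coefficients (np.poly returns the leading coefficient first).
coeffs = np.poly(np.arange(1, 21))

# Wilkinson's classic experiment: perturb the coefficient of x^19 by 2**-23.
perturbed = coeffs.copy()
perturbed[1] += 2.0 ** -23

roots = np.roots(perturbed)

# Several of the originally real roots become complex, with imaginary
# parts of order 1.
print(max(abs(roots.imag)))
```

Despite a relative perturbation of about 10^−7 in a single coefficient, several roots move far into the complex plane.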

## Conjugation

The complex conjugate root theorem states that if the coefficients of a polynomial are real, then the non-real roots appear in pairs of the form (a + ib, a − ib).

It follows that the roots of a polynomial with real coefficients are mirror-symmetric with respect to the real axis.

This can be extended to algebraic conjugation: the roots of a polynomial with rational coefficients are conjugate (that is, invariant) under the action of the Galois group of the polynomial. However, this symmetry can rarely be interpreted geometrically.

## Bounds on all roots

Upper bounds on the absolute values of polynomial roots are widely used for root-finding algorithms, either for limiting the regions where roots should be searched, or for analyzing the computational complexity of these algorithms.

Many such bounds have been given, and the sharpest one generally depends on the specific sequence of coefficients under consideration. Most bounds are greater than or equal to one, and are thus not sharp for a polynomial whose roots all have absolute value lower than one. However, such polynomials are very rare, as shown below.

Any upper bound on the absolute values of roots provides a corresponding lower bound. In fact, if ${\displaystyle a_{n}\neq 0,}$ and U is an upper bound of the absolute values of the roots of

${\displaystyle a_{0}+a_{1}x+\cdots +a_{n}x^{n},}$

then 1/U is a lower bound on the absolute values of the roots of

${\displaystyle a_{n}+a_{n-1}x+\cdots +a_{0}x^{n},}$

since the roots of either polynomial are the multiplicative inverses of the roots of the other. Therefore, in the remainder of the article lower bounds will not be given explicitly.

### Lagrange's and Cauchy's bounds

Lagrange and Cauchy were the first to provide upper bounds on all complex roots.[1] Lagrange's bound is[2]

${\displaystyle \max \left\{1,\sum _{i=0}^{n-1}\left|{\frac {a_{i}}{a_{n}}}\right|\right\},}$

and Cauchy's bound is[3]

${\displaystyle 1+\max \left\{\left|{\frac {a_{n-1}}{a_{n}}}\right|,\left|{\frac {a_{n-2}}{a_{n}}}\right|,\ldots ,\left|{\frac {a_{0}}{a_{n}}}\right|\right\}}$

Lagrange's bound is sharper (smaller) than Cauchy's bound only when the sum of all the ratios ${\displaystyle \left|{\frac {a_{i}}{a_{n}}}\right|}$ except the largest is less than 1. This is relatively rare in practice, and explains why Cauchy's bound is more widely used than Lagrange's.

Both bounds result from the Gershgorin circle theorem applied to the companion matrix of the polynomial and its transpose. They can also be proved by elementary methods.
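As a concrete illustration, here is a small Python sketch of both bounds (the helper names and the example polynomial are mine, not from the sources cited above):

```python
# Lagrange's and Cauchy's upper bounds on the absolute values of the roots
# of p(x) = a_0 + a_1 x + ... + a_n x^n, coefficients listed a_0..a_n.
def lagrange_bound(a):
    an = abs(a[-1])
    return max(1.0, sum(abs(c) / an for c in a[:-1]))

def cauchy_bound(a):
    an = abs(a[-1])
    return 1.0 + max(abs(c) / an for c in a[:-1])

# p(x) = (x - 1)(x - 2)(x - 3) = -6 + 11x - 6x^2 + x^3; every root has
# absolute value at most 3, so both bounds are valid but not tight.
a = [-6, 11, -6, 1]
print(lagrange_bound(a))   # 23.0
print(cauchy_bound(a))     # 12.0
```

On this example Cauchy's bound is the sharper of the two, as the text predicts.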

Proof of Lagrange's and Cauchy's bounds

If z is a root of the polynomial and |z| ≥ 1, one has

${\displaystyle |a_{n}||z^{n}|=\left|\sum _{i=0}^{n-1}a_{i}z^{i}\right|\leq \sum _{i=0}^{n-1}|a_{i}z^{i}|\leq \sum _{i=0}^{n-1}|a_{i}||z|^{n-1}.}$

Dividing by ${\displaystyle |a_{n}||z|^{n-1},}$ one gets

${\displaystyle |z|\leq \sum _{i=0}^{n-1}{\frac {|a_{i}|}{|a_{n}|}},}$

which is Lagrange's bound when there is at least one root of absolute value larger than 1. Otherwise, 1 is a bound on the roots, and is not larger than Lagrange's bound.

Similarly, for Cauchy's bound, one has, if |z| > 1,

${\displaystyle |a_{n}||z^{n}|=\left|\sum _{i=0}^{n-1}a_{i}z^{i}\right|\leq \sum _{i=0}^{n-1}|a_{i}z^{i}|\leq \max |a_{i}|\sum _{i=0}^{n-1}|z|^{i}={\frac {|z|^{n}-1}{|z|-1}}\max |a_{i}|\leq {\frac {|z|^{n}}{|z|-1}}\max |a_{i}|.}$

Thus

${\displaystyle |a_{n}|(|z|-1)\leq \max |a_{i}|.}$

Solving in |z|, one gets Cauchy's bound if there is a root of absolute value larger than 1. Otherwise the bound is also correct, as Cauchy's bound is larger than 1.

These bounds are not invariant under scaling. That is, the roots of the polynomial p(sx) are the quotients by s of the roots of p, but the bounds given for the roots of p(sx) are not the quotients by s of the bounds for p. Thus, one may get sharper bounds by minimizing over possible scalings. This gives

${\displaystyle \min _{s\in \mathbb {R} _{+}}\left(\max \left\{s,\sum _{i=0}^{n-1}\left|{\frac {a_{i}}{a_{n}}}\right|s^{i-n+1}\right\}\right),}$

and

${\displaystyle \min _{s\in \mathbb {R} _{+}}\left(s+\max _{0\leq i\leq n-1}\left(\left|{\frac {a_{i}}{a_{n}}}\right|s^{i-n+1}\right)\right),}$

for Lagrange's and Cauchy's bounds respectively.

Another bound, originally given by Lagrange, but attributed to Zassenhaus by Donald Knuth, is [4]

${\displaystyle 2\max \left\{\left|{\frac {a_{n-1}}{a_{n}}}\right|,\left|{\frac {a_{n-2}}{a_{n}}}\right|^{1/2},\ldots ,\left|{\frac {a_{0}}{a_{n}}}\right|^{1/n}\right\}.}$

This bound is invariant by scaling.
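A sketch of this bound, together with a numerical check of its scale invariance (helper name mine):

```python
# The Lagrange-Zassenhaus bound 2 * max_i |a_i / a_n|^(1/(n-i)),
# coefficients listed a_0..a_n.
def zassenhaus_bound(a):
    n = len(a) - 1
    an = abs(a[-1])
    return 2.0 * max((abs(a[i]) / an) ** (1.0 / (n - i)) for i in range(n))

# Scale invariance: replacing p(x) by p(s x) divides both the roots and
# the bound by s.
a = [-6, 11, -6, 1]                      # roots 1, 2, 3
s = 10.0
scaled = [a[i] * s ** i for i in range(len(a))]
print(zassenhaus_bound(a))               # 12.0
print(zassenhaus_bound(scaled) * s)      # 12.0 again
```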

Proof of the preceding bound

Let A be the largest ${\displaystyle \left|{\frac {a_{i}}{a_{n}}}\right|^{\frac {1}{n-i}}}$ for 0 ≤ i < n. Thus one has

${\displaystyle {\frac {|a_{i}|}{|a_{n}|}}\leq A^{n-i}}$

for ${\displaystyle 0\leq i<n.}$ If z is a root of p, one has

${\displaystyle -a_{n}z^{n}=\sum _{i=0}^{n-1}a_{i}z^{i},}$

and thus, after dividing by ${\displaystyle a_{n},}$

{\displaystyle {\begin{aligned}|z|^{n}&\leq \sum _{i=0}^{n-1}A^{n-i}|z|^{i}\\&=A{\frac {|z|^{n}-A^{n}}{|z|-A}}.\end{aligned}}}

As we want to prove |z| ≤ 2A, we may suppose that |z| > A (otherwise there is nothing to prove). Thus

${\displaystyle |z|^{n}\leq A{\frac {|z|^{n}}{|z|-A}},}$

which gives the result, since ${\displaystyle |z|>A.}$

Lagrange improved this latter bound to the sum of the two largest values (possibly equal) in the sequence[4]

${\displaystyle \left[\left|{\frac {a_{n-1}}{a_{n}}}\right|,\left|{\frac {a_{n-2}}{a_{n}}}\right|^{1/2},\ldots ,\left|{\frac {a_{0}}{a_{n}}}\right|^{1/n}\right].}$

Lagrange also provided the bound

${\displaystyle \sum _{i}\left|{\frac {a_{i}}{a_{i+1}}}\right|,}$

where ${\displaystyle a_{i}}$ denotes the ith nonzero coefficient when the terms of the polynomial are sorted by increasing degree.

### Using Hölder's inequality

Hölder's inequality allows the extension of Lagrange's and Cauchy's bounds to every h-norm. The h-norm of a sequence

${\displaystyle s=(a_{0},\ldots ,a_{n})}$

is

${\displaystyle \|s\|_{h}=\left(\sum _{i=0}^{n}|a_{i}|^{h}\right)^{1/h},}$

for any real number h ≥ 1, and

${\displaystyle \|s\|_{\infty }=\max _{0\leq i\leq n}|a_{i}|.}$

If ${\displaystyle {\frac {1}{h}}+{\frac {1}{k}}=1,}$ with 1 ≤ h, k ≤ ∞, and 1 / ∞ = 0, an upper bound on the absolute values of the roots of p is

${\displaystyle {\frac {1}{|a_{n}|}}\left\|\left(|a_{n}|,\left\|(|a_{n-1}|,\ldots ,|a_{0}|)\right\|_{h}\right)\right\|_{k}.}$

For k = 1 and k = ∞, one gets respectively Cauchy's and Lagrange's bounds.

For h = k = 2, one has the bound

${\displaystyle {\frac {1}{|a_{n}|}}{\sqrt {|a_{n}|^{2}+|a_{n-1}|^{2}+\cdots +|a_{0}|^{2}}}.}$

This is not only a bound of the absolute values of the roots, but also a bound of the product of their absolute values larger than 1; see § Landau's inequality, below.
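The whole family of bounds can be sketched as a single function of k (with h the conjugate exponent); the function name is mine:

```python
import math

# Upper bound ||(1, ||A||_h)||_k on the roots, where 1/h + 1/k = 1 and
# A = (|a_{n-1}|/|a_n|, ..., |a_0|/|a_n|).  k = 1 gives Cauchy's bound,
# k = infinity gives Lagrange's.  Coefficients listed a_0..a_n.
def holder_bound(a, k):
    an = abs(a[-1])
    A = [abs(c) / an for c in a[:-1]]
    if k == 1:                        # h = infinity
        return 1.0 + max(A)
    if k == math.inf:                 # h = 1
        return max(1.0, sum(A))
    h = k / (k - 1.0)
    Ah = sum(x ** h for x in A) ** (1.0 / h)
    return (1.0 + Ah ** k) ** (1.0 / k)

a = [-6, 11, -6, 1]                   # (x - 1)(x - 2)(x - 3)
print(holder_bound(a, 1))             # Cauchy: 12.0
print(holder_bound(a, math.inf))      # Lagrange: 23.0
print(holder_bound(a, 2))             # sqrt(194) ~ 13.93
```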

Proof

Let z be a root of the polynomial

${\displaystyle p(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{1}x+a_{0}.}$

Setting

${\displaystyle A=\left({\frac {|a_{n-1}|}{|a_{n}|}},\ldots ,{\frac {|a_{1}|}{|a_{n}|}},{\frac {|a_{0}|}{|a_{n}|}}\right),}$

we have to prove that every root z of p satisfies

${\displaystyle |z|\leq \left\|\left(1,\|A\|_{h}\right)\right\|_{k}.}$

If ${\displaystyle |z|\leq 1,}$ the inequality is true; so, one may suppose ${\displaystyle |z|>1}$ for the remainder of the proof.

Writing the equation as

${\displaystyle -z^{n}={\frac {a_{n-1}}{a_{n}}}z^{n-1}+\cdots +{\frac {a_{1}}{a_{n}}}z+{\frac {a_{0}}{a_{n}}},}$

Hölder's inequality implies

${\displaystyle |z|^{n}\leq \|A\|_{h}\cdot \left\|(z^{n-1},\ldots ,z,1)\right\|_{k}.}$

If k = ∞ (and thus h = 1), this is

${\displaystyle |z|^{n}\leq \|A\|_{1}\max \left\{|z|^{n-1},\ldots ,|z|,1\right\}=\|A\|_{1}|z|^{n-1}.}$

Thus

${\displaystyle |z|\leq \max\{1,\|A\|_{1}\}.}$

In the case 1 ≤ k < ∞, the summation formula for a geometric progression gives

${\displaystyle |z|^{n}\leq \|A\|_{h}\left(|z|^{k(n-1)}+\cdots +|z|^{k}+1\right)^{\frac {1}{k}}=\|A\|_{h}\left({\frac {|z|^{kn}-1}{|z|^{k}-1}}\right)^{\frac {1}{k}}\leq \|A\|_{h}\left({\frac {|z|^{kn}}{|z|^{k}-1}}\right)^{\frac {1}{k}}.}$

Thus

${\displaystyle |z|^{kn}\leq \left(\|A\|_{h}\right)^{k}{\frac {|z|^{kn}}{|z|^{k}-1}},}$

which simplifies to

${\displaystyle |z|^{k}\leq 1+\left(\|A\|_{h}\right)^{k}.}$

Thus, in all cases

${\displaystyle |z|\leq \left\|\left(1,\|A\|_{h}\right)\right\|_{k},}$

which finishes the proof.

### Other bounds

Many other upper bounds for the magnitudes of all roots have been given.[5]

Fujiwara's bound[6]

${\displaystyle 2\,\max \left\{\left|{\frac {a_{n-1}}{a_{n}}}\right|,\left|{\frac {a_{n-2}}{a_{n}}}\right|^{\frac {1}{2}},\ldots ,\left|{\frac {a_{1}}{a_{n}}}\right|^{\frac {1}{n-1}},\left|{\frac {a_{0}}{2a_{n}}}\right|^{\frac {1}{n}}\right\},}$

slightly improves a bound given above, by dividing the last argument of the maximum by two.

Kojima's bound is[7]

${\displaystyle 2\,\max \left\{\left|{\frac {a_{n-1}}{a_{n}}}\right|,\left|{\frac {a_{n-2}}{a_{n-1}}}\right|,\ldots ,\left|{\frac {a_{0}}{2a_{1}}}\right|\right\},}$

where ${\displaystyle a_{i}}$ denotes the ith nonzero coefficient when the terms of the polynomial are sorted by increasing degree. If all coefficients are nonzero, Fujiwara's bound is sharper, since each element in Fujiwara's bound is the geometric mean of the first elements in Kojima's bound.
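A sketch of Fujiwara's bound (helper name mine), on an example where halving the constant term pays off:

```python
# Fujiwara's bound: the Lagrange-Zassenhaus bound with the constant-term
# argument replaced by |a_0 / (2 a_n)|^(1/n).  Coefficients listed a_0..a_n.
def fujiwara_bound(a):
    n = len(a) - 1
    an = abs(a[-1])
    terms = [(abs(a[n - j]) / an) ** (1.0 / j) for j in range(1, n)]
    terms.append((abs(a[0]) / (2.0 * an)) ** (1.0 / n))
    return 2.0 * max(terms)

# p(x) = x^2 - 4 (roots +-2): the bound without the halved constant term
# gives 2 * 4^(1/2) = 4, while Fujiwara gives 2 * sqrt(2) ~ 2.83.
print(fujiwara_bound([-4, 0, 1]))
```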

Sun and Hsieh obtained another improvement on Cauchy's bound.[8] Assume the polynomial is monic with general term ${\displaystyle a_{i}x^{i}}$. Sun and Hsieh showed that the upper bounds ${\displaystyle 1+d_{1}}$ and ${\displaystyle 1+d_{2}}$ can be obtained from the following equations.

${\displaystyle d_{1}={\tfrac {1}{2}}\left((|a_{n-1}|-1)+{\sqrt {(|a_{n-1}|-1)^{2}+4a}}\right),\qquad a=\max\{|a_{i}|\}.}$

${\displaystyle d_{2}}$ is the positive root of the cubic equation

${\displaystyle Q(x)=x^{3}+(2-|a_{n-1}|)x^{2}+(1-|a_{n-1}|-|a_{n-2}|)x-a,\qquad a=\max\{|a_{i}|\}}$

They also noted that ${\displaystyle d_{2}\leq d_{1}}$.

## Landau's inequality

The previous bounds are upper bounds for each root separately. Landau's inequality provides an upper bound for the product of the absolute values of the roots that are greater than one. This inequality, discovered in 1905 by Edmund Landau,[9] was forgotten and rediscovered at least three times during the 20th century.[10][11][12]

This bound of the product of roots is not much greater than the best preceding bounds of each root separately.[13] Let ${\displaystyle z_{1},\ldots ,z_{n}}$ be the n roots of the polynomial p. If

${\displaystyle M(p)=|a_{n}|\prod _{j=1}^{n}\max(1,|z_{j}|)}$

is the Mahler measure of p, then

${\displaystyle M(p)\leq {\sqrt {\sum _{k=0}^{n}|a_{k}|^{2}}}.}$

Remarkably, this bound is exactly equal to one of the bounds on a single root obtained above using Hölder's inequality.
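A numerical sanity check of Landau's inequality (assuming NumPy, with its `roots` routine used to approximate the roots):

```python
import numpy as np

# Mahler measure M(p) = |a_n| * prod_j max(1, |z_j|) versus the 2-norm of
# the coefficient vector (Landau's inequality), roots approximated numerically.
a = np.array([-6.0, 11.0, -6.0, 1.0])   # (x-1)(x-2)(x-3), listed a_0..a_n
roots = np.roots(a[::-1])               # np.roots expects the leading coefficient first
mahler = abs(a[-1]) * np.prod(np.maximum(1.0, np.abs(roots)))
norm2 = float(np.sqrt(np.sum(a ** 2)))
print(mahler, norm2)                    # 6.0 vs sqrt(194) ~ 13.93
```

Here M(p) = 1 · 2 · 3 = 6, comfortably below the coefficient norm.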

This bound is also useful to bound the coefficients of a divisor of a polynomial with integer coefficients:[14] if

${\displaystyle q=\sum _{k=0}^{m}b_{k}x^{k}}$

is a divisor of p, then

${\displaystyle |b_{m}|\leq |a_{n}|,}$

and, by Vieta's formulas,

${\displaystyle {\frac {|b_{i}|}{|b_{m}|}}\leq {\binom {m}{i}}{\frac {M(p)}{|a_{n}|}},}$

for i = 0, ..., m, where ${\displaystyle {\binom {m}{i}}}$ is a binomial coefficient. Thus

${\displaystyle |b_{i}|\leq {\binom {m}{i}}M(p)\leq {\binom {m}{i}}{\sqrt {\sum _{k=0}^{n}|a_{k}|^{2}}},}$

and

${\displaystyle \sum _{i=0}^{m}|b_{i}|\leq 2^{m}M(p)\leq 2^{m}{\sqrt {\sum _{k=0}^{n}|a_{k}|^{2}}}.}$

## Discs containing some roots

### From Rouché's theorem

Rouché's theorem allows defining discs centered at zero that contain a given number of roots. More precisely, if there is a positive real number R and an integer 0 ≤ k ≤ n such that

${\displaystyle |a_{k}|R^{k}>|a_{0}|+\cdots +|a_{k-1}|R^{k-1}+|a_{k+1}|R^{k+1}+\cdots +|a_{n}|R^{n},}$

then there are exactly k roots, counted with multiplicity, of absolute value less than R.

Proof

If ${\displaystyle |z|=R,}$ then

{\displaystyle {\begin{aligned}|a_{0}&+\cdots +a_{k-1}z^{k-1}+a_{k+1}z^{k+1}+\cdots +a_{n}z^{n}|\\&\leq |a_{0}|+\cdots +|a_{k-1}|R^{k-1}+|a_{k+1}|R^{k+1}+\cdots +|a_{n}|R^{n}\\&<|a_{k}|R^{k}=|a_{k}z^{k}|.\end{aligned}}}

By Rouché's theorem, this implies directly that ${\displaystyle p(z)}$ and ${\displaystyle z^{k}}$ have the same number of roots of absolute values less than R, counted with multiplicities. As this number is k, the result is proved.

The above result may be applied if the polynomial

${\displaystyle h_{k}(x)=|a_{0}|+\cdots +|a_{k-1}|x^{k-1}-|a_{k}|x^{k}+|a_{k+1}|x^{k+1}+\cdots +|a_{n}|x^{n}}$

takes a negative value for some positive real value of x.

In the remainder of this section, we suppose that ${\displaystyle a_{0}\neq 0.}$ Otherwise, zero is a root, and the localization of the other roots may be studied by dividing the polynomial by a power of the indeterminate, which yields a polynomial with a nonzero constant term.

For k = 0 and k = n, Descartes' rule of signs shows that each of the polynomials ${\displaystyle h_{0}}$ and ${\displaystyle h_{n}}$ has exactly one positive real root. If ${\displaystyle R_{0}}$ and ${\displaystyle R_{n}}$ are these roots, the above result shows that every root z satisfies

${\displaystyle R_{0}\leq |z|\leq R_{n}.}$

As these inequalities are attained by ${\displaystyle h_{0}}$ and ${\displaystyle h_{n},}$ these bounds are optimal for polynomials with a given sequence of absolute values of their coefficients. They are thus sharper than all bounds given in the preceding sections.

For 0 < k < n, Descartes' rule of signs implies that ${\displaystyle h_{k}(x)}$ either has two simple positive real roots, or is nonnegative for every positive value of x. So, the above result may be applied only in the first case. If ${\displaystyle R_{k,1}<R_{k,2}}$ are these two roots, the above result implies that

${\displaystyle |z|\leq R_{k,1}}$

for k roots of p, and that

${\displaystyle |z|\geq R_{k,2}}$

for the n − k other roots.

Instead of computing explicitly ${\displaystyle R_{k,1}}$ and ${\displaystyle R_{k,2},}$ it is generally sufficient to compute a value ${\displaystyle R_{k}}$ such that ${\displaystyle h_{k}(R_{k})<0}$ (necessarily ${\displaystyle R_{k,1}<R_{k}<R_{k,2}}$). These ${\displaystyle R_{k}}$ have the property of separating roots in terms of their absolute values: if, for h < k, both ${\displaystyle R_{h}}$ and ${\displaystyle R_{k}}$ exist, there are exactly k − h roots z such that ${\displaystyle R_{h}<|z|<R_{k}.}$

For computing ${\displaystyle R_{k},}$ one can use the fact that ${\displaystyle {\frac {h_{k}(x)}{x^{k}}}}$ is a convex function of x > 0 (its second derivative is nonnegative). Thus ${\displaystyle R_{k}}$ exists if and only if ${\displaystyle {\frac {h_{k}(x)}{x^{k}}}}$ is negative at its unique minimum. For computing this minimum, one can use any optimization method, or, alternatively, Newton's method for computing the unique positive zero of the derivative of ${\displaystyle {\frac {h_{k}(x)}{x^{k}}}}$ (it converges rapidly, as the derivative is monotonic).
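A minimal sketch of this computation, using a ternary search on the convex function instead of Newton's method (names and tolerances are mine):

```python
# Find a radius R_k with h_k(R_k) < 0 by minimizing the convex function
# g(x) = h_k(x)/x^k over x > 0 with a ternary search; returns None when
# no separating radius exists at index k.  Coefficients listed a_0..a_n.
def find_Rk(a, k, lo=1e-9, hi=1e9, iters=300):
    def g(x):
        return sum(abs(c) * x ** (i - k) for i, c in enumerate(a) if i != k) - abs(a[k])
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    x = 0.5 * (lo + hi)
    return x if g(x) < 0 else None

a = [10, -11, 1]        # (x - 1)(x - 10): root magnitudes 1 and 10
r1 = find_Rk(a, 1)
print(r1)               # ~ sqrt(10): exactly one root has |z| < r1
```

For this polynomial the minimum of g at index k = 1 is negative, so the radius separates the root of magnitude 1 from the root of magnitude 10.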

One can increase the number of existing ${\displaystyle R_{k}}$'s by applying the root-squaring operation of the Dandelin–Graeffe iteration. If the roots have distinct absolute values, one can eventually separate the roots completely in terms of their absolute values, that is, compute n + 1 positive numbers ${\displaystyle R_{0}<R_{1}<\cdots <R_{n}}$ such that there is exactly one root with absolute value in the open interval ${\displaystyle (R_{k-1},R_{k})}$ for k = 1, ..., n.

### From Gershgorin circle theorem

The Gershgorin circle theorem, applied to the companion matrix of the polynomial on a basis related to Lagrange interpolation, provides discs centered at the interpolation points, each containing a root of the polynomial; see Durand–Kerner method § Root inclusion via Gerschgorin's circles for details.

If the interpolation points are close to the roots of the polynomial, the radii of the discs are small. This is a key ingredient of the Durand–Kerner method for computing polynomial roots.

## Bounds of real roots

For polynomials with real coefficients, it is often useful to bound only the real roots. It suffices to bound the positive roots, as the negative roots of p(x) are the positive roots of p(–x).

Clearly, every bound on all roots applies also to real roots. But in some contexts, tighter bounds on real roots are useful. For example, the efficiency of the method of continued fractions for real-root isolation strongly depends on the tightness of a bound on positive roots. This has led to the development of new bounds that are tighter than the general bounds on all roots. These bounds are generally expressed not only in terms of the absolute values of the coefficients, but also in terms of their signs.

Other bounds apply only to polynomials whose roots are all real (see below).

### Bounds of positive real roots

For giving a bound of the positive roots, one can suppose ${\displaystyle a_{n}>0}$ without loss of generality, as changing the signs of all coefficients does not change the roots.

Every upper bound of the positive roots of

${\displaystyle q(x)=a_{n}x^{n}+\sum _{i=0}^{n-1}\min(0,a_{i})x^{i}}$

is also a bound for the real zeros of

${\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}}$.

In fact, if B is such a bound, for all x > B, one has p(x) ≥ q(x) > 0.

Applied to Cauchy's bound, this gives the upper bound

${\displaystyle 1+{\textstyle \max _{i=0}^{n-1}}{\frac {-a_{i}}{a_{n}}}}$

for the real roots of a polynomial with real coefficients. If this bound is not greater than 1, this means that all nonzero coefficients have the same sign, and that there is no positive root.
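A sketch of this positive-root bound (helper name mine):

```python
# Cauchy-style upper bound for the positive real roots only: keep a_n and
# replace the positive lower-order coefficients by 0 (assumes a_n > 0).
def positive_root_bound(a):
    an = a[-1]
    return 1.0 + max(max(-c / an for c in a[:-1]), 0.0)

# p(x) = (x - 1)(x - 2)(x - 3): the general Cauchy bound is 12,
# while the positive-root bound is sharper.
print(positive_root_bound([-6, 11, -6, 1]))   # 7.0
```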

Similarly, another upper bound of the positive roots is

${\displaystyle 2\,{\max _{a_{i}a_{n}<0}}\left({\frac {-a_{i}}{a_{n}}}\right)^{\frac {1}{n-i}}.}$

If all nonzero coefficients have the same sign, there is no positive root, and the maximum must be defined as being zero.

Other bounds have been developed more recently, mainly for the needs of the method of continued fractions for real-root isolation.[15][16]

### Polynomials whose roots are all real

If all roots of a polynomial are real, Laguerre proved the following lower and upper bounds of the roots, by using what is now called Samuelson's inequality.[17]

Let ${\displaystyle \sum _{k=0}^{n}a_{k}x^{k}}$ be a polynomial with all real roots. Then its roots are located in the interval with endpoints

${\displaystyle -{\frac {a_{n-1}}{na_{n}}}\pm {\frac {n-1}{na_{n}}}{\sqrt {a_{n-1}^{2}-{\frac {2n}{n-1}}a_{n}a_{n-2}}}.}$

For example, the roots of the polynomial ${\displaystyle x^{4}+5x^{3}+5x^{2}-5x-6=(x+3)(x+2)(x+1)(x-1)}$ satisfy

${\displaystyle -3.8118<-{\frac {5}{4}}-{\frac {3}{4}}{\sqrt {\frac {35}{3}}}\leq x\leq -{\frac {5}{4}}+{\frac {3}{4}}{\sqrt {\frac {35}{3}}}<1.3118.}$
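The endpoints of this interval are easy to compute directly; a sketch (function name mine, assuming a_n > 0):

```python
import math

# Laguerre's interval (via Samuelson's inequality) enclosing all roots of
# a polynomial whose roots are all real; coefficients a_0..a_n, a_n > 0.
def laguerre_interval(a):
    n = len(a) - 1
    an, an1, an2 = a[-1], a[-2], a[-3]
    center = -an1 / (n * an)
    radius = (n - 1) / (n * an) * math.sqrt(an1 ** 2 - 2 * n / (n - 1) * an * an2)
    return center - radius, center + radius

# The example above: (x + 3)(x + 2)(x + 1)(x - 1), roots -3, -2, -1, 1.
lo, hi = laguerre_interval([-6, -5, 5, 5, 1])
print(lo, hi)          # ~ -3.8118 and ~ 1.3118
```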

## Root separation

The root separation of a polynomial is the minimal distance between two roots, that is the minimum of the absolute values of the difference of two roots:

${\displaystyle \operatorname {sep} (p)=\min\{|\alpha -\beta |\;;\;\alpha \neq \beta {\text{ and }}p(\alpha )=p(\beta )=0\}}$

The root separation is a fundamental parameter of the computational complexity of root-finding algorithms for polynomials. In fact, the root separation determines the precision of number representation that is needed for being sure of distinguishing different roots. Also, for real-root isolation, it allows bounding the number of interval divisions that are needed for isolating all roots.

For polynomials with real or complex coefficients, it is not possible to express a lower bound of the root separation in terms of the degree and the absolute values of the coefficients only, because a small change on a single coefficient transforms a polynomial with multiple roots into a square-free polynomial with a small root separation, and essentially the same absolute values of the coefficients. However, involving the discriminant of the polynomial allows a lower bound.

For square-free polynomials with integer coefficients, the discriminant is an integer, and thus has an absolute value that is at least 1. This allows lower bounds for root separation that are independent of the discriminant.

Mignotte's separation bound is[18][19]

${\displaystyle \operatorname {sep} (p)>{\frac {\sqrt {3\,|\Delta (p)|}}{n^{n/2+1}(\|p\|_{2})^{n-1}}},}$

where ${\displaystyle \Delta (p)}$ is the discriminant, and ${\displaystyle \textstyle \|p\|_{2}={\sqrt {a_{0}^{2}+a_{1}^{2}+\dots +a_{n}^{2}}}.}$

For a square free polynomial with integer coefficients, this implies

${\displaystyle \operatorname {sep} (p)>{\frac {\sqrt {3}}{n^{n/2+1}(\|p\|_{2})^{n-1}}}>{\frac {1}{2^{2s^{2}}}},}$

where s is the bit size of p, that is, the sum of the bitsizes of its coefficients.
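A numerical comparison of the actual separation with the discriminant-free bound (assuming NumPy's `roots` to approximate the roots):

```python
import numpy as np
from itertools import combinations

# Compare the actual root separation of a square-free integer polynomial
# with the discriminant-free lower bound sqrt(3) / (n^(n/2+1) * ||p||_2^(n-1)).
a = [-6.0, 11.0, -6.0, 1.0]      # (x-1)(x-2)(x-3), listed a_0..a_n
n = len(a) - 1
roots = np.roots(a[::-1])        # np.roots expects the leading coefficient first
sep = min(abs(r - s) for r, s in combinations(roots, 2))
norm2 = np.sqrt(sum(c * c for c in a))
lower = np.sqrt(3.0) / (n ** (n / 2 + 1) * norm2 ** (n - 1))
print(sep, lower)                # actual separation 1, bound ~ 5.7e-4
```

As is typical, the worst-case bound is far smaller than the actual separation.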

## Gauss–Lucas theorem

The Gauss–Lucas theorem states that the convex hull of the roots of a polynomial contains the roots of the derivative of the polynomial.

A sometimes useful corollary is that, if all roots of a polynomial have positive real part, then so do the roots of all derivatives of the polynomial.

A related result is Bernstein's inequality. It states that for a polynomial P of degree n with derivative P′ we have

${\displaystyle \max _{|z|\leq 1}{\big |}P'(z){\big |}\leq n\max _{|z|\leq 1}{\big |}P(z){\big |}.}$

## Statistical distribution of the roots

If the coefficients ai of a random polynomial are independently and identically distributed with a mean of zero, most complex roots are on the unit circle or close to it. In particular, the real roots are mostly located near ±1, and, moreover, their expected number is, for a large degree, less than the natural logarithm of the degree.

If the coefficients are Gaussian distributed with a mean of zero and variance of σ, then the mean density of real roots is given by the Kac formula[20][21]

${\displaystyle m(x)={\frac {\sqrt {A(x)C(x)-B(x)^{2}}}{\pi A(x)}}}$

where

{\displaystyle {\begin{aligned}A(x)&=\sigma \sum x^{2i}=\sigma {\frac {x^{2n}-1}{x^{2}-1}},\\B(x)&={\frac {1}{2}}{\frac {d}{dx}}A(x),\\C(x)&={\frac {1}{4}}{\frac {d^{2}}{dx^{2}}}A(x)+{\frac {1}{4x}}{\frac {d}{dx}}A(x).\end{aligned}}}

When the coefficients are Gaussian distributed with a non-zero mean and variance of σ, a similar but more complex formula is known.

### Real roots

For large n, the mean density of real roots near x is asymptotically

${\displaystyle m(x)={\frac {1}{\pi |1-x^{2}|}}}$

if ${\displaystyle x^{2}-1\neq 0,}$ and

${\displaystyle m(\pm 1)={\frac {1}{\pi }}{\sqrt {\frac {n^{2}-1}{12}}}}$

It follows that the expected number of real roots is, using big O notation,

${\displaystyle N_{n}={\frac {2}{\pi }}\ln n+C+{\frac {2}{\pi n}}+O(n^{-2})}$

where C is a constant approximately equal to 0.6257358072.[22]

In other words, the expected number of real roots of a random polynomial of high degree is lower than the natural logarithm of the degree.

Kac, Erdős, and others have shown that these results are insensitive to the distribution of the coefficients, provided that they are independent and have the same distribution with mean zero. However, if the variance of the ith coefficient is equal to ${\displaystyle {\binom {n}{i}},}$ the expected number of real roots is ${\displaystyle {\sqrt {n}}.}$[22]
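A Monte Carlo check of the logarithmic growth (assuming NumPy; the seed, degree, and trial count are arbitrary choices of mine):

```python
import numpy as np

# Average number of real roots of random degree-n polynomials with i.i.d.
# standard normal coefficients, compared with (2/pi) ln n + C.
rng = np.random.default_rng(0)
n, trials = 50, 200
counts = []
for _ in range(trials):
    coeffs = rng.standard_normal(n + 1)
    r = np.roots(coeffs)
    counts.append(int(np.sum(np.abs(r.imag) < 1e-6)))
avg = float(np.mean(counts))
theory = 2 / np.pi * np.log(n) + 0.6257358072
print(avg, theory)        # theory ~ 3.12; the empirical average should be close
```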

## Notes

1. Hirst, Holly P.; Macey, Wade T. (1997). "Bounding the Roots of Polynomials". The College Mathematics Journal. 28 (4): 292–295. JSTOR 2687152.
2. Lagrange J–L (1798) Traité de la résolution des équations numériques. Paris.
3. Cauchy Augustin-Louis (1829). Exercices de mathématique. Œuvres 2 (9) p.122
4. Yap 2000, §VI.2
5. Marden, M. (1966). Geometry of Polynomials. Amer. Math. Soc. ISBN 0-8218-1503-2.
6. Fujiwara, M. (1916). "Über die obere Schranke des absoluten Betrages der Wurzeln einer algebraischen Gleichung". Tohoku Mathematical Journal. First series. 10: 167–171.
7. Kojima, T. (1917). "On the limits of the roots of an algebraic equation". Tohoku Mathematical Journal. First series. 11: 119–127.
8. Sun, Y. J.; Hsieh, J. G. (1996). "A note on circular bound of polynomial zeros". IEEE Trans Circuits Syst. I. 43 (6): 476–478. doi:10.1109/81.503258.
9. Landau, E. (1905). "Sur quelques théorèmes de M. Petrovitch relatifs aux zéros des fonctions analytiques". Bulletin de la Société Mathématique de France. 33: 251–261.
10. Mignotte, M. (1974). "An inequality about factors of polynomials". Mathematics of Computation. 28: 1153–1157.
11. Specht, W. (1949). "Abschätzungen der Wurzeln algebraischer Gleichungen". Mathematische Zeitschrift. 52: 310–321.
12. Vicente Gonçalves, J. (1950). "L'inégalité de W. Specht". Univ. Lisboa Revista Fac. Ci. A. Ci. Mat. 1: 167–171.
13. Mignotte, Maurice (1983). "Some useful bounds". Computer Algebra : Symbolic and Algebraic Computation. Vienna: Springer. pp. 259–263. ISBN 0-387-81776-X.
14. Mignotte, M. (1988). An inequality about irreducible factors of integer polynomials. Journal of number theory, 30(2), 156-166.
15. Akritas, Alkiviadis G.; Strzeboński, A. W.; Vigklas, P. S. (2008). "Improving the performance of the continued fractions method using new bounds of positive roots" (PDF). Nonlinear Analysis: Modelling and Control. 13: 265–279.
16. Ştefănescu, D. Bounds for Real Roots and Applications to Orthogonal Polynomials. In: V. G. Ganzha, E. W. Mayr and E. V. Vorozhtsov (Editors): Proceedings of the 10th International Workshop on Computer Algebra in Scientific Computing, CASC 2007, pp. 377 – 391, Bonn, Germany, September 16-20, 2007. LNCS 4770, Springer Verlag, Berlin, Heidelberg.
17. Laguerre E (1880). "Sur une méthode pour obtenir par approximation les racines d'une équation algébrique qui a toutes ses racines réelles". Nouvelles Annales de Mathématiques. 2. 19: 161–172, 193–202.
18. Yap 2000, § VI.7, Proposition 29
19. Collins, George E. (2001). "Polynomial minimum root separation" (PDF). Journal of Symbolic Computation. 32: 467–473. doi:10.1006/jsco.2001.0481.
20. Kac, M. (1943). "On the average number of real roots of a random algebraic equation". Bulletin of the American Mathematical Society. 49 (4): 314–320. doi:10.1090/S0002-9904-1943-07912-8.
21. Kac, M. (1948). "On the Average Number of Real Roots of a Random Algebraic Equation (II)". Proceedings of the London Mathematical Society. Second Series. 50 (1): 390–408. doi:10.1112/plms/s2-50.5.390.
22. Edelman, Alan; Kostlan, Eric (1995). "How many zeros of a random polynomial are real?" (PDF). Bulletin of the American Mathematical Society. 32 (1): 1–37. doi:10.1090/S0273-0979-1995-00571-9.