# Minimal polynomial (linear algebra)

In linear algebra, the **minimal polynomial** *μ*_{A} of an *n* × *n* matrix *A* over a field **F** is the monic polynomial *P* over **F** of least degree such that *P*(*A*) = 0. Any other polynomial *Q* with *Q*(*A*) = 0 is a (polynomial) multiple of *μ*_{A}.

The following three statements are equivalent:

- λ is a root of *μ*_{A},
- λ is a root of the characteristic polynomial *χ*_{A} of *A*,
- λ is an eigenvalue of matrix *A*.

The multiplicity of a root λ of *μ*_{A} is the largest power *m* such that ker((*A* − *λI*_{n})^{m}) *strictly* contains ker((*A* − *λI*_{n})^{m−1}). In other words, increasing the exponent up to *m* will give ever larger kernels, but further increasing the exponent beyond *m* will just give the same kernel.
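This growth of kernels can be observed numerically; the following is a minimal sketch assuming NumPy, using a hypothetical nilpotent 3×3 Jordan block (not an example from the text), whose minimal polynomial is *X*^{3}:

```python
import numpy as np

# Hypothetical example: a single 3x3 Jordan block for the eigenvalue 0.
# Its minimal polynomial is X^3, so the kernels of A^m grow strictly
# until m = 3 and stay the same afterwards.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

def kernel_dim(M):
    """Dimension of ker(M), computed as n - rank(M)."""
    return int(M.shape[0] - np.linalg.matrix_rank(M))

# dim ker((A - 0*I)^m) for m = 1, 2, 3, 4
dims = [kernel_dim(np.linalg.matrix_power(A, m)) for m in range(1, 5)]
print(dims)  # strict growth 1, 2, 3 up to m = 3, then stable at 3
```

The multiplicity of the root 0 in the minimal polynomial is thus read off as the exponent at which the kernel dimensions stabilize.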

If the field **F** is not algebraically closed, then the minimal and characteristic polynomials need not factor according to their roots (in **F**) alone; in other words, they may have irreducible polynomial factors of degree greater than 1. For irreducible polynomials *P* one has similar equivalences:

- *P* divides *μ*_{A},
- *P* divides *χ*_{A},
- the kernel of *P*(*A*) has dimension at least 1,
- the kernel of *P*(*A*) has dimension at least deg(*P*).

Like the characteristic polynomial, the minimal polynomial does not depend on the base field; in other words, considering the matrix as one with coefficients in a larger field does not change the minimal polynomial. The reason is somewhat different from that for the characteristic polynomial (where it is immediate from the definition of determinants), namely the fact that the minimal polynomial is determined by the relations of linear dependence between the powers of *A*: extending the base field will not introduce any new such relations (nor of course will it remove existing ones).

The minimal polynomial is often the same as the characteristic polynomial, but not always. For example, if *A* is a multiple *aI*_{n} of the identity matrix, then its minimal polynomial is *X* − *a* since the kernel of *aI*_{n} − *A* = 0 is already the entire space; on the other hand its characteristic polynomial is (*X* − *a*)^{n} (the only eigenvalue is *a*, and the degree of the characteristic polynomial is always equal to the dimension of the space). The minimal polynomial always divides the characteristic polynomial, which is one way of formulating the Cayley–Hamilton theorem (for the case of matrices over a field).
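This example can be checked numerically; the following is a minimal sketch assuming NumPy, with the specific values *a* = 2 and *n* = 3 chosen for illustration:

```python
import numpy as np

# Illustrative values for the example above: A = a*I_n with a = 2, n = 3.
a, n = 2.0, 3
A = a * np.eye(n)

# Minimal polynomial X - a: substituting A already gives the zero matrix.
assert np.allclose(A - a * np.eye(n), 0)

# Characteristic polynomial (X - a)^n: np.poly returns its coefficients.
char_coeffs = np.poly(A)       # coefficients of X^3 - 6X^2 + 12X - 8
expected = np.poly([a, a, a])  # (X - 2)^3 expanded from its roots
print(np.allclose(char_coeffs, expected))
```

The minimal polynomial has degree 1 while the characteristic polynomial has degree *n*, so here they differ for every *n* > 1.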

## Formal definition

Given an endomorphism *T* on a finite-dimensional vector space *V* over a field **F**, let *I*_{T} be the set defined as

*I*_{T} = { *p* ∈ **F**[*t*] | *p*(*T*) = 0 },

where **F**[*t*] is the space of all polynomials over the field **F**. *I*_{T} is a proper ideal of **F**[*t*]. Since **F** is a field, **F**[*t*] is a principal ideal domain, thus any ideal is generated by a single polynomial, which is unique up to units in **F**. A particular choice among the generators can be made, since precisely one of the generators is monic. The **minimal polynomial** is thus defined to be the monic polynomial which generates *I*_{T}. It is the monic polynomial of least degree in *I*_{T}.

## Applications

An endomorphism φ of a finite dimensional vector space over a field **F** is diagonalizable if and only if its minimal polynomial factors completely over **F** into *distinct* linear factors. The fact that there is only one factor *X* − *λ* for every eigenvalue λ means that the generalized eigenspace for λ is the same as the eigenspace for λ: every Jordan block has size 1. More generally, if φ satisfies a polynomial equation *P*(*φ*) = 0 where P factors into distinct linear factors over **F**, then it will be diagonalizable: its minimal polynomial is a divisor of P and therefore also factors into distinct linear factors. In particular one has:

- *P* = *X*^{k} − 1: finite order endomorphisms of complex vector spaces are diagonalizable. For the special case *k* = 2 of involutions, this is even true for endomorphisms of vector spaces over any field of characteristic other than 2, since *X*^{2} − 1 = (*X* − 1)(*X* + 1) is a factorization into distinct factors over such a field. This is a part of representation theory of cyclic groups.
- *P* = *X*^{2} − *X* = *X*(*X* − 1): endomorphisms satisfying *φ*^{2} = *φ* are called projections, and are always diagonalizable (moreover their only eigenvalues are 0 and 1).
- By contrast if *μ*_{φ} = *X*^{k} with *k* ≥ 2 then φ (a nilpotent endomorphism) is not necessarily diagonalizable, since *X*^{k} has a repeated root 0.

These cases can also be proved directly, but the minimal polynomial gives a unified perspective and proof.
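The projection case can be observed numerically; the following is a minimal sketch assuming NumPy, using a hypothetical 2×2 projection matrix (not an example from the text):

```python
import numpy as np

# Hypothetical example: P satisfies P^2 = P, so its minimal polynomial
# divides X(X - 1), which has distinct linear factors; hence P is
# diagonalizable and its only possible eigenvalues are 0 and 1.
P = np.array([[1., 1.],
              [0., 0.]])

assert np.allclose(P @ P, P)  # P is a projection

# The eigenvalues are indeed 0 and 1.
eigvals = np.linalg.eigvals(P)
print(np.sort(eigvals.real))  # 0 and 1
```

Since the eigenvalues 0 and 1 are distinct and the minimal polynomial *X*(*X* − 1) is squarefree, each eigenspace is a full generalized eigenspace and the two together span the whole space.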

## Computation

For a vector *v* in *V* define:

*I*_{T,v} = { *p* ∈ **F**[*t*] | *p*(*T*)(*v*) = 0 }.

This definition satisfies the properties of a proper ideal. Let *μ*_{T,v} be the monic polynomial which generates it.

### Properties

- Since *I*_{T,v} contains the minimal polynomial *μ*_{T}, the latter is divisible by *μ*_{T,v}.
- If *d* is the least natural number such that *v*, *T*(*v*), ..., *T*^{d}(*v*) are linearly dependent, then there exist unique *a*_{0}, *a*_{1}, ..., *a*_{d−1} in **F** such that

  *a*_{0}*v* + *a*_{1}*T*(*v*) + ... + *a*_{d−1}*T*^{d−1}(*v*) + *T*^{d}(*v*) = 0,

  and for these coefficients one has

  *μ*_{T,v}(*t*) = *a*_{0} + *a*_{1}*t* + ... + *a*_{d−1}*t*^{d−1} + *t*^{d}.

- Let the subspace *W* be the image of *μ*_{T,v}(*T*), which is *T*-stable. Since *μ*_{T,v}(*T*) annihilates at least the vectors *v*, *T*(*v*), ..., *T*^{d−1}(*v*), the codimension of *W* is at least *d*.
- The minimal polynomial *μ*_{T} is the product of *μ*_{T,v} and the minimal polynomial *Q* of the restriction of *T* to *W*. In the (likely) case that *W* has dimension 0 one has *Q* = 1 and therefore *μ*_{T} = *μ*_{T,v}; otherwise a recursive computation of *Q* suffices to find *μ*_{T}.
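The computation of *μ*_{T,v} described above can be sketched numerically; the following assumes NumPy, and the diagonal test matrix at the end is a hypothetical example, not taken from the text:

```python
import numpy as np

def local_minimal_polynomial(T, v):
    """Return the coefficients [a_0, ..., a_{d-1}, 1] of mu_{T,v}:
    find the least d with v, T(v), ..., T^d(v) linearly dependent,
    then solve for the coefficients of the dependence relation."""
    n = T.shape[0]
    iterates = [v]
    for _ in range(n):
        iterates.append(T @ iterates[-1])
        M = np.column_stack(iterates)
        if np.linalg.matrix_rank(M) < len(iterates):
            # Solve a_0*v + ... + a_{d-1}*T^{d-1}(v) = -T^d(v).
            a, *_ = np.linalg.lstsq(M[:, :-1], -M[:, -1], rcond=None)
            return np.append(a, 1.0)
    # n+1 vectors in an n-dimensional space are always dependent.
    raise RuntimeError("unreachable for an n x n matrix")

T = np.array([[2., 0.],
              [0., 3.]])
v = np.array([1., 1.])
# mu_{T,v} = (t - 2)(t - 3) = 6 - 5t + t^2, i.e. coefficients 6, -5, 1
print(local_minimal_polynomial(T, v))
```

Here *v* meets both eigenspaces, so *W* has dimension 0 and *μ*_{T,v} is already the minimal polynomial; in general the recursion over *W* described above would still be needed.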

### Example

Define *T* to be the endomorphism of **R**^{3} with matrix, on the canonical basis,

    ( 1  −1  −1 )
    ( 1  −2   1 )
    ( 0   1  −3 )

Taking the first canonical basis vector *e*_{1} and its repeated images by *T* one obtains

*e*_{1} = (1, 0, 0),  *T* ⋅ *e*_{1} = (1, 1, 0),  *T*^{2} ⋅ *e*_{1} = (0, −1, 1),  *T*^{3} ⋅ *e*_{1} = (0, 3, −4),

of which the first three are easily seen to be linearly independent, and therefore span all of **R**^{3}. The last one then necessarily is a linear combination of the first three, in fact

*T*^{3} ⋅ *e*_{1} = −4 *T*^{2} ⋅ *e*_{1} − *T* ⋅ *e*_{1} + *e*_{1},

so that:

*μ*_{T,e1} = *X*^{3} + 4*X*^{2} + *X* − 1.

This is in fact also the minimal polynomial *μ*_{T} and the characteristic polynomial *χ*_{T}: indeed *μ*_{T,e1} divides *μ*_{T} which divides *χ*_{T}, and since the first and last are of degree 3 and all are monic, they must all be the same. Another reason is that in general if any polynomial in *T* annihilates a vector *v*, then it also annihilates *T* ⋅ *v* (just apply *T* to the equation that says that it annihilates *v*), and therefore by iteration it annihilates the entire space generated by the iterated images by *T* of *v*; in the current case we have seen that for *v* = *e*_{1} that space is all of **R**^{3}, so *μ*_{T,e1}(*T*) = 0. Indeed one verifies for the full matrix that *T*^{3} + 4*T*^{2} + *T* − *I*_{3} is the null matrix.
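This verification can be carried out numerically; the following is a minimal sketch assuming NumPy, taking the example matrix to be the 3×3 matrix with rows (1, −1, −1), (1, −2, 1), (0, 1, −3) (an assumption, consistent with the displayed relation on *e*_{1}):

```python
import numpy as np

# Assumed example matrix; it satisfies T^3*e1 = -4*T^2*e1 - T*e1 + e1.
T = np.array([[1., -1., -1.],
              [1., -2.,  1.],
              [0.,  1., -3.]])
e1 = np.array([1., 0., 0.])

T2 = T @ T
T3 = T2 @ T

# The dependence relation on e1:
assert np.allclose(T3 @ e1, -4 * (T2 @ e1) - T @ e1 + e1)

# mu_T(T) = T^3 + 4T^2 + T - I_3 is the null matrix:
residual = T3 + 4 * T2 + T - np.eye(3)
print(np.allclose(residual, 0))
```

That the residual vanishes on the whole space, not just on *e*_{1}, is exactly the iteration argument of the preceding paragraph.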
