# Invariant subspace

In mathematics, an **invariant subspace** of a linear mapping *T* : *V* → *V* from some vector space *V* to itself is a subspace *W* of *V* that is preserved by *T*; that is, *T*(*W*) ⊆ *W*.

## General description

Consider a linear mapping *T* : *V* → *V*.

An **invariant subspace** *W* of *T* has the property that all vectors **v** ∈ *W* are transformed by *T* into vectors also contained in *W*. This can be stated as

**v** ∈ *W* ⟹ *T*(**v**) ∈ *W*.

### Trivial examples of invariant subspaces

- {0}: Since *T* maps every vector in {0} into {0}.
- *V*: Since a linear map has to map every vector in *V* into *V*.

### One-dimensional invariant subspace *U*

A basis of a one-dimensional space *U* is simply a vector **v**. Consequently, any vector **x** ∈ *U* can be represented as **x** = α**v**, where α is a real scalar. If we represent *T* by a matrix *A*, then, for *U* to be an invariant subspace, it must satisfy *A***v** ∈ *U*.

We know that **x** ∈ *U* ⟺ **x** = α**v** with α ∈ ℝ.

Therefore, the condition for existence of a one-dimensional invariant subspace is expressed as:

- *A***v** = λ**v**, where λ is a scalar (in the field of the vector space *V*).

Note that this is the typical formulation of an eigenvalue problem, which means that any eigenvector of *A* forms a one-dimensional invariant subspace in *V*.
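This eigenvalue condition can be checked numerically. The following sketch is illustrative only: the matrix `A` and the use of NumPy are assumptions of the example, not part of the text.

```python
import numpy as np

# A made-up matrix; its eigenvectors each span a one-dimensional
# invariant subspace, since A maps every multiple of v to a multiple of v.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
v = eigenvectors[:, 0]   # an eigenvector of A (a column of the result)
lam = eigenvalues[0]     # its eigenvalue

# A v = lam v, so A maps span{v} into span{v}.
assert np.allclose(A @ v, lam * v)

# Any vector in U = span{v} is alpha * v; its image stays in U.
alpha = 5.0
x = alpha * v
assert np.allclose(A @ x, (alpha * lam) * v)
```

The second assertion spells out why the eigenvalue equation suffices: linearity carries the invariance of **v** to every scalar multiple of **v**.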

## Formal description

An **invariant subspace** of a linear mapping

*T* : *V* → *V*

from some vector space *V* to itself is a subspace *W* of *V* such that *T*(*W*) is contained in *W*. An invariant subspace of *T* is also said to be ***T*-invariant**.

If *W* is *T*-invariant, we can restrict *T* to *W* to arrive at a new linear mapping *T*|_{*W*} : *W* → *W*.

Next, we give a few immediate examples of invariant subspaces.

Certainly *V* itself, and the subspace {0}, are trivially invariant subspaces for every linear operator *T* : *V* → *V*. For certain linear operators there is no *non-trivial* invariant subspace; consider for instance a rotation of a two-dimensional real vector space.

Let **v** be an eigenvector of *T*, i.e. *T***v** = λ**v**. Then *W* = span{**v**} is *T*-invariant. As a consequence of the fundamental theorem of algebra, every linear operator on a complex finite-dimensional vector space with dimension at least 2 has an eigenvector. Therefore, every such linear operator has a non-trivial invariant subspace. The fact that the complex numbers are algebraically closed is required here. Comparing with the previous example, one can see that the invariant subspaces of a linear transformation depend upon the underlying scalar field of *V*.

An **invariant vector** (fixed point of *T*), other than 0, spans an invariant subspace of dimension 1. An invariant subspace of dimension 1 will be acted on by *T* by a scalar and consists of invariant vectors if and only if that scalar is 1.

As the above examples indicate, the invariant subspaces of a given linear transformation *T* shed light on the structure of *T*. When *V* is a finite-dimensional vector space over an algebraically closed field, linear transformations acting on *V* are characterized (up to similarity) by the Jordan canonical form, which decomposes *V* into invariant subspaces of *T*. Many fundamental questions regarding *T* can be translated to questions about invariant subspaces of *T*.

More generally, invariant subspaces are defined for sets of operators as subspaces invariant for each operator in the set. Let *L*(*V*) denote the algebra of linear transformations on *V*, and Lat(*T*) be the family of subspaces invariant under *T* ∈ *L*(*V*). (The "Lat" notation refers to the fact that Lat(*T*) forms a lattice; see discussion below.) Given a nonempty set Σ ⊂ *L*(*V*), one considers the subspaces invariant under each *T* ∈ Σ. In symbols,

Lat(Σ) = ⋂_{*T* ∈ Σ} Lat(*T*).

For instance, it is clear that if Σ = *L*(*V*), then Lat(Σ) = { {0}, *V*}.

Given a representation of a group *G* on a vector space *V*, we have a linear transformation *T*(*g*) : *V* → *V* for every element *g* of *G*. If a subspace *W* of *V* is invariant with respect to all these transformations, then it is a subrepresentation and the group *G* acts on *W* in a natural way.

As another example, let *T* ∈ *L*(*V*) and Σ be the algebra generated by {1, *T*}, where 1 is the identity operator. Then Lat(*T*) = Lat(Σ). Because *T* lies in Σ trivially, Lat(Σ) ⊂ Lat(*T*). On the other hand, Σ consists of polynomials in 1 and *T*, therefore the reverse inclusion holds as well.
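The reverse inclusion rests on the fact that a subspace invariant under *T* is also invariant under every polynomial in 1 and *T*. A minimal numerical sketch, assuming NumPy and a made-up matrix `T` with an obvious invariant subspace:

```python
import numpy as np

# T is block upper-triangular, so W = span{e1, e2} is T-invariant.
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 4.0]])

basis_W = np.eye(3)[:, :2]   # orthonormal basis of W as columns

def maps_into_W(M, basis):
    """Check that M sends span(basis) back into span(basis)."""
    image = M @ basis
    # Components of the image orthogonal to W must vanish.
    residual = image - basis @ (basis.T @ image)
    return np.allclose(residual, 0.0)

# An arbitrary polynomial in 1 and T: p(T) = 2 T^2 - 5 T + 7 I.
p_of_T = 2 * np.linalg.matrix_power(T, 2) - 5 * T + 7 * np.eye(3)

assert maps_into_W(T, basis_W)       # W is in Lat(T)
assert maps_into_W(p_of_T, basis_W)  # hence W is in Lat(Sigma)
```

Since every element of the algebra Σ generated by {1, *T*} is such a polynomial, any *W* ∈ Lat(*T*) lies in Lat(Σ), giving Lat(*T*) ⊂ Lat(Σ).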

## Matrix representation

Over a finite-dimensional vector space, every linear transformation *T* : *V* → *V* can be represented by a matrix once a basis of *V* has been chosen.

Suppose now *W* is a *T*-invariant subspace. Pick a basis *C* = {**v**_{1}, ..., **v**_{k}} of *W* and complete it to a basis *B* of *V*. Then, with respect to this basis, the matrix representation of *T* takes the block upper-triangular form:

*T* = [ *T*_{11} *T*_{12} ; 0 *T*_{22} ],

where the upper-left block *T*_{11} is the restriction of *T* to *W*.

In other words, given an invariant subspace *W* of *T*, *V* can be decomposed into the direct sum

*V* = *W* ⊕ *W*′.

Viewing *T* as an operator matrix

*T* = [ *T*_{11} *T*_{12} ; *T*_{21} *T*_{22} ] : *W* ⊕ *W*′ → *W* ⊕ *W*′,

it is clear that *T*_{21} : *W* → *W*′ must be zero.

Determining whether a given subspace *W* is invariant under *T* is ostensibly a problem of geometric nature. Matrix representation allows one to phrase this problem algebraically. The projection operator *P* onto *W*, defined by *P*(*w* + *w*′) = *w* for *w* ∈ *W* and *w*′ ∈ *W*′, has matrix representation

*P* = [ 1 0 ; 0 0 ] : *W* ⊕ *W*′ → *W* ⊕ *W*′.

A straightforward calculation shows that *W* = Ran *P*, the range of *P*, is invariant under *T* if and only if *PTP* = *TP*. In other words, a subspace *W* being an element of Lat(*T*) is equivalent to the corresponding projection satisfying the relation *PTP* = *TP*.

If *P* is a projection (i.e. *P*^{2} = *P*), so is 1 − *P*, where 1 is the identity operator. It follows from the above that *TP* = *PT* if and only if both Ran *P* and Ran(1 − *P*) are invariant under *T*. In that case, *T* has the block-diagonal matrix representation

*T* = [ *T*_{11} 0 ; 0 *T*_{22} ] : Ran *P* ⊕ Ran(1 − *P*) → Ran *P* ⊕ Ran(1 − *P*).

Colloquially, a projection that commutes with *T* "diagonalizes" *T*.
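The algebraic criterion *PTP* = *TP* is easy to test numerically. The sketch below uses a made-up 2 × 2 example, assuming NumPy; the matrices are chosen so that one of the two complementary subspaces is invariant and the other is not.

```python
import numpy as np

# T is upper-triangular in the standard basis, so W = span{e1} is invariant.
T = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# P projects onto W = span{e1} along W' = span{e2}.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

# W = Ran P is T-invariant if and only if P T P = T P.
assert np.allclose(P @ T @ P, T @ P)

# P does not commute with T, because W' = span{e2} is NOT invariant:
# T e2 = (2, 3) has a component along e1.
assert not np.allclose(T @ P, P @ T)
```

The failed commutation in the last line is exactly the obstruction to "diagonalizing" *T* with this projection: only one of Ran *P* and Ran(1 − *P*) lies in Lat(*T*).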

## Invariant subspace problem

The invariant subspace problem concerns the case where *V* is a separable Hilbert space over the complex numbers, of dimension > 1, and *T* is a bounded operator. The problem is to decide whether every such *T* has a non-trivial, closed, invariant subspace. This problem is unsolved as of 2013.

In the more general case where *V* is hypothesized to be a Banach space, there is an example of an operator without an invariant subspace due to Per Enflo (1976). A concrete example of an operator without an invariant subspace was produced in 1985 by Charles Read.

## Invariant-subspace lattice

Given a nonempty Σ ⊂ *L*(*V*), the subspaces invariant under each element of Σ form a lattice, sometimes called the **invariant-subspace lattice** of Σ and denoted by Lat(Σ).

The lattice operations are defined in a natural way: for a subset Σ′ ⊂ Lat(Σ), the *meet* operation is defined by

∧Σ′ = ⋂ { *W* : *W* ∈ Σ′ },

while the *join* operation is

∨Σ′ = span ⋃ { *W* : *W* ∈ Σ′ }.

A minimal element in Lat(Σ) is said to be a **minimal invariant subspace**.

## Fundamental theorem of noncommutative algebra

Just as the fundamental theorem of algebra ensures that every linear transformation acting on a finite dimensional complex vector space has a nontrivial invariant subspace, the *fundamental theorem of noncommutative algebra* asserts that Lat(Σ) contains nontrivial elements for certain Σ.

**Theorem (Burnside)** Assume *V* is a complex vector space of finite dimension. For every proper subalgebra Σ of *L*(*V*), Lat(Σ) contains a nontrivial element.

Burnside's theorem is of fundamental importance in linear algebra. One consequence is that every commuting family in *L*(*V*) can be simultaneously upper-triangularized.

A nonempty Σ ⊂ *L*(*V*) is said to be **triangularizable** if there exists a basis {*e*_{1}, ..., *e*_{n}} of *V* such that

span{*e*_{1}, ..., *e*_{k}} ∈ Lat(Σ) for every *k* ∈ {1, ..., *n*}.

In other words, Σ is triangularizable if there exists a basis such that every element of Σ has an upper-triangular matrix representation in that basis. It follows from Burnside's theorem that every commutative algebra Σ in *L*(*V*) is triangularizable. Hence every commuting family in *L*(*V*) can be simultaneously upper-triangularized.
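The key lemma behind simultaneous triangularization is that commuting operators leave each other's eigenspaces invariant, and hence share a common eigenvector. A small numerical sketch, with made-up matrices and NumPy as assumptions of the example:

```python
import numpy as np

# A is a made-up matrix; B is a polynomial in A and therefore commutes with A.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
B = A @ A - 3 * np.eye(2)

assert np.allclose(A @ B, B @ A)   # A and B commute

# The eigenspace of A for eigenvalue 2 is span{e1}.
v = np.array([1.0, 0.0])
assert np.allclose(A @ v, 2 * v)

# B maps v back into that eigenspace: A (B v) = B (A v) = 2 (B v),
# so B v is again an eigenvector of A (or zero).
Bv = B @ v
assert np.allclose(A @ Bv, 2 * Bv)
```

Iterating this observation, one builds a nested chain of invariant subspaces span{*e*_{1}} ⊂ span{*e*_{1}, *e*_{2}} ⊂ ... , which is precisely a triangularizing basis.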

## Left ideals

If *A* is an algebra, one can define a *left regular representation* Φ on *A* by Φ(*a*)*b* = *ab*; it is a homomorphism from *A* to *L*(*A*), the algebra of linear transformations on *A*.

The invariant subspaces of Φ are precisely the left ideals of *A*. A left ideal *M* of *A* gives a subrepresentation of *A* on *M*.

Let *M* be a left ideal of *A* and consider the quotient vector space *A*/*M*. The left regular representation Φ of *A* now descends to a representation Φ′ on *A*/*M*: if [*b*] denotes the equivalence class of *b* in *A*/*M*, then Φ′(*a*)[*b*] = [*ab*]. The kernel of the representation Φ′ is the set {*a* ∈ *A* | *ab* ∈ *M* for all *b*}.
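A concrete toy example, chosen purely for illustration: in the algebra *A* of upper-triangular 2 × 2 matrices, the strictly upper-triangular matrices form a left ideal *M*, so *M* is an invariant subspace of the left regular representation.

```python
import numpy as np

def upper(a, b, c):
    """An element [[a, b], [0, c]] of the algebra A of upper-triangular matrices."""
    return np.array([[a, b],
                     [0.0, c]])

a = upper(1.0, 2.0, 3.0)   # an arbitrary element of A
m = upper(0.0, 5.0, 0.0)   # an element of the left ideal M

product = a @ m            # Phi(a) applied to m, i.e. left multiplication

# The product is again strictly upper-triangular, so it remains in M:
# left multiplication by any element of A preserves M.
assert product[0, 0] == 0.0
assert product[1, 0] == 0.0
assert product[1, 1] == 0.0
```

Note that *M* is not a *right* ideal here (multiplying on the right can leave *M*), which matches the fact that Φ acts by multiplication on the left only.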

The representation Φ′ is irreducible if and only if *M* is a maximal left ideal, since a subspace *V* ⊂ *A*/*M* is invariant under {Φ′(*a*) | *a* ∈ *A*} if and only if its preimage under the quotient map, *V* + *M*, is a left ideal in *A*.

## Almost-invariant halfspaces

Related to invariant subspaces are so-called almost-invariant halfspaces (**AIHSs**). A closed subspace *Y* of a Banach space *X* is said to be **almost-invariant** under an operator *T* ∈ *B*(*X*) if *TY* ⊆ *Y* + *E* for some finite-dimensional subspace *E*; equivalently, *Y* is almost-invariant under *T* if there is a finite-rank operator *F* ∈ *B*(*X*) such that (*T* + *F*)*Y* ⊆ *Y*, i.e. if *Y* is invariant (in the usual sense) under *T* + *F*. In this case, the minimum possible dimension of *E* (or rank of *F*) is called the **defect**.

Clearly, every finite-dimensional and every finite-codimensional subspace is almost-invariant under every operator. Thus, to make things non-trivial, we say that *Y* is a **halfspace** whenever it is a closed subspace with infinite dimension and infinite codimension.

The AIHS problem asks whether every operator admits an AIHS. In the complex setting it has already been solved: if *X* is a complex infinite-dimensional Banach space and *T* ∈ *B*(*X*), then *T* admits an AIHS of defect at most 1. It is not currently known whether the same holds if *X* is a real Banach space. However, some partial results have been established: for instance, any self-adjoint operator on an infinite-dimensional real Hilbert space admits an AIHS, as does any strictly singular (or compact) operator acting on a real infinite-dimensional reflexive space.

## Bibliography

- Abramovich, Yuri A.; Aliprantis, Charalambos D. (2002). *An Invitation to Operator Theory*. American Mathematical Society. ISBN 978-0-8218-2146-6.
- Beauzamy, Bernard (1988). *Introduction to Operator Theory and Invariant Subspaces*. North Holland.
- Enflo, Per; Lomonosov, Victor (2001). "Some aspects of the invariant subspace problem". *Handbook of the Geometry of Banach Spaces*. Vol. I. Amsterdam: North-Holland. pp. 533–559.
- Gohberg, Israel; Lancaster, Peter; Rodman, Leiba (2006). *Invariant Subspaces of Matrices with Applications*. Classics in Applied Mathematics. Vol. 51 (Reprint, with list of errata and new preface, of the 1986 Wiley ed.). Society for Industrial and Applied Mathematics (SIAM). pp. xxii+692. ISBN 978-0-89871-608-5.
- Lyubich, Yurii I. (1988). *Introduction to the Theory of Banach Representations of Groups* (Translated from the 1985 Russian-language ed.). Kharkov, Ukraine: Birkhäuser Verlag.
- Radjavi, Heydar; Rosenthal, Peter (2003). *Invariant Subspaces* (Update of 1973 Springer-Verlag ed.). Dover Publications. ISBN 0-486-42822-2.