# Incomplete LU factorization

In numerical linear algebra, an incomplete LU factorization (abbreviated as ILU) of a matrix is a sparse approximation of the LU factorization often used as a preconditioner.

## Introduction

Consider a sparse linear system ${\displaystyle Ax=b}$. These are often solved by computing the factorization ${\displaystyle A=LU}$, with L lower unitriangular and U upper triangular. One then solves ${\displaystyle Ly=b}$, ${\displaystyle Ux=y}$, which can be done efficiently because the matrices are triangular.
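The two triangular solves above can be illustrated with SciPy's dense LU routines (a small sketch on a toy matrix; real sparse solvers use specialized factorizations, but the forward/back substitution structure is the same):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

# Toy system to illustrate the two triangular solves.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

P, L, U = lu(A)  # A = P @ L @ U, with L unit lower triangular

# Solve L y = P^T b (forward substitution), then U x = y (back substitution).
y = solve_triangular(L, P.T @ b, lower=True)
x = solve_triangular(U, y, lower=False)
```

Each triangular solve costs only one pass over the nonzeros of the factor, which is why the factorization is computed once and reused for many right-hand sides.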

For a typical sparse matrix, the LU factors can be much less sparse than the original matrix, a phenomenon called fill-in. The memory requirements for using a direct solver can then become a bottleneck in solving linear systems. One can combat this problem by using fill-reducing reorderings of the matrix's unknowns, such as the Cuthill-McKee ordering.

An incomplete factorization instead seeks triangular matrices L, U such that ${\displaystyle A\approx LU}$ rather than ${\displaystyle A=LU}$. Solving ${\displaystyle LUx=b}$ can be done quickly but does not yield the exact solution to ${\displaystyle Ax=b}$. So, we instead use the matrix ${\displaystyle M=LU}$ as a preconditioner in another iterative solution algorithm such as the conjugate gradient method or GMRES.
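This preconditioned workflow can be sketched with SciPy, whose `spilu` routine computes a threshold-based incomplete LU; applying ${\displaystyle M^{-1}}$ amounts to the two triangular solves, wrapped here as a `LinearOperator` and passed to GMRES (the matrix and tolerances below are illustrative choices):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse test system: 1-D Poisson (tridiagonal) matrix.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU with threshold dropping; drop_tol controls how much
# fill is discarded, trading accuracy of M against sparsity.
ilu = spla.spilu(A, drop_tol=1e-4)

# M^{-1} v = U^{-1} L^{-1} v, exposed as a linear operator for the solver.
M = spla.LinearOperator((n, n), matvec=ilu.solve)

# GMRES on A x = b, preconditioned with M.
x, info = spla.gmres(A, b, M=M)
```

A return value of `info == 0` indicates convergence; with a good incomplete factorization the iteration count is typically far lower than for unpreconditioned GMRES.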

## Definition

For a given matrix ${\displaystyle A\in \mathbb {R} ^{n\times n}}$ one defines the graph ${\displaystyle G(A)}$ as

${\displaystyle G(A):=\left\lbrace (i,j)\in \mathbb {N} ^{2}:A_{ij}\neq 0\right\rbrace \,,}$

which is used to define the conditions a sparsity pattern ${\displaystyle S}$ needs to fulfill

${\displaystyle S\subset \left\lbrace 1,\dots ,n\right\rbrace ^{2}\,,\quad \left\lbrace (i,i):1\leq i\leq n\right\rbrace \subset S\,,\quad G(A)\subset S\,.}$

A decomposition of the form ${\displaystyle A=LU-R}$ where the following hold

• ${\displaystyle L\in \mathbb {R} ^{n\times n}}$ is a lower unitriangular matrix
• ${\displaystyle U\in \mathbb {R} ^{n\times n}}$ is an upper triangular matrix
• ${\displaystyle L,U}$ are zero outside of the sparsity pattern: ${\displaystyle L_{ij}=U_{ij}=0\quad \forall \;(i,j)\notin S}$
• ${\displaystyle R\in \mathbb {R} ^{n\times n}}$ is zero within the sparsity pattern: ${\displaystyle R_{ij}=0\quad \forall \;(i,j)\in S}$

is called an incomplete LU decomposition (w.r.t. the sparsity pattern ${\displaystyle S}$).

The sparsity pattern of L and U is often chosen to be the same as the sparsity pattern of the original matrix A. If the underlying matrix structure can be referenced by pointers instead of copied, the only extra memory required is for the entries of L and U. This preconditioner is called ILU(0).
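A minimal dense-storage sketch of ILU(0) follows (assuming nonzero diagonal entries; production codes work directly on sparse storage, but the restriction of Gaussian elimination to the pattern ${\displaystyle S=G(A)}$ is the same):

```python
import numpy as np

def ilu0(A):
    """ILU(0): Gaussian elimination restricted to the pattern of A.

    Returns (L, U) with L unit lower triangular, U upper triangular,
    both zero outside the sparsity pattern of A. Assumes the diagonal
    of A is nonzero.
    """
    n = A.shape[0]
    F = A.astype(float).copy()
    S = A != 0  # the sparsity pattern G(A)
    for i in range(1, n):
        for k in range(i):
            if not S[i, k]:
                continue
            F[i, k] /= F[k, k]              # multiplier, becomes L[i, k]
            for j in range(k + 1, n):
                if S[i, j]:                 # update only inside the pattern
                    F[i, j] -= F[i, k] * F[k, j]
    L = np.tril(F, -1) + np.eye(n)
    U = np.triu(F)
    return L, U
```

By construction the residual ${\displaystyle R=LU-A}$ vanishes on the pattern ${\displaystyle S}$, matching the definition above: the only dropped updates are those that would have created fill outside ${\displaystyle S}$.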

## Stability

Concerning the stability of the ILU, the following theorem was proven by Meijerink and van der Vorst.[1]

Let ${\displaystyle A}$ be an M-matrix, the (complete) LU decomposition given by ${\displaystyle A={\hat {L}}{\hat {U}}}$, and the ILU by ${\displaystyle A=LU-R}$. Then

${\displaystyle |L_{ij}|\leq |{\hat {L}}_{ij}|\quad \forall \;i,j}$

holds. Thus, the ILU is at least as stable as the (complete) LU decomposition.

## Generalizations

One can obtain a more accurate preconditioner by allowing some level of extra fill in the factorization. A common choice is to use the sparsity pattern of ${\displaystyle A^{2}}$ instead of ${\displaystyle A}$; this matrix is appreciably more dense than ${\displaystyle A}$, but still sparse overall. This preconditioner is called ILU(1). One can then generalize this procedure; the ILU(k) preconditioner of a matrix ${\displaystyle A}$ is the incomplete LU factorization with the sparsity pattern of the matrix ${\displaystyle A^{k+1}}$.
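The structural definition of the ILU(k) pattern can be sketched with a boolean matrix power (the helper name `level_pattern` is illustrative; practical implementations compute fill levels symbolically during elimination rather than forming the power explicitly):

```python
import numpy as np

def level_pattern(A, k):
    """Sparsity pattern of A^(k+1) as a boolean mask.

    This is the pattern used by the ILU(k) preconditioner; k = 0
    recovers the pattern of A itself.
    """
    B = A != 0
    P = B.copy()
    for _ in range(k):
        # Boolean matrix product: (i, j) is in the new pattern if some
        # path of one more step connects i to j in the graph of A.
        P = (P.astype(int) @ B.astype(int)) > 0
    return P
```

For a tridiagonal matrix, for example, the ILU(1) pattern is pentadiagonal: each extra level admits one more band of fill.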

More accurate ILU preconditioners require more memory, to such an extent that eventually the running time of the algorithm increases even though the total number of iterations decreases. Consequently, there is a cost/accuracy trade-off that users must evaluate, typically on a case-by-case basis depending on the family of linear systems to be solved.

The ILU factorization can be performed as a fixed-point iteration in a highly parallel way.[2]
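The fixed-point formulation can be sketched as follows, in the style of the parallel algorithm of reference [2]: every pattern entry of L and U is updated from the previous iterate, so all updates within a sweep are independent and could run in parallel (the dense-storage code and sweep count below are illustrative; it assumes a nonzero diagonal):

```python
import numpy as np

def ilu0_fixed_point(A, sweeps=30):
    """Jacobi-style fixed-point sweeps converging to the ILU(0) factors.

    Each sweep recomputes every pattern entry from the previous (L, U),
    so the inner updates are mutually independent (parallelizable).
    Assumes the diagonal of A is nonzero.
    """
    n = A.shape[0]
    S = A != 0
    L = np.eye(n) + np.tril(A, -1)   # initial guess: unit lower part of A
    U = np.triu(A)                   # initial guess: upper part of A
    for _ in range(sweeps):
        Ln, Un = L.copy(), U.copy()
        for i in range(n):
            for j in range(n):
                if not S[i, j]:
                    continue
                m = min(i, j)
                s = L[i, :m] @ U[:m, j]          # partial inner product
                if i > j:
                    Ln[i, j] = (A[i, j] - s) / U[j, j]
                else:
                    Un[i, j] = A[i, j] - s
        L, U = Ln, Un                             # synchronous (Jacobi) update
    return L, U
```

At the fixed point these updates are exactly the equations ${\displaystyle (LU)_{ij}=A_{ij}}$ for ${\displaystyle (i,j)\in S}$, i.e. the defining property of the incomplete factorization.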