Laplacian matrix

In the mathematical field of graph theory, the Laplacian matrix, sometimes called the admittance matrix, Kirchhoff matrix or discrete Laplacian, is a matrix representation of a graph. The Laplacian matrix can be used to find many useful properties of a graph. Together with Kirchhoff's theorem, it can be used to calculate the number of spanning trees for a given graph. The sparsest cut of a graph can be approximated through the second smallest eigenvalue of its Laplacian by Cheeger's inequality. It can also be used to construct low-dimensional embeddings, which can be useful for a variety of machine learning applications.

Definition

Laplacian matrix for simple graphs

Given a simple graph ${\displaystyle G}$ with ${\displaystyle n}$ vertices, its Laplacian matrix ${\textstyle L_{n\times n}}$ is defined as:[1]

${\displaystyle L=D-A,}$

where D is the degree matrix and A is the adjacency matrix of the graph. Since ${\textstyle G}$ is a simple graph, ${\textstyle A}$ only contains 1s or 0s and its diagonal elements are all 0s.

In the case of directed graphs, either the indegree or outdegree might be used, depending on the application.

The elements of ${\textstyle L}$ are given by

${\displaystyle L_{i,j}:={\begin{cases}\deg(v_{i})&{\mbox{if}}\ i=j\\-1&{\mbox{if}}\ i\neq j\ {\mbox{and}}\ v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}\end{cases}}}$

where ${\displaystyle \operatorname {deg} (v_{i})}$ is the degree of the vertex ${\displaystyle v_{i}}$.
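
The definition ${\textstyle L=D-A}$ is straightforward to compute; the following NumPy sketch (illustrative, not part of the original article; the 4-cycle graph is an arbitrary choice) builds the Laplacian from an adjacency matrix:

```python
import numpy as np

# Adjacency matrix of a 4-cycle: vertices 0-1-2-3-0 (illustrative choice).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

D = np.diag(A.sum(axis=1))  # degree matrix: row sums of A on the diagonal
L = D - A                   # Laplacian matrix
print(L)
```

Each diagonal entry of L is a vertex degree and each off-diagonal entry is −1 exactly where the corresponding vertices are adjacent, matching the element-wise formula above.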

Symmetric normalized Laplacian

The symmetric normalized Laplacian matrix is defined as:[1]

${\displaystyle L^{\text{sym}}:=D^{-{\frac {1}{2}}}LD^{-{\frac {1}{2}}}=I-D^{-{\frac {1}{2}}}AD^{-{\frac {1}{2}}}}$

The elements of ${\textstyle L^{\text{sym}}}$ are given by

${\displaystyle L_{i,j}^{\text{sym}}:={\begin{cases}1&{\mbox{if }}i=j{\mbox{ and }}\deg(v_{i})\neq 0\\-{\frac {1}{\sqrt {\deg(v_{i})\deg(v_{j})}}}&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}}$

Random walk normalized Laplacian

The random-walk normalized Laplacian matrix is defined as:

${\displaystyle L^{\text{rw}}:=D^{-1}L=I-D^{-1}A}$

The elements of ${\textstyle L^{\text{rw}}}$ are given by

${\displaystyle L_{i,j}^{\text{rw}}:={\begin{cases}1&{\mbox{if }}i=j{\mbox{ and }}\deg(v_{i})\neq 0\\-{\frac {1}{\deg(v_{i})}}&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}}$
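
Both normalized Laplacians can be computed directly from the definitions; a minimal NumPy sketch (illustrative; the path graph 0–1–2 is an arbitrary choice with no isolated vertices, so D is invertible):

```python
import numpy as np

# Path graph 0-1-2 (illustrative); all degrees positive, so D is invertible.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
D = np.diag(A.sum(axis=1))
L = D - A

d = np.diag(D).astype(float)
L_sym = np.diag(d**-0.5) @ L @ np.diag(d**-0.5)  # D^{-1/2} L D^{-1/2}
L_rw  = np.diag(1.0 / d) @ L                     # D^{-1} L

# Off-diagonal entries match the element-wise formulas:
# L_sym[0,1] = -1/sqrt(deg(v0) deg(v1)), L_rw[0,1] = -1/deg(v0).
print(L_sym[0, 1], L_rw[0, 1])
```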

Generalized Laplacian

The generalized Laplacian ${\displaystyle Q}$ is defined as:[2]

${\displaystyle {\begin{cases}Q_{i,j}<0&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\Q_{i,j}=0&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is not adjacent to }}v_{j}\\{\mbox{any number}}&{\mbox{otherwise}}.\end{cases}}}$

Notice the ordinary Laplacian is a generalized Laplacian.

Example

Here is a simple example of a labelled, undirected graph and its Laplacian matrix.

Degree matrix

${\textstyle \left({\begin{array}{rrrrrr}2&0&0&0&0&0\\0&3&0&0&0&0\\0&0&2&0&0&0\\0&0&0&3&0&0\\0&0&0&0&3&0\\0&0&0&0&0&1\\\end{array}}\right)}$

Adjacency matrix

${\textstyle \left({\begin{array}{rrrrrr}0&1&0&0&1&0\\1&0&1&0&1&0\\0&1&0&1&0&0\\0&0&1&0&1&1\\1&1&0&1&0&0\\0&0&0&1&0&0\\\end{array}}\right)}$

Laplacian matrix

${\textstyle \left({\begin{array}{rrrrrr}2&-1&0&0&-1&0\\-1&3&-1&0&-1&0\\0&-1&2&-1&0&0\\0&0&-1&3&-1&-1\\-1&-1&0&-1&3&0\\0&0&0&-1&0&1\\\end{array}}\right)}$
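
The matrices in this example can be reproduced numerically; the following NumPy sketch (illustrative, not part of the original article) rebuilds the degree and Laplacian matrices from the adjacency matrix above:

```python
import numpy as np

# Adjacency matrix of the six-vertex example graph above.
A = np.array([[0, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 0, 1, 0, 0]])
D = np.diag(A.sum(axis=1))  # degrees on the diagonal
L = D - A                   # reproduces the Laplacian matrix shown above
print(np.diag(D))
print(L)
```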

Properties

For an (undirected) graph G and its Laplacian matrix L with eigenvalues ${\textstyle \lambda _{0}\leq \lambda _{1}\leq \cdots \leq \lambda _{n-1}}$:

• L is symmetric.
• L is positive-semidefinite (that is ${\textstyle \lambda _{i}\geq 0}$ for all ${\textstyle i}$). This is verified in the incidence matrix section (below). This can also be seen from the fact that the Laplacian is symmetric and diagonally dominant.
• L is an M-matrix (its off-diagonal entries are nonpositive, yet the real parts of its eigenvalues are nonnegative).
• Every row sum and column sum of L is zero. Indeed, in the sum, the degree of the vertex is summed with a "−1" for each neighbor.
• In consequence, ${\textstyle \lambda _{0}=0}$, because the vector ${\textstyle \mathbf {v} _{0}=(1,1,\dots ,1)}$ satisfies ${\textstyle L\mathbf {v} _{0}=\mathbf {0} .}$ This also implies that the Laplacian matrix is singular.
• The number of connected components in the graph is the dimension of the nullspace of the Laplacian and the algebraic multiplicity of the 0 eigenvalue.
• The smallest non-zero eigenvalue of L is called the spectral gap.
• The second smallest eigenvalue of L (could be zero) is the algebraic connectivity (or Fiedler value) of G and approximates the sparsest cut of a graph.
• The Laplacian is an operator on the n-dimensional vector space of functions ${\textstyle f:V\to \mathbb {R} }$, where ${\textstyle V}$ is the vertex set of G, and ${\textstyle n=|V|}$.
• When G is k-regular, the normalized Laplacian is: ${\textstyle {\mathcal {L}}={\tfrac {1}{k}}L=I-{\tfrac {1}{k}}A}$, where A is the adjacency matrix and I is an identity matrix.
• For a graph with multiple connected components, L is a block diagonal matrix, where each block is the respective Laplacian matrix for each component, possibly after reordering the vertices (i.e. L is permutation-similar to a block diagonal matrix).
• The trace of the Laplacian matrix L is equal to ${\textstyle 2m}$ where ${\textstyle m}$ is the number of edges of the considered graph.
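
Several of these properties can be verified numerically. The following NumPy sketch (illustrative; the graph of two disjoint triangles is an arbitrary choice) checks the zero row sums, the trace identity, and that the multiplicity of the eigenvalue 0 equals the number of connected components:

```python
import numpy as np

# Two disjoint triangles: 6 vertices, 6 edges, 2 connected components.
A = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

eigvals = np.linalg.eigvalsh(L)              # L symmetric: real eigenvalues
assert (L.sum(axis=1) == 0).all()            # every row sum is zero
assert L.trace() == 2 * 6                    # trace = 2m, here m = 6 edges
assert np.sum(np.isclose(eigvals, 0)) == 2   # one 0 per connected component
assert eigvals[0] >= -1e-12                  # positive semi-definite
print(eigvals)
```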

Incidence matrix

Define an ${\textstyle |e|\times |v|}$ oriented incidence matrix M with element Mev for edge e (connecting vertex i and j, with i > j) and vertex v given by

${\displaystyle M_{ev}=\left\{{\begin{array}{rl}1,&{\text{if }}v=i\\-1,&{\text{if }}v=j\\0,&{\text{otherwise}}.\end{array}}\right.}$

Then the Laplacian matrix L satisfies

${\displaystyle L=M^{\textsf {T}}M\,,}$

where ${\textstyle M^{\textsf {T}}}$ is the matrix transpose of M.

Now consider an eigendecomposition of ${\textstyle L}$, with unit-norm eigenvectors ${\textstyle \mathbf {v} _{i}}$ and corresponding eigenvalues ${\textstyle \lambda _{i}}$:

${\displaystyle {\begin{aligned}\lambda _{i}&=\mathbf {v} _{i}^{\textsf {T}}L\mathbf {v} _{i}\\&=\mathbf {v} _{i}^{\textsf {T}}M^{\textsf {T}}M\mathbf {v} _{i}\\&=\left(M\mathbf {v} _{i}\right)^{\textsf {T}}\left(M\mathbf {v} _{i}\right).\end{aligned}}}$

Because ${\textstyle \lambda _{i}}$ can be written as the inner product of the vector ${\textstyle M\mathbf {v} _{i}}$ with itself, this shows that ${\textstyle \lambda _{i}\geq 0}$ and so the eigenvalues of ${\textstyle L}$ are all non-negative.
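
The factorization ${\textstyle L=M^{\textsf {T}}M}$ is easy to check in code; an illustrative NumPy sketch (the four-vertex graph and edge orientations are arbitrary choices following the i > j convention above):

```python
import numpy as np

# Edges (i, j) with i > j: M gets +1 at vertex i and -1 at vertex j.
edges = [(1, 0), (2, 1), (2, 0), (3, 2)]
n = 4
M = np.zeros((len(edges), n))          # |e| x |v| oriented incidence matrix
for e, (i, j) in enumerate(edges):
    M[e, i], M[e, j] = 1, -1

A = np.zeros((n, n), dtype=int)        # adjacency matrix of the same graph
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

assert np.array_equal(M.T @ M, L)                # L = M^T M
assert np.all(np.linalg.eigvalsh(L) >= -1e-12)   # hence L is PSD
```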

Deformed Laplacian

The deformed Laplacian is commonly defined as

${\displaystyle \Delta (s)=I-sA+s^{2}(D-I)}$

where I is the unit matrix, A is the adjacency matrix, and D is the degree matrix, and s is a (complex-valued) number. Note that the standard Laplacian is just ${\textstyle \Delta (1)}$.[3]
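
A quick numerical check of the identity ${\textstyle \Delta (1)=L}$ (illustrative NumPy sketch; the triangle graph is an arbitrary choice):

```python
import numpy as np

def deformed_laplacian(A, s):
    """Delta(s) = I - s A + s^2 (D - I)."""
    n = len(A)
    D = np.diag(A.sum(axis=1))
    return np.eye(n) - s * A + s**2 * (D - np.eye(n))

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])                 # triangle graph (illustrative)
L = np.diag(A.sum(axis=1)) - A
assert np.array_equal(deformed_laplacian(A, 1), L)   # Delta(1) = L
```

Note also that ${\textstyle \Delta (0)=I}$, since both the ${\textstyle s}$ and ${\textstyle s^{2}}$ terms vanish.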

Signless Laplacian

The signless Laplacian is defined as

${\displaystyle Q=D+A}$

where ${\displaystyle D}$ is the degree matrix, and ${\displaystyle A}$ is the adjacency matrix.[4] Like the ordinary Laplacian ${\displaystyle L}$, the signless Laplacian ${\displaystyle Q}$ is also positive semi-definite, as it can be factored as

${\displaystyle Q=RR^{T}}$

where ${\textstyle R}$ is the unoriented incidence matrix. ${\displaystyle Q}$ has a 0-eigenvector if and only if the graph has a bipartite connected component other than isolated vertices. This can be shown as

${\displaystyle 0=\mathbf {x} ^{T}Q\mathbf {x} =\mathbf {x} ^{T}RR^{T}\mathbf {x} =\left\|R^{T}\mathbf {x} \right\|^{2}\implies R^{T}\mathbf {x} =\mathbf {0} .}$

This has a solution with ${\displaystyle \mathbf {x} \neq \mathbf {0} }$ if and only if the graph has a bipartite connected component.
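
The bipartite criterion can be observed numerically; an illustrative NumPy sketch (the 4-cycle, which is bipartite, and the triangle, which is not, are arbitrary choices):

```python
import numpy as np

def signless_laplacian(A):
    """Q = D + A."""
    return np.diag(A.sum(axis=1)) + A

# Bipartite example: 4-cycle -> smallest eigenvalue of Q is 0.
A_bip = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]])
# Non-bipartite example: triangle -> smallest eigenvalue of Q is positive.
A_tri = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])

q_bip = np.linalg.eigvalsh(signless_laplacian(A_bip))
q_tri = np.linalg.eigvalsh(signless_laplacian(A_tri))
print(q_bip.min(), q_tri.min())
```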

Symmetric normalized Laplacian

The (symmetric) normalized Laplacian is defined as

${\displaystyle L^{\text{sym}}:=D^{-{\frac {1}{2}}}LD^{-{\frac {1}{2}}}=I-D^{-{\frac {1}{2}}}AD^{-{\frac {1}{2}}}}$

where L is the (unnormalized) Laplacian, A is the adjacency matrix and D is the degree matrix. Since the degree matrix D is diagonal and positive, its reciprocal square root ${\textstyle D^{-{\frac {1}{2}}}}$ is just the diagonal matrix whose diagonal entries are the reciprocals of the positive square roots of the diagonal entries of D. The symmetric normalized Laplacian is a symmetric matrix.

One has: ${\textstyle L^{\text{sym}}=SS^{*}}$, where S is the matrix whose rows are indexed by the vertices and whose columns are indexed by the edges of G such that each column corresponding to an edge e = {u, v} has an entry ${\textstyle {\frac {1}{\sqrt {d_{u}}}}}$ in the row corresponding to u, an entry ${\textstyle -{\frac {1}{\sqrt {d_{v}}}}}$ in the row corresponding to v, and has 0 entries elsewhere. (Note: ${\textstyle S^{*}}$ denotes the transpose of S).

All eigenvalues of the normalized Laplacian are real and non-negative. We can see this as follows. Since ${\textstyle L^{\text{sym}}}$ is symmetric, its eigenvalues are real. They are also non-negative: consider an eigenvector ${\textstyle g}$ of ${\textstyle L^{\text{sym}}}$ with eigenvalue λ and suppose ${\textstyle g=D^{\frac {1}{2}}f}$. (We can consider g and f as real functions on the vertices v.) Then:

${\displaystyle \lambda \ =\ {\frac {\langle g,L^{\text{sym}}g\rangle }{\langle g,g\rangle }}\ =\ {\frac {\left\langle g,D^{-{\frac {1}{2}}}LD^{-{\frac {1}{2}}}g\right\rangle }{\langle g,g\rangle }}\ =\ {\frac {\langle f,Lf\rangle }{\left\langle D^{\frac {1}{2}}f,D^{\frac {1}{2}}f\right\rangle }}\ =\ {\frac {\sum _{u\sim v}(f(u)-f(v))^{2}}{\sum _{v}f(v)^{2}d_{v}}}\ \geq \ 0,}$

where we use the inner product ${\textstyle \langle f,g\rangle =\sum _{v}f(v)g(v)}$, a sum over all vertices v, and ${\textstyle \sum _{u\sim v}}$ denotes the sum over all unordered pairs of adjacent vertices {u,v}. The quantity ${\textstyle \sum _{u\sim v}(f(u)-f(v))^{2}}$ is called the Dirichlet sum of f, whereas the expression ${\textstyle {\frac {\left\langle g,L^{\text{sym}}g\right\rangle }{\langle g,g\rangle }}}$ is called the Rayleigh quotient of g.

Let 1 be the function which assumes the value 1 on each vertex. Then ${\textstyle D^{\frac {1}{2}}1}$ is an eigenfunction of ${\textstyle L^{\text{sym}}}$ with eigenvalue 0.[5]

In fact, the eigenvalues of the normalized symmetric Laplacian satisfy 0 = μ0 ≤ … ≤ μn−1 ≤ 2. These eigenvalues (known as the spectrum of the normalized Laplacian) relate well to other graph invariants for general graphs.[6]

Random walk normalized Laplacian

The random walk normalized Laplacian is defined as

${\displaystyle L^{\text{rw}}:=D^{-1}L}$

where D is the degree matrix. Since the degree matrix D is diagonal, its inverse ${\textstyle D^{-1}}$ is simply defined as a diagonal matrix, having diagonal entries which are the reciprocals of the corresponding positive diagonal entries of D.

For the isolated vertices (those with degree 0), a common choice is to set the corresponding element ${\textstyle L_{i,i}^{\text{rw}}}$ to 0.

This convention results in a nice property that the multiplicity of the eigenvalue 0 is equal to the number of connected components in the graph.

The matrix elements of ${\textstyle L^{\text{rw}}}$ are given by

${\displaystyle L_{i,j}^{\text{rw}}:={\begin{cases}1&{\mbox{if}}\ i=j\ {\mbox{and}}\ \deg(v_{i})\neq 0\\-{\frac {1}{\deg(v_{i})}}&{\mbox{if}}\ i\neq j\ {\mbox{and}}\ v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}}$

The name of the random-walk normalized Laplacian comes from the fact that this matrix is ${\textstyle L^{\text{rw}}=I-P}$, where ${\textstyle P=D^{-1}A}$ is simply the transition matrix of a random walker on the graph. For example, let ${\textstyle e_{i}}$ denote the i-th standard basis vector. Then ${\textstyle x=e_{i}P}$ is a probability vector representing the distribution of a random walker's locations after taking a single step from vertex ${\textstyle i}$; i.e., ${\textstyle x_{j}=\mathbb {P} \left(v_{i}\to v_{j}\right)}$. More generally, if the vector ${\textstyle x}$ is a probability distribution of the location of a random walker on the vertices of the graph, then ${\textstyle x'=xP^{t}}$ is the probability distribution of the walker after ${\textstyle t}$ steps.

One can check that

${\displaystyle L^{\text{rw}}=I-D^{-{\frac {1}{2}}}\left(I-L^{\text{sym}}\right)D^{\frac {1}{2}}}$,

i.e., ${\textstyle L^{\text{rw}}}$ is similar to the normalized Laplacian ${\textstyle L^{\text{sym}}}$. For this reason, even if ${\textstyle L^{\text{rw}}}$ is in general not hermitian, it has real eigenvalues. Indeed, its eigenvalues agree with those of ${\textstyle L^{\text{sym}}}$ (which is hermitian).
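
The agreement of the spectra can be checked numerically; an illustrative NumPy sketch (the five-vertex graph is an arbitrary connected choice with no isolated vertices):

```python
import numpy as np

A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 1, 0, 1, 0]])
L = np.diag(A.sum(axis=1)) - A
d = A.sum(axis=1).astype(float)

L_sym = np.diag(d**-0.5) @ L @ np.diag(d**-0.5)
L_rw  = np.diag(1.0 / d) @ L

# L_rw is not symmetric, yet its eigenvalues are real and equal to L_sym's.
ev_rw  = np.sort(np.linalg.eigvals(L_rw).real)
ev_sym = np.sort(np.linalg.eigvalsh(L_sym))
assert np.allclose(ev_rw, ev_sym)
```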

Graphs

As an aside about random walks on graphs, consider a simple undirected graph. Consider the probability that the walker is at vertex i at time t, given the probability distribution of the walker's position at time t − 1 (assuming a uniform chance of taking a step along any of the edges attached to a given vertex):

${\displaystyle p_{i}(t)=\sum _{j}{\frac {A_{ij}}{\deg \left(v_{j}\right)}}p_{j}(t-1),}$

or in matrix-vector notation:

${\displaystyle p(t)=AD^{-1}p(t-1).}$

(Equilibrium, which sets in as ${\textstyle t\rightarrow \infty }$, is defined by ${\textstyle p=AD^{-1}p}$.)

We can rewrite this relation as

${\displaystyle D^{-{\frac {1}{2}}}p(t)=\left[D^{-{\frac {1}{2}}}AD^{-{\frac {1}{2}}}\right]D^{-{\frac {1}{2}}}p(t-1).}$

${\textstyle A_{\text{reduced}}\equiv D^{-{\frac {1}{2}}}AD^{-{\frac {1}{2}}}}$ is a symmetric matrix called the reduced adjacency matrix. So, taking steps on this random walk requires taking powers of ${\textstyle A_{\text{reduced}}}$, which is a simple operation because ${\textstyle A_{\text{reduced}}}$ is symmetric.
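
The iteration ${\textstyle p(t)=AD^{-1}p(t-1)}$ above can be sketched directly; an illustrative NumPy example (the triangle graph is an arbitrary choice, on which the equilibrium is uniform since all degrees are equal):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # triangle graph (illustrative)
D_inv = np.diag(1 / A.sum(axis=1))
W = A @ D_inv            # column-stochastic: p(t) = A D^{-1} p(t-1)

p = np.array([1.0, 0.0, 0.0])            # walker starts at vertex 0
for _ in range(50):
    p = W @ p
print(p)   # approaches the equilibrium distribution (uniform here)
```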

Interpretation as the discrete Laplace operator

The Laplacian matrix can be interpreted as a matrix representation of a particular case of the discrete Laplace operator. Such an interpretation allows one, e.g., to generalise the Laplacian matrix to the case of graphs with an infinite number of vertices and edges, leading to a Laplacian matrix of an infinite size.

Suppose ${\textstyle \phi }$ describes a heat distribution across a graph, where ${\textstyle \phi _{i}}$ is the heat at vertex ${\textstyle i}$. According to Newton's law of cooling, the heat transferred between nodes ${\textstyle i}$ and ${\textstyle j}$ is proportional to ${\textstyle \phi _{i}-\phi _{j}}$ if nodes ${\textstyle i}$ and ${\textstyle j}$ are connected (if they are not connected, no heat is transferred). Then, for heat capacity ${\textstyle k}$,

${\displaystyle {\begin{aligned}{\frac {d\phi _{i}}{dt}}&=-k\sum _{j}A_{ij}\left(\phi _{i}-\phi _{j}\right)\\&=-k\left(\phi _{i}\sum _{j}A_{ij}-\sum _{j}A_{ij}\phi _{j}\right)\\&=-k\left(\phi _{i}\ \deg(v_{i})-\sum _{j}A_{ij}\phi _{j}\right)\\&=-k\sum _{j}\left(\delta _{ij}\ \deg(v_{i})-A_{ij}\right)\phi _{j}\\&=-k\sum _{j}\left(\ell _{ij}\right)\phi _{j}.\end{aligned}}}$

In matrix-vector notation,

${\displaystyle {\begin{aligned}{\frac {d\phi }{dt}}&=-k(D-A)\phi \\&=-kL\phi ,\end{aligned}}}$

which gives

${\displaystyle {\frac {d\phi }{dt}}+kL\phi =0.}$

Notice that this equation takes the same form as the heat equation, where the matrix −L is replacing the Laplacian operator ${\textstyle \nabla ^{2}}$; hence, the "graph Laplacian".

To find a solution to this differential equation, apply standard techniques for solving a first-order matrix differential equation. That is, write ${\textstyle \phi }$ as a linear combination of eigenvectors ${\textstyle \mathbf {v} _{i}}$ of L (so that ${\textstyle L\mathbf {v} _{i}=\lambda _{i}\mathbf {v} _{i}}$), with time-dependent coefficients ${\textstyle c_{i}(t)}$: ${\textstyle \phi =\sum _{i}c_{i}\mathbf {v} _{i}.}$

Plugging into the original expression (note that we will use the fact that because L is a symmetric matrix, its unit-norm eigenvectors ${\textstyle \mathbf {v} _{i}}$ are orthogonal):

${\displaystyle {\begin{aligned}{\frac {d\left(\sum _{i}c_{i}\mathbf {v} _{i}\right)}{dt}}+kL\left(\sum _{i}c_{i}\mathbf {v} _{i}\right)&=0\\\sum _{i}\left[{\frac {dc_{i}}{dt}}\mathbf {v} _{i}+kc_{i}L\mathbf {v} _{i}\right]&=0\\\sum _{i}\left[{\frac {dc_{i}}{dt}}\mathbf {v} _{i}+kc_{i}\lambda _{i}\mathbf {v} _{i}\right]&=0\\{\frac {dc_{i}}{dt}}+k\lambda _{i}c_{i}&=0,\end{aligned}}}$

whose solution is

${\displaystyle c_{i}(t)=c_{i}(0)e^{-k\lambda _{i}t}.}$

As shown before, the eigenvalues ${\textstyle \lambda _{i}}$ of L are non-negative, showing that the solution to the diffusion equation approaches an equilibrium, because it only exponentially decays or remains constant. This also shows that given ${\textstyle \lambda _{i}}$ and the initial condition ${\textstyle c_{i}(0)}$, the solution at any time t can be found.[7]

To find ${\textstyle c_{i}(0)}$ for each ${\textstyle i}$ in terms of the overall initial condition ${\textstyle \phi (0)}$, simply project ${\textstyle \phi (0)}$ onto the unit-norm eigenvectors ${\textstyle \mathbf {v} _{i}}$;

${\displaystyle c_{i}(0)=\left\langle \phi (0),\mathbf {v} _{i}\right\rangle }$.

In the case of undirected graphs, this works because ${\textstyle L}$ is symmetric, and by the spectral theorem, its eigenvectors are all orthogonal. So the projection onto the eigenvectors of ${\textstyle L}$ is simply an orthogonal coordinate transformation of the initial condition to a set of coordinates which decay exponentially and independently of each other.

Equilibrium behavior

To understand ${\textstyle \lim _{t\to \infty }\phi (t)}$, note that the only terms ${\textstyle c_{i}(t)=c_{i}(0)e^{-k\lambda _{i}t}}$ that remain are those where ${\textstyle \lambda _{i}=0}$, since

${\displaystyle \lim _{t\to \infty }e^{-k\lambda _{i}t}={\begin{cases}0&{\mbox{if }}\lambda _{i}>0\\1&{\mbox{if }}\lambda _{i}=0.\end{cases}}}$

In other words, the equilibrium state of the system is determined completely by the kernel of ${\textstyle L}$.

Since by definition, ${\textstyle \sum _{j}L_{ij}=0}$, the all-ones vector is in the kernel. Note also that if there are ${\textstyle k}$ disjoint connected components in the graph, then this vector of all ones can be split into the sum of ${\textstyle k}$ independent ${\textstyle \lambda =0}$ eigenvectors of ones and zeros, where each connected component corresponds to an eigenvector with ones at the elements in the connected component and zeros elsewhere.

The consequence of this is that for a given initial condition ${\textstyle \phi (0)}$ for a connected graph with ${\textstyle N}$ vertices

${\displaystyle \lim _{t\to \infty }\phi (t)=\left\langle \phi (0),\mathbf {v^{1}} \right\rangle \mathbf {v^{1}} }$

where

${\displaystyle \mathbf {v^{1}} ={\frac {1}{\sqrt {N}}}[1,1,...,1]}$

For each element ${\textstyle \phi _{j}}$ of ${\textstyle \phi }$, i.e. for each vertex ${\textstyle j}$ in the graph, this can be rewritten as

${\displaystyle \lim _{t\to \infty }\phi _{j}(t)={\frac {1}{N}}\sum _{i=1}^{N}\phi _{i}(0)}$.

In other words, at steady state, the value of ${\textstyle \phi }$ converges to the same value at each of the vertices of the graph, which is the average of the initial values at all of the vertices. Since this is the solution to the heat diffusion equation, this makes perfect sense intuitively. We expect that neighboring elements in the graph will exchange energy until that energy is spread out evenly throughout all of the elements that are connected to each other.
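
The convergence to the average of the initial values can be checked numerically; a minimal NumPy sketch (illustrative: a four-vertex path graph, k = 1, and heat initially concentrated at one vertex):

```python
import numpy as np

# Heat diffusion: phi(t) = sum_i c_i(0) e^{-k lambda_i t} v_i.
A = np.zeros((4, 4))
for i in range(3):
    A[i, i + 1] = A[i + 1, i] = 1       # path graph 0-1-2-3
L = np.diag(A.sum(axis=1)) - A

lam, V = np.linalg.eigh(L)              # orthonormal eigenvectors (columns)
phi0 = np.array([4.0, 0.0, 0.0, 0.0])   # all heat starts at vertex 0
c0 = V.T @ phi0                         # project onto the eigenvectors

k, t = 1.0, 100.0
phi_t = V @ (c0 * np.exp(-k * lam * t)) # solution at time t
print(phi_t)                            # ~[1, 1, 1, 1]: the average of phi(0)
```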

Example of the operator on a grid

This section shows an example of a function ${\textstyle \phi }$ diffusing over time through a graph. The graph in this example is constructed on a 2D discrete grid, with points on the grid connected to their eight neighbors. Three initial points are specified to have a positive value, while the rest of the values in the grid are zero. Over time, the exponential decay acts to distribute the values at these points evenly throughout the entire grid.

The complete Matlab source code that was used to generate this animation is provided below. It shows the process of specifying initial conditions, projecting these initial conditions onto the eigenvectors of the Laplacian matrix, and simulating the exponential decay of these projected initial conditions.

N = 20; % The number of pixels along a dimension of the image
Adj = zeros(N*N, N*N); % The adjacency matrix of the grid graph

% Use 8 neighbors, and fill in the adjacency matrix
dx = [-1, 0, 1, -1, 1, -1, 0, 1];
dy = [-1, -1, -1, 0, 0, 1, 1, 1];
for x = 1:N
    for y = 1:N
        index = (x-1)*N + y;
        for ne = 1:length(dx)
            newx = x + dx(ne);
            newy = y + dy(ne);
            if newx > 0 && newx <= N && newy > 0 && newy <= N
                index2 = (newx-1)*N + newy;
                Adj(index, index2) = 1; % Connect this pixel to its neighbor
            end
        end
    end
end

%%%BELOW IS THE KEY CODE THAT COMPUTES THE SOLUTION TO THE DIFFERENTIAL
%%%EQUATION
Deg = diag(sum(Adj, 2));%Compute the degree matrix
L = Deg - Adj;%Compute the laplacian matrix in terms of the degree and adjacency matrices
[V, D] = eig(L);%Compute the eigenvalues/vectors of the laplacian matrix
D = diag(D);

%Initial condition (place a few large positive values around and
%make everything else zero)
C0 = zeros(N, N);
C0(2:5, 2:5) = 5;
C0(10:15, 10:15) = 10;
C0(2:5, 8:13) = 7;
C0 = C0(:);

C0V = V'*C0;%Transform the initial condition into the coordinate system
%of the eigenvectors
for t = 0:0.05:5
    % Loop through times and decay each initial component
    Phi = C0V.*exp(-D*t); % Exponential decay for each component
    Phi = V*Phi; % Transform from eigenvector coordinate system to original coordinate system
    Phi = reshape(Phi, N, N);
    % Display the results and write to GIF file
    imagesc(Phi);
    caxis([0, 10]);
    title(sprintf('Diffusion t = %.3f', t));
    frame = getframe(1);
    im = frame2im(frame);
    [imind, cm] = rgb2ind(im, 256);
    if t == 0
        imwrite(imind, cm, 'out.gif', 'gif', 'Loopcount', inf, 'DelayTime', 0.1);
    else
        imwrite(imind, cm, 'out.gif', 'gif', 'WriteMode', 'append', 'DelayTime', 0.1);
    end
end


Approximation to the negative continuous Laplacian

The graph Laplacian matrix can be further viewed as a matrix form of an approximation to the (positive semi-definite) Laplacian operator obtained by the finite difference method.[8] In this interpretation, every graph vertex is treated as a grid point; the local connectivity of the vertex determines the finite difference approximation stencil at this grid point, the grid size is always one for every edge, and there are no constraints on any grid points, which corresponds to the case of the homogeneous Neumann boundary condition, i.e., free boundary.
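
For a path graph this correspondence is concrete: the graph Laplacian reproduces the standard one-dimensional second-difference stencil with free ends. An illustrative NumPy sketch (the five-point path is an arbitrary choice):

```python
import numpy as np

# Laplacian of a path graph vs. the second-difference stencil [-1, 2, -1].
n = 5
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # path graph
L = np.diag(A.sum(axis=1)) - A
print(L)
# Interior rows read [..., -1, 2, -1, ...]; the boundary rows have diagonal
# entry 1, reflecting the homogeneous Neumann (free) boundary condition.
```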

Directed multigraphs

An analogue of the Laplacian matrix can be defined for directed multigraphs.[9] In this case the Laplacian matrix L is defined as

${\displaystyle L=D-A}$

where D is a diagonal matrix with Di,i equal to the outdegree of vertex i and A is a matrix with Ai,j equal to the number of edges from i to j (including loops).
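
An illustrative NumPy sketch of this construction (the three-vertex multigraph, with a double edge and a loop, is an arbitrary choice):

```python
import numpy as np

# Directed multigraph on 3 vertices: A[i, j] counts edges from i to j,
# including a double edge 0 -> 1 and a loop at vertex 2 (illustrative).
A = np.array([[0, 2, 1],
              [0, 0, 1],
              [1, 0, 1]])
D = np.diag(A.sum(axis=1))   # out-degrees, loops included
L = D - A
print(L)                     # every row still sums to zero
```

Loops contribute equally to the out-degree and to the diagonal of A, so they cancel in L.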

References

1. Weisstein, Eric W. "Laplacian Matrix". MathWorld.
2. Godsil, C.; Royle, G. (2001). Algebraic Graph Theory, Graduate Texts in Mathematics. Springer-Verlag.
3. Morbidi, F. (2013). "The Deformed Consensus Protocol" (PDF). Automatica. 49 (10): 3049–3055. doi:10.1016/j.automatica.2013.07.006.
4. Cvetković, Dragoš; Simić, Slobodan K. (2010). "Towards a Spectral Theory of Graphs Based on the Signless Laplacian, III". Applicable Analysis and Discrete Mathematics. 4 (1): 156–166. ISSN 1452-8630.
5. Chung, Fan R. K. (1997). Spectral graph theory (Repr. with corr., 2. [pr.] ed.). Providence, RI: American Math. Soc. ISBN 978-0-8218-0315-8.
6. Chung, Fan (1997) [1992]. Spectral Graph Theory. American Mathematical Society. ISBN 978-0821803158.
7. Newman, Mark (2010). Networks: An Introduction. Oxford University Press. ISBN 978-0199206650.
8. Smola, Alexander J.; Kondor, Risi (2003), "Kernels and regularization on graphs", Learning Theory and Kernel Machines: 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003, Washington, DC, USA, August 24–27, 2003, Proceedings, Lecture Notes in Computer Science, 2777, Springer, pp. 144–158, CiteSeerX 10.1.1.3.7020, doi:10.1007/978-3-540-45167-9_12, ISBN 978-3-540-40720-1.
9. Chaiken, S.; Kleitman, D. (1978). "Matrix Tree Theorems". Journal of Combinatorial Theory, Series A. 24 (3): 377–381. doi:10.1016/0097-3165(78)90067-5. ISSN 0097-3165.
• T. Sunada, "Discrete geometric analysis", Proceedings of Symposia in Pure Mathematics, (ed. by P. Exner, J. P. Keating, P. Kuchment, T. Sunada, A. Teplyaev), 77 (2008), 51–86.
• B. Bollobás, Modern Graph Theory, Springer-Verlag (1998, corrected ed. 2013), ISBN 0-387-98488-7, Chapters II.3 (Vector Spaces and Matrices Associated with Graphs), VIII.2 (The Adjacency Matrix and the Laplacian), IX.2 (Electrical Networks and Random Walks).