Betweenness centrality

In graph theory, betweenness centrality is a measure of centrality in a graph based on shortest paths. For every pair of vertices in a connected graph, there exists at least one shortest path between the vertices, that is, a path that minimizes either the number of edges the path passes through (for unweighted graphs) or the sum of the weights of the edges (for weighted graphs). The betweenness centrality of a vertex measures how often these shortest paths pass through the vertex: for each pair of other vertices, the fraction of their shortest paths that pass through the vertex is summed.

Betweenness centrality was devised as a general measure of centrality:[1] it applies to a wide range of problems in network theory, including problems related to social networks, biology, transport and scientific cooperation. Although earlier authors had intuitively described centrality as based on betweenness, Freeman (1977) gave the first formal definition of betweenness centrality.

Betweenness centrality finds wide application in network theory; it represents the degree to which nodes stand between each other. For example, in a telecommunications network, a node with higher betweenness centrality would have more control over the network, because more information will pass through that node.

Definition

The betweenness centrality of a node ${\displaystyle v}$ is given by the expression:

${\displaystyle g(v)=\sum _{s\neq v\neq t}{\frac {\sigma _{st}(v)}{\sigma _{st}}}}$

where ${\displaystyle \sigma _{st}}$ is the total number of shortest paths from node ${\displaystyle s}$ to node ${\displaystyle t}$ and ${\displaystyle \sigma _{st}(v)}$ is the number of those paths that pass through ${\displaystyle v}$.
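As an illustration, the definition can be evaluated by brute force on a small unweighted, undirected graph: a BFS from each node yields distances and shortest-path counts, and a node ${\displaystyle v}$ lies on a shortest ${\displaystyle s}$-${\displaystyle t}$ path exactly when ${\displaystyle d(s,v)+d(v,t)=d(s,t)}$, in which case ${\displaystyle \sigma _{st}(v)=\sigma _{sv}\cdot \sigma _{vt}}$. This is a minimal sketch over ordered pairs (names and the example graph are illustrative), not an efficient implementation:

```python
from collections import deque
from itertools import permutations

def shortest_path_counts(adj, s):
    """BFS from s: distance and number of shortest paths to each reachable node."""
    dist, sigma, queue = {s: 0}, {s: 1}, deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:              # first time w is reached
                dist[w], sigma[w] = dist[u] + 1, 0
                queue.append(w)
            if dist[w] == dist[u] + 1:     # u precedes w on a shortest path
                sigma[w] += sigma[u]
    return dist, sigma

def betweenness(adj):
    """g(v) = sum over ordered pairs s != v != t of sigma_st(v) / sigma_st."""
    info = {s: shortest_path_counts(adj, s) for s in adj}
    g = {v: 0.0 for v in adj}
    for s, t in permutations(adj, 2):
        dist_s, sigma_s = info[s]
        dist_t, sigma_t = info[t]
        if t not in dist_s:                # t unreachable from s
            continue
        for v in adj:
            if v in (s, t) or v not in dist_s or v not in dist_t:
                continue
            if dist_s[v] + dist_t[v] == dist_s[t]:   # v on a shortest s-t path
                # sigma_st(v) = sigma_sv * sigma_vt (undirected graph assumed)
                g[v] += sigma_s[v] * sigma_t[v] / sigma_s[t]
    return g

# Illustrative path graph 0-1-2-3: the interior nodes carry all shortest paths.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

On an undirected graph this counts each unordered pair twice (once per direction); dividing the scores by 2 recovers the usual undirected values.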

Note that the betweenness centrality of a node scales with the number of pairs of nodes, as the summation indices suggest. The calculation may therefore be rescaled by dividing through by the number of pairs of nodes not including ${\displaystyle v}$, so that ${\displaystyle g\in [0,1]}$. The division is by ${\displaystyle (N-1)(N-2)}$ for directed graphs and ${\displaystyle (N-1)(N-2)/2}$ for undirected graphs, where ${\displaystyle N}$ is the number of nodes in the giant component. Note that this rescales by the highest possible value, attained when one node is crossed by every single shortest path. This is often not the case, and a min-max normalization can be performed without loss of precision:

${\displaystyle {\mbox{normal}}(g(v))={\frac {g(v)-\min(g)}{\max(g)-\min(g)}}}$

which results in:

${\displaystyle \max({\mbox{normal}})=1}$
${\displaystyle \min({\mbox{normal}})=0}$

Note that this will always be a scaling from a smaller range into a larger range, so no precision is lost.
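The min-max rescaling above is straightforward to apply to a mapping of centrality scores; a small sketch (the constant-score guard is an added edge case, not part of the formula):

```python
def normalize(g):
    """Min-max rescale centrality scores into [0, 1]."""
    lo, hi = min(g.values()), max(g.values())
    if hi == lo:                      # guard: all scores equal, formula undefined
        return {v: 0.0 for v in g}
    return {v: (score - lo) / (hi - lo) for v, score in g.items()}
```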

Weighted networks

In a weighted network the links connecting the nodes are no longer treated as binary interactions, but are weighted in proportion to their capacity, influence, frequency, etc., which adds another dimension of heterogeneity within the network beyond the topological effects. A node's strength in a weighted network is given by the sum of the weights of its adjacent edges.

${\displaystyle s_{i}=\sum _{j=1}^{N}a_{ij}w_{ij}}$

where ${\displaystyle a_{ij}}$ and ${\displaystyle w_{ij}}$ are the adjacency and weight matrix entries between nodes ${\displaystyle i}$ and ${\displaystyle j}$, respectively. Analogous to the power-law distribution of degree found in scale-free networks, the strength of a given node follows a power-law distribution as well.

${\displaystyle s(k)\approx k^{\beta }}$
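The strength ${\displaystyle s_{i}}$ is simply a row sum of the element-wise product of the adjacency and weight matrices; a minimal sketch with an illustrative 3-node network:

```python
def strength(a, w):
    """s_i = sum_j a_ij * w_ij, the total weight on edges adjacent to node i."""
    return [sum(a_ij * w_ij for a_ij, w_ij in zip(row_a, row_w))
            for row_a, row_w in zip(a, w)]

# Hypothetical 3-node weighted network: node 0 is linked to nodes 1 and 2.
a = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]            # adjacency matrix
w = [[0, 2.5, 1.0], [2.5, 0, 0], [1.0, 0, 0]]    # weight matrix
```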

A study of the average value ${\displaystyle s(b)}$ of the strength for vertices with betweenness ${\displaystyle b}$ shows that the functional behavior can be approximated by a scaling form [2]

${\displaystyle s(b)\approx b^{\alpha }}$

Percolation centrality

Percolation centrality is defined for a given node, at a given time, as the proportion of ‘percolated paths’ that go through that node. A ‘percolated path’ is a shortest path between a pair of nodes, where the source node is percolated (e.g., infected). The target node can be percolated or non-percolated, or in a partially percolated state.

${\displaystyle PC^{t}(v)={\frac {1}{N-2}}\sum _{s\neq v\neq r}{\frac {\sigma _{sr}(v)}{\sigma _{sr}}}{\frac {x_{s}^{t}}{\left[\sum _{i}x_{i}^{t}\right]-x_{v}^{t}}}}$

where ${\displaystyle \sigma _{sr}}$ is the total number of shortest paths from node ${\displaystyle s}$ to node ${\displaystyle r}$ and ${\displaystyle \sigma _{sr}(v)}$ is the number of those paths that pass through ${\displaystyle v}$. The percolation state of node ${\displaystyle i}$ at time ${\displaystyle t}$ is denoted by ${\displaystyle x_{i}^{t}}$; two special cases are ${\displaystyle x_{i}^{t}=0}$, which indicates a non-percolated state at time ${\displaystyle t}$, and ${\displaystyle x_{i}^{t}=1}$, which indicates a fully percolated state at time ${\displaystyle t}$. Values in between indicate partially percolated states (e.g., in a network of townships, this would be the percentage of people infected in that town).

The weights attached to the percolation paths depend on the percolation levels assigned to the source nodes, based on the premise that the higher the percolation level of a source node is, the more important the paths originating from that node are. Nodes which lie on shortest paths originating from highly percolated nodes are therefore potentially more important to the percolation. The definition of PC may also be extended to include target node weights. Percolation centrality calculations run in ${\displaystyle O(NM)}$ time with an efficient implementation adopted from Brandes' fast algorithm; if the calculation needs to consider target node weights, the worst-case time is ${\displaystyle O(N^{3})}$.
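A direct (inefficient) sketch of the percolation-centrality formula for one time step, using BFS shortest-path counts on an unweighted, undirected graph; the graph and state map are illustrative, and the denominator is assumed nonzero:

```python
from collections import deque
from itertools import permutations

def bfs_counts(adj, s):
    """Distances and shortest-path counts from s (unweighted BFS)."""
    dist, sigma, queue = {s: 0}, {s: 1}, deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w], sigma[w] = dist[u] + 1, 0
                queue.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def percolation_centrality(adj, x):
    """PC^t(v) for one time step; x maps node -> percolation state in [0, 1]."""
    n, total = len(adj), sum(x.values())
    info = {s: bfs_counts(adj, s) for s in adj}
    pc = {v: 0.0 for v in adj}
    for s, r in permutations(adj, 2):
        dist_s, sigma_s = info[s]
        dist_r, sigma_r = info[r]
        if r not in dist_s or x[s] == 0:     # unreachable pair, or zero weight
            continue
        for v in adj:
            if v in (s, r) or v not in dist_s or v not in dist_r:
                continue
            if dist_s[v] + dist_r[v] == dist_s[r]:   # v on a shortest s-r path
                frac = sigma_s[v] * sigma_r[v] / sigma_s[r]
                pc[v] += frac * x[s] / (total - x[v])  # assumes denominator != 0
    return {v: value / (n - 2) for v, value in pc.items()}

# Illustrative path graph 0-1-2-3 with only node 0 percolated.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
state = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}
```

With only the source percolated, node 1 (adjacent to the infected source) scores highest, since every percolated path leaves through it.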

Algorithms

Calculating the betweenness and closeness centralities of all the vertices in a graph involves calculating the shortest paths between all pairs of vertices on a graph, which takes ${\displaystyle \Theta (|V|^{3})}$ time with the Floyd–Warshall algorithm, modified to not only find one but count all shortest paths between two nodes. On a sparse graph, Johnson's algorithm or Brandes' algorithm may be more efficient, both taking ${\displaystyle O(|V|^{2}\log |V|+|V||E|)}$ time. On unweighted graphs, calculating betweenness centrality takes ${\displaystyle O(|V||E|)}$ time using Brandes' algorithm.[4]
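Brandes' algorithm avoids the explicit pairwise summation by accumulating, for each source ${\displaystyle s}$, a dependency ${\displaystyle \delta _{s}(v)}$ in a single backward pass over the BFS order. A minimal sketch for unweighted graphs (adjacency as a dict of neighbor lists; scores are over ordered pairs, so results for an undirected graph should be halved):

```python
from collections import deque

def brandes_betweenness(adj):
    """Betweenness of every vertex in O(|V||E|) for an unweighted graph.

    Counts ordered (s, t) pairs; for an undirected graph, divide by 2.
    """
    cb = {v: 0.0 for v in adj}
    for s in adj:
        # Phase 1: BFS, recording shortest-path counts and predecessors.
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            stack.append(u)               # vertices in non-decreasing distance
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    queue.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
                    pred[w].append(u)
        # Phase 2: accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for u in pred[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                cb[w] += delta[w]
    return cb

# Illustrative path graph 0-1-2-3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```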

In calculating betweenness and closeness centralities of all vertices in a graph, it is assumed that graphs are undirected and connected, with loops and multiple edges allowed. When specifically dealing with network graphs, graphs are often free of loops and multiple edges, in order to maintain simple relationships (where edges represent connections between two people or vertices). In this case, the final centrality scores produced by Brandes' algorithm are divided by 2, to account for each shortest path being counted twice.[5]

Another algorithm generalizes Freeman's betweenness (computed on geodesics) and Newman's betweenness (computed on all paths) by introducing a hyper-parameter controlling the trade-off between exploration and exploitation. Its time complexity is the number of edges times the number of nodes in the graph.[6]

The concept of centrality was extended to a group level as well.[7] Group betweenness centrality shows the proportion of geodesics connecting pairs of non-group members that pass through a group of nodes. Brandes' algorithm for computing the betweenness centrality of all vertices was modified to compute the group betweenness centrality of one group of nodes with the same asymptotic running time.[7]
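A brute-force sketch of group betweenness: since ${\displaystyle \sigma _{st}}$ counts all shortest paths, and the shortest paths avoiding the group can be counted by deleting the group's vertices, the number passing through the group is the difference. This is a quadratic-pair sketch over ordered pairs, not Brandes' modified algorithm; the BFS helper is repeated so the block is self-contained:

```python
from collections import deque

def bfs_counts(adj, s):
    """Distances and shortest-path counts from s (unweighted BFS)."""
    dist, sigma, queue = {s: 0}, {s: 1}, deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w], sigma[w] = dist[u] + 1, 0
                queue.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def group_betweenness(adj, group):
    """Sum, over ordered non-group pairs, of the fraction of geodesics through the group."""
    group = set(group)
    # Subgraph with the group removed: paths here avoid the group entirely.
    sub = {u: [w for w in nbrs if w not in group]
           for u, nbrs in adj.items() if u not in group}
    gb = 0.0
    for s in sub:
        dist, sigma = bfs_counts(adj, s)
        d_sub, s_sub = bfs_counts(sub, s)
        for t in sub:
            if t == s or t not in dist:
                continue
            # Paths avoiding the group count only if they match the original length.
            avoiding = s_sub[t] if t in d_sub and d_sub[t] == dist[t] else 0
            gb += (sigma[t] - avoiding) / sigma[t]
    return gb

# Illustrative path graph 0-1-2-3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```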

Betweenness centrality is related to a network's connectivity, insofar as high-betweenness vertices have the potential to disconnect the graph if removed (see cut set).

Routing betweenness centrality generalizes betweenness centrality so that it applies to any loop-free simple-path routing scheme, rather than only to shortest paths (Dolev, Shlomi; Elovici, Yuval; Puzis, Rami (2010), "Routing betweenness centrality", J. ACM, 57 (4): 25:1–25:27, doi:10.1145/1734213.1734219).