

Applications to Sparse Random Graphs

Algorithm B.3 could easily be adapted to use adjacency-list-based data structures [154], resulting in faster execution and lower storage requirements for sparse graphs. We have implemented [274] a sparse-optimised version of Algorithm B.3 because the graph representations of the Markov chains of interest in the present work are sparse [201].

The algorithm for detaching a single intermediate node from an arbitrary graph stored in a sparse-optimised format is given in Algorithm B.4. Having chosen the node to be removed, $ \gamma $, all the neighbours $ \beta \in Adj[\gamma]$ are analysed in turn, as follows. Lines 3-9 of Algorithm B.4 find node $ \gamma $ in the adjacency list of node $ \beta $. If $ \beta $ is not a sink, lines 11-34 are executed to modify the adjacency list of node $ \beta $: lines 13-14 delete node $ \gamma $ from the adjacency list of $ \beta $, while lines 15-30 make all the neighbours $ \alpha \in Adj[\gamma]\ominus{\beta}$ of node $ \gamma $ the neighbours of $ \beta $. The symbol $ \ominus$ denotes the union minus the intersection of two sets, otherwise known as the symmetric difference. If the edge $ \beta \rightarrow \alpha$ already existed, only the branching probability is updated (line 21). Otherwise, a new edge is created and the adjacency and branching probability lists are modified accordingly (lines 26 and 27, respectively). Finally, the branching probabilities of node $ \beta $ are renormalised (lines 31-33) and the waiting time of node $ \beta $ is increased (line 34).
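
For concreteness, a minimal Python sketch of this detachment step is given below. It stores the graph as a dictionary of dictionaries, assumes symmetric connectivity (an edge $ \beta \rightarrow \gamma $ exists whenever $ \gamma \rightarrow \beta $ does) and uses the standard GT update formulas; the names P, tau and detach_node are illustrative, and the published implementations [274] remain the reference versions.

def detach_node(P, tau, gamma):
    """Sketch of Algorithm B.4: detach intermediate node gamma.

    P   : dict of dicts; P[beta][alpha] is the branching probability of
          the edge beta -> alpha (sink nodes have no entry in P)
    tau : dict; tau[beta] is the mean waiting time of node beta
    """
    for beta in list(P[gamma]):                # each neighbour of gamma
        if beta not in P:                      # beta is a sink; skip it
            continue
        p_bg = P[beta].pop(gamma)              # delete the edge beta -> gamma
        # reroute flux that passed through gamma to gamma's other neighbours
        for alpha, p_ga in P[gamma].items():
            if alpha != beta:                  # the symmetric-difference rule
                P[beta][alpha] = P[beta].get(alpha, 0.0) + p_bg * p_ga
        # renormalise; the sum equals 1 - P[beta->gamma]*P[gamma->beta]
        norm = sum(P[beta].values())
        for alpha in P[beta]:
            P[beta][alpha] /= norm
        tau[beta] = (tau[beta] + p_bg * tau[gamma]) / norm
    del P[gamma]
    del tau[gamma]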

Algorithm B.4 is invoked iteratively for every node that is neither a source nor a sink to yield a graph that is composed of source nodes and sink nodes only. Then the procedure described in Section 4.4 for disconnection of source nodes (lines 17-34 of Algorithm B.3) is applied to obtain the mean escape times for every source node. The sparse-optimised version of the second part of Algorithm B.3 is straightforward and is therefore omitted here for brevity.
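
In the single-source case the driver reduces to the sketch below; the multi-source disconnection phase is omitted, as in the text, and mean_escape_time is an illustrative name.

def mean_escape_time(P, tau, source, intermediates):
    """Detach every intermediate node; once only the source and the
    sinks remain, tau[source] is the mean escape time of the source."""
    for c in list(intermediates):
        detach_node(P, tau, c)
    return tau[source]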

The running time of Algorithm B.4 is $ \mathcal{O}(d_c \sum_{i \in Adj[c]} d_i)$, where $ d_k$ is the degree of node $ k$. When all the nodes of a graph have approximately the same degree, $ d$, the complexity is $ \mathcal{O}(d^3)$. Therefore, if there are $ N$ intermediate nodes to be detached and $ d$ is of the same order of magnitude as $ N$, the cost of detaching all $ N$ nodes is $ \mathcal{O}(N^4)$. This asymptotic bound is worse than that of Algorithm B.3 because of the searches through adjacency lists (lines 3-9 and 19-24). If $ d$ is sufficiently small, however, the algorithm based on adjacency lists is faster.
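
As a back-of-the-envelope check of these bounds, with a uniform degree $ d$ the cost of one detachment, and hence of the whole sequence of $ N$ detachments when $ d=\mathcal{O}(N)$, is

$\displaystyle d_c \sum_{i \in Adj[c]} d_i \approx d \cdot (d \cdot d) = d^3, \qquad N \cdot \mathcal{O}(d^3) = \mathcal{O}(N^4).$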

After each invocation of Algorithm B.4 the number of nodes always decreases by one. The number of edges, however, can increase or decrease, depending on the in- and out-degree of the node being removed and the connectivity of its neighbours. If node $ \gamma $ is not directly connected to any of the sinks, and the neighbours of node $ \gamma $ are not directly connected to each other, removing $ \gamma $ deletes the $ 2d_\gamma$ edges incident on it and creates $ d_\gamma(d_\gamma-1)$ new edges between its neighbours, so the total number of edges changes by $ d_\gamma(d_\gamma-3)$. Therefore, the number of edges decreases (by $ 2$) only when $ d_\gamma \in \{1,2\}$, and the number of edges does not change if the degree is $ 3$. For $ d_\gamma>3$ the number of edges increases by an amount that grows quadratically with $ d_\gamma$. The actual increase depends on how many connections already existed between the neighbours of $ \gamma $.
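
A quick numerical check of the net change $ d_\gamma(d_\gamma-3)$ derived above:

>>> [d * (d - 3) for d in (1, 2, 3, 4, 5, 6)]
[-2, -2, 0, 4, 10, 18]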

The order in which the intermediate nodes are detached does not change the final result and is unimportant if the graph is complete. For sparse graphs, however, the order can affect the running time significantly. If the degree distribution for successive graphs is sharp, with the same average, $ d$, then the order in which the nodes are removed does not affect the complexity, which is $ \mathcal{O}(d^3 N)$. If the distributions are broad, it is helpful to remove the nodes with smaller degrees first. We used a Fibonacci heap min-priority queue [276] to achieve this ordering. The overhead of maintaining the heap is $ d_\gamma$ increase-key operations (each $ \mathcal{O}(\log N)$) per execution of Algorithm B.4. Fortran and Python implementations of Algorithm B.4 are available online [274].
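
In the absence of a Fibonacci heap implementation, a lazy-deletion binary heap from the Python standard library reproduces this ordering in a sketch; stale heap entries are skipped when popped rather than re-keyed, and gt_min_degree is an illustrative name.

import heapq

def gt_min_degree(P, tau, intermediates):
    """Detach intermediate nodes in order of increasing current degree."""
    heap = [(len(P[c]), c) for c in intermediates]
    heapq.heapify(heap)
    pending = set(intermediates)
    while heap:
        d, c = heapq.heappop(heap)
        if c not in pending or d != len(P[c]):
            continue                            # stale entry: skip it
        pending.discard(c)
        nbrs = [b for b in P[c] if b in pending]
        detach_node(P, tau, c)                  # the sketch given above
        for b in nbrs:                          # re-insert with updated keys
            heapq.heappush(heap, (len(P[b]), b))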

Random graphs provide an ideal testbed for the GT algorithm because they allow control over the graph density. A random graph, $ R_N$, is obtained by starting with a set of $ N$ nodes and adding edges between them at random [33]. In this work we used a random graph model in which each edge is chosen independently with probability $ \left<d\right>/(N-1)$, where $ \left <d\right >$ is the target value for the average degree.
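
A sketch of this construction follows; assigning uniform branching probabilities and unit waiting times to the non-sink nodes is an assumption made here purely for illustration, and random_chain is an illustrative name.

import random

def random_chain(N, d_mean, n_sinks=1, seed=0):
    """Link each pair {i, j} with probability <d>/(N - 1), so that the
    expected degree is <d>; nodes 0 .. n_sinks-1 act as sinks."""
    rng = random.Random(seed)
    p = d_mean / (N - 1)
    adj = {i: set() for i in range(N)}
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    # branching probabilities and waiting times for non-sink nodes only
    P = {i: {j: 1.0 / len(adj[i]) for j in adj[i]}
         for i in range(n_sinks, N) if adj[i]}
    tau = {i: 1.0 for i in P}
    return P, tau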

The complexity for removal of $ N$ nodes can then be expressed as

$\displaystyle \mathcal{O}\left(\log(N)\sum_{i \in \{1,2,3,\dots,N\}} \left( d_{c(i)}^2 \sum_{j \in Adj[c(i)]} d_{j,c(i)}\right)\right),$ (4.43)

where $ d_{c(i)}$ is the degree of the node, $ c(i)$, removed at iteration $ i$, $ Adj[c(i)]$ is its adjacency list, and $ d_{j,c(i)}$ is the degree of the $ j$th neighbour of that node at iteration $ i$. The computational cost given in Equation 4.43 is difficult to express in terms of the parameters of the original graph, as the cost of each iteration depends on the distribution of degrees, whose evolution in turn depends on the connectivity of the original graph in a non-trivial manner (see Figure 4.10). The storage requirements of the sparse-optimised version of the GT algorithm scale linearly with the number of edges.

Figure: Evolution of the distribution of degrees for random graphs of different expected degree, $ \left <d\right >=5,10,15$, as labelled. This is a colour-coded projection of the probability mass function [277,278], $ P(d)$, of the distribution of degrees as a function of the number of detached intermediate nodes, $ n$. The straight line shows $ P(d,n)$ for the complete graph $ K_{1000}$. All four graphs contain a single source, 999 intermediate nodes and a single sink. The transformation was performed using the sparse-optimised version of Algorithm B.3 with a Fibonacci-heap-based min-priority queue. It can be seen that as the intermediate nodes are detached, the density of the graph being transformed grows. The expected degree of the initial graph determines how soon the maximum density is reached.
[Image markov/devolution.eps; axes: $d$ (degree) and $n$ (number of detached nodes)]

To investigate the dependence of the cost of the GT method on the number of nodes, $ N$, we tested it on a series of random graphs $ R_N$ for different values of $ N$ and fixed average degree, $ \left <d\right >$. The results for three different values of $ \left <d\right >$ are shown in Figure 4.11. We chose $ \left <d\right >$ from the interval $ \left[3,5\right]$ because most of our stationary point databases have average local-minimum connectivities in this range.

Figure: CPU time needed to transform a sparse random graph $ R_{2N}$ using the GT approach described in Section 4.4 as a function of the number of intermediate nodes, $ N$. $ R_{2N}$ is composed of a single source node, $ N$ sink nodes and $ N-1$ intermediate nodes. For each value of $ N$ the data for three different values of the expected degree, $ \left <d\right >=3,4,5$, is shown, as labelled. Solid lines are analytic fits of the form $ c N^4$, where $ c=2.3\times 10^{-11}, 7.4\times 10^{-11}, 1.5\times 10^{-10}$ for $ \left <d\right >=3,4,5$, respectively. CPU time is in seconds.
[Image markov/CPUofN.ps; axes: CPU time / s versus $N$]

It can be seen from Figure 4.11 that for sparse random graphs $ R_N$ the cost scales as $ \mathcal{O}(N^4)$ with a small $ \left <d\right >$-dependent prefactor. The dependence of the computational complexity on $ \left <d\right >$ is illustrated in Figure 4.12.

Figure: CPU time needed to transform a sparse random graph $ R_{2N}$ using the GT approach as a function of the expected degree, $ \left <d\right >$. The data is shown for three graphs with $ N=500$, $ 750$ and $ 1000$, as labelled. $ R_{2N}$ is composed of a single source node, $ N$ sink nodes and $ N-1$ intermediate nodes.
[Image markov/CPUofd.ps; axes: CPU time / s versus $\left<d\right>$]

From Figure 4.10 it is apparent that at some point during the execution of the GT algorithm the graph reaches its maximum possible density. Once the graph is close to complete it is no longer efficient to employ a sparse-optimised algorithm. The most efficient approach we have found for sparse graphs is to use the sparse-optimised GT algorithm until the graph is dense enough, and then switch to Algorithm B.3. We will refer to this approach as SDGT. The change of data structures constitutes a negligible fraction of the total execution time. Figure 4.13 shows the CPU time as a function of the switching parameter $ R_s$.

Figure: CPU time as a function of the switching ratio $ R_s$, shown for random graphs of different expected degree, $ \left <d\right >=5,10,15$, as labelled. All three graphs contain a single source, 999 intermediate nodes and a single sink. The transformation was performed using the sparse-optimised version of Algorithm B.3 until the ratio $ d_{c(i)}/n(i)$ became greater than $ R_s$; the partially transformed graph was then converted into adjacency-matrix format and the transformation was continued with Algorithm B.3. The optimal value of $ R_s$ lies in the interval $ [0.07,0.1]$. Note the log$ _{10}$ scale on both axes.
[Image markov/CPUofRs.eps; axes: $\log_{10}$(CPU time / s) versus $\log_{10} R_s$]
Whenever the ratio $ d_{c(i)}/n(i)$, where $ d_{c(i)}$ is the degree of the intermediate node $ c$ detached at iteration $ i$ and $ n(i)$ is the number of nodes remaining on the heap at iteration $ i$, becomes greater than $ R_s$, the partially transformed graph is converted from the adjacency-list format into the adjacency-matrix format and the transformation is continued using Algorithm B.3. It can be seen from Figure 4.13 that for random graphs with a single sink, a single source and 999 intermediate nodes the optimal values of $ R_s$ lie in the interval $ [0.07,0.1]$.
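
In outline, the SDGT switch can be implemented as follows; dense_gt stands for the adjacency-matrix-based Algorithm B.3 and is a hypothetical name, while the heap bookkeeping follows the sketch given earlier.

import heapq

def sdgt(P, tau, intermediates, R_s=0.08):
    """Sparse phase of SDGT: detach nodes in min-degree order until
    d_c(i)/n(i) exceeds R_s, then hand over to the dense algorithm."""
    heap = [(len(P[c]), c) for c in intermediates]
    heapq.heapify(heap)
    pending = set(intermediates)
    while heap:
        d, c = heapq.heappop(heap)
        if c not in pending or d != len(P[c]):
            continue                            # stale entry: skip it
        if d / len(pending) > R_s:              # switching criterion
            return dense_gt(P, tau, pending)    # hypothetical dense phase
        pending.discard(c)
        nbrs = [b for b in P[c] if b in pending]
        detach_node(P, tau, c)
        for b in nbrs:
            heapq.heappush(heap, (len(P[b]), b))
    return P, tau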

