## Chapter 4: Ensembles of Rearrangement Pathways

I dreamed a thousand new paths…
I woke and walked my old one.
— Chinese proverb

### 4.1 Introduction

Stochastic processes are widely used to treat phenomena with random factors and noise. Markov processes are an important class of stochastic processes for which future transitions do not depend upon how the current state was reached. Markov processes restricted to a discrete (finite or countably infinite) state space are called Markov chains [123, 187, 188]. The parameter used to order the sequence of states visited by the chain is called the time parameter. Many interesting problems of chemical kinetics concern the analysis of finite-state samples of an otherwise infinite state space [9].

When analysing the kinetic databases obtained from discrete path sampling (DPS) studies [8] it can be difficult to extract the phenomenological rate constants for processes that occur over very long time scales [9]. DPS databases are composed of local minima of the potential energy surface (PES) and the transition states that connect them. While minima correspond to mechanically stable structures, the transition states specify how these structures interconvert and the corresponding rates. Whenever the potential energy barrier for the event of interest is large in comparison with ${k}_{B}T$, the event becomes rare, where $T$ is the temperature and ${k}_{B}$ is Boltzmann's constant.

The most important tools previously employed to extract kinetic information from a DPS stationary point database are the master equation [189], kinetic Monte Carlo (KMC) [190, 191] and matrix multiplication (MM) methods [8]. The system of linear master equations in its matrix formulation can be solved numerically to yield the time evolution of the occupation probabilities starting from an arbitrary initial distribution. This approach works well only for small problems, as the diagonalisation of the transition matrix, $P$, scales as the cube of the number of states [9]. In addition, numerical problems arise when the magnitude of the eigenvalues corresponding to the slowest relaxation modes approaches the precision of the zero eigenvalue corresponding to equilibrium [192]. The KMC approach is a stochastic technique that is commonly used to simulate the dynamics of various physical and chemical systems, examples being formation of crystal structures [193], nanoparticle growth [194] and diffusion [195]. The MM approach provides a way to sum contributions to phenomenological two-state rate constants from pathways that contain progressively more steps. It is based upon a steady-state approximation, and provides the corresponding solution to the linear master equation [189, 196]. The MM approach has been used to analyse DPS databases in a number of systems ranging from Lennard-Jones clusters [8, 10] to biomolecules [133, 197].

Both the standard KMC and MM formulations provide rates at a computational cost that generally grows exponentially as the temperature is decreased. In this chapter we describe alternative methods that are deterministic and formally exact, where the computational requirements are independent of the temperature and the time scale on which the process of interest takes place.

#### 4.1.1 Graph Theory Representation of a Finite-state Markov Chain

In general, to fully define a Markov chain it is necessary to specify all the possible states of the system and the rules for transitions between them. Graph theoretical representations of finite-state Markov chains are widely used [187, 198–200]. Here we adopt a digraph [154, 201] representation of a Markov chain, where nodes represent the states and edges represent the transitions of non-zero probability. The edge ${e}_{i,j}$ describes a transition from node $j$ to node $i$ and has a probability ${P}_{i,j}$ associated with it, which is commonly known as a routing or branching probability. A node can be connected to any number of other nodes. Two nodes of a graph are adjacent if there is an edge between them [202].

For digraphs all connections of a node are classified as incoming or outgoing. The total number of incoming connections is the in-degree of a node, while the total number of outgoing connections is the out-degree. In a symmetric digraph the in-degree and out-degree are the same for every node [154]. $\mathit{AdjIn}\left[i\right]$ is the set of indices of all nodes that are connected to node $i$ via incoming edges that finish at node $i$. Similarly, $\mathit{AdjOut}\left[i\right]$ is the set of the indices of all the nodes that are connected to node $i$ via outgoing edges from node $i$. The degree of a graph is the maximum degree of all of its nodes. The expectation value for the degree of an undirected graph is the average number of connections per node.

For any node $i$ the transition probabilities ${P}_{j,i}$ add up to unity,

 $\sum _{j}{P}_{j,i}=1,$ (4.1)

where the sum is over all $j\in \mathit{AdjOut}\left[i\right]$. Unless specified otherwise all sums are taken over the set of indices of adjacent nodes or, since the branching probability is zero for non-adjacent nodes, over the set of all the nodes.

In a computer program dense graphs are usually stored in the form of adjacency matrices [154]. For sparse graphs [201] a more compact but less efficient adjacency-lists-based data structure exists [154]. To store a graph representation of a Markov chain, in addition to connectivity information (available from the adjacency matrix), the branching probabilities must be stored. Hence for dense graphs the most convenient approach is to store a transition probability matrix [187] with transition probabilities for non-existent edges set to zero. For sparse graphs, both the adjacency list and a list of corresponding branching probabilities must be stored.
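As a concrete sketch of the two storage schemes (the node labels and branching probabilities below are invented for the example, and are not taken from any database discussed in this chapter):

```python
# Dense storage: full transition probability matrix, with zeros for
# non-existent edges. P_dense[i][j] holds the branching probability
# for the transition j -> i, following the convention of the text.
P_dense = [
    [0.0, 0.5, 0.0],
    [1.0, 0.0, 1.0],
    [0.0, 0.5, 0.0],
]

# Sparse storage: for each node j, an adjacency list of (i, P_ij)
# pairs for the outgoing edges j -> i, i.e. the adjacency list
# together with the corresponding branching probabilities.
adj_out = {
    0: [(1, 1.0)],
    1: [(0, 0.5), (2, 0.5)],
    2: [(1, 1.0)],
}

# In both representations the branching probabilities out of each
# node must satisfy Equation 4.1 (column sums / list sums of unity).
for j in range(3):
    assert abs(sum(P_dense[i][j] for i in range(3)) - 1.0) < 1e-12
    assert abs(sum(p for _, p in adj_out[j]) - 1.0) < 1e-12
```

For dense graphs the matrix gives constant-time access to any ${P}_{i,j}$, while the adjacency-list form stores only the non-zero entries, which matters when the average connectivity is much smaller than the number of nodes.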

#### 4.1.2 The Kinetic Monte Carlo Method

The KMC method can be used to generate a memoryless (Markovian) random walk and hence a set of trajectories connecting initial and final states in a DPS database. Many trajectories are necessary to collect appropriate statistics. Examples of pathway averages that are usually obtained with KMC are the mean path length and the mean first passage time. Here the KMC trajectory length is the number of states (local minima of the PES in the current context) that the walker encounters before reaching the final state. The first passage time is defined as the time that elapses before the walker reaches the final state. For a given KMC trajectory the first passage time is calculated as the sum of the mean waiting times in each of the states encountered.

Within the canonical Metropolis Monte Carlo approach a step is always taken if the proposed move lowers the energy, while steps that raise the energy are allowed with a probability that decreases with the energy difference between the current and proposed states [32]. An efficient way to propagate KMC trajectories was suggested by Bortz, Kalos, and Lebowitz (BKL) [190]. According to the BKL algorithm, a step is chosen in such a way that the ratios between transition probabilities of different events are preserved, but rejections are eliminated. Figure 4.1 explains this approach for a simple discrete-time Markov chain. The evolution of an ordinary KMC trajectory is monitored by the time parameter $n$, $n\in 𝕎$, which is incremented by one every time a transition from any state is made. The random walker is in state $1$ at time $n=0$. The KMC trajectory is terminated whenever an absorbing state is encountered. As ${P}_{1,1}$ approaches unity transitions out of state $1$ become rare. To ensure that every time a random number is generated (one of the most time consuming steps in a KMC calculation) a move is made to a neighbouring state we average over the transitions from state $1$ to itself to obtain the Markov chain depicted in Figure 4.1 (b).

Transitions from state $1$ to itself can be modelled by a Bernoulli process [33] with the probability of success equal to ${P}_{1,1}$. The average time for escape from state $1$ is obtained as

 ${\tau }_{1}=\left(1-{P}_{1,1}\right)\sum _{n=0}^{\infty }\left(n+1\right){\left({P}_{1,1}\right)}^{n}=\frac{1}{\left(1-{P}_{1,1}\right)},$ (4.2)

which can be used as a measure of the efficiency of trapping [203]. Transition probabilities out of state $1$ are renormalised:

 $\begin{array}{ccc}{P}_{\alpha ,{1}^{\prime }}\hfill & =\hfill & \frac{{P}_{\alpha ,1}}{1-{P}_{1,1}},\hfill \\ \hfill \\ \hfill \\ {P}_{\beta ,{1}^{\prime }}\hfill & =\hfill & \frac{{P}_{\beta ,1}}{1-{P}_{1,1}}.\hfill \end{array}$ (4.3)
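A rejection-free escape step of this kind is easy to sketch in Python. The function below is a minimal illustration (not production code); the state labels and probabilities in the usage example are invented, with the self-transition probability ${P}_{1,1}=0.9$ and two escape routes of probability $0.05$ each, so that the renormalised probabilities of Equation 4.3 are $0.5$ each and the mean escape time of Equation 4.2 is $10$:

```python
import random

def bkl_step(p_self, branches, rng=random.random):
    """One rejection-free (BKL-style) move out of a state with
    self-transition probability p_self. `branches` is a list of
    (target, probability) pairs summing to 1 - p_self. Returns
    (target, tau), where the target is drawn with the renormalised
    probabilities of Equation 4.3 and tau = 1/(1 - p_self) is the
    mean escape time of Equation 4.2."""
    tau = 1.0 / (1.0 - p_self)
    r = rng() * (1.0 - p_self)        # uniform on [0, 1 - p_self)
    acc = 0.0
    for target, p in branches:
        acc += p
        if r < acc:
            return target, tau
    return branches[-1][0], tau       # guard against rounding at the edge

# Invented example: P_11 = 0.9 with branches alpha and beta of
# probability 0.05 each; tau should be 10.
target, tau = bkl_step(0.9, [("alpha", 0.05), ("beta", 0.05)])
```

The single random number selects the destination directly, so no moves are wasted on the rejected self-transitions, while the clock advance of $\tau_1$ preserves the correct average dynamics.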

Similar ideas underlie the accelerated Monte Carlo algorithm suggested by Novotny [26]. According to this ‘Monte Carlo with absorbing Markov chains’ (MCAMC) method, at every step a Markov matrix, $P$, is formed, which describes the transitions in a subspace $S$ that contains the current state $\alpha$ , and a set of adjacent states that the random walker is likely to visit from $\alpha$. A trajectory length, $n$, for escape from $S$ is obtained by bracketing a uniformly distributed random variable, $r$, as

 $\sum _{\beta \in S}{\left[{P}^{n}\right]}_{\beta ,\alpha }<r\le \sum _{\beta \in S}{\left[{P}^{n-1}\right]}_{\beta ,\alpha }.$ (4.4)

Then an $n$-step leapfrog move is performed to one of the states $\gamma \notin S$ and the simulation clock is incremented by $n$. State $\gamma$ is chosen at random with probability

 $\frac{\sum _{\beta \in S}{R}_{\gamma ,\beta }{\left[{P}^{n-1}\right]}_{\beta ,\alpha }}{\sum _{{\gamma }^{\prime }\notin S}\sum _{\beta \in S}{R}_{{\gamma }^{\prime },\beta }{\left[{P}^{n-1}\right]}_{\beta ,\alpha }},$ (4.5)

where ${R}_{\gamma ,\alpha }$ is the transition probability from state $\alpha \in S$ to state $\gamma \notin S$. Both the BKL and MCAMC methods can be many orders of magnitude faster than the standard KMC method when kinetic traps are present.

In chemical kinetics transitions out of a state are described using a Poisson process, which can be considered a continuous-time analogue of Bernoulli trials. The transition probabilities are determined from the rates of the underlying transitions as

 ${P}_{j,i}=\frac{{k}_{j,i}}{\sum _{\alpha }{k}_{\alpha ,i}}.$ (4.6)

There may be several escape routes from a given state. Transitions from any state to directly connected states are treated as competing independent Poisson processes, which together generate a new Poisson distribution [179]. $n$ independent Poisson processes with rates ${k}_{1},\phantom{\rule{0.33em}{0ex}}{k}_{2},\phantom{\rule{0.33em}{0ex}}{k}_{3},\dots ,\phantom{\rule{0.33em}{0ex}}{k}_{n}$ combine to produce a Poisson process with rate $k={\sum }_{i=1}^{n}{k}_{i}$. The waiting time for a transition to occur to any connected state is then exponentially distributed as $k\mathrm{exp}\left(-\mathit{kt}\right)$ [204].

Given the exponential distribution of waiting times the mean waiting time in state $i$ before escape, ${\tau }_{i}$, is $1∕{\sum }_{j}{k}_{j,i}$ and the variance of the waiting time is simply ${\tau }_{i}^{2}$. Here ${k}_{j,i}$ is the rate constant for transitions from $i$ to $j$. When the average of the distribution of times is the property of interest, and not the distribution of times itself, it is sufficient to increment the simulation time by the mean waiting time rather than by a value drawn from the appropriate distribution [9, 205]. This modification to the original KMC formulation [206, 207] reduces the cost of the method and accelerates the convergence of KMC averages without affecting the results.
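The ideas of this section can be collected into a minimal KMC sketch that estimates a mean first passage time, incrementing the clock by the mean waiting time ${\tau }_{i}=1∕{\sum }_{j}{k}_{j,i}$ at each visited state. The rate constants below are invented for illustration; for the three-state chain $1\rightleftarrows 2\to 3$ used in the example the exact answer is $4$ time units (two visits on average to each of states $1$ and $2$, each with unit mean waiting time):

```python
import random

def kmc_mfpt(rates, source, sinks, ntraj=1000, seed=7):
    """Estimate the mean first passage time from `source` to the set
    `sinks` by KMC. `rates[i]` maps each neighbour j to the rate
    constant k_{j,i}; branching probabilities follow Equation 4.6,
    and the clock advances by the mean waiting time at each state."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(ntraj):
        state, t = source, 0.0
        while state not in sinks:
            out = rates[state]
            ktot = sum(out.values())
            t += 1.0 / ktot               # mean waiting time tau_i
            r = rng.random() * ktot       # select one competing Poisson process
            acc = 0.0
            for nxt, k in out.items():
                acc += k
                if r < acc:
                    break
            state = nxt
        total += t
    return total / ntraj

# Invented rates: state 1 always goes to 2; state 2 returns to 1 or
# escapes to the absorbing state 3 with equal rates.
rates = {1: {2: 1.0}, 2: {1: 0.5, 3: 0.5}}
mfpt = kmc_mfpt(rates, source=1, sinks={3})
```

With a few thousand trajectories the running average settles near the exact value; the statistical error decays only as the inverse square root of the number of trajectories, which motivates the deterministic alternatives developed below.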

#### 4.1.3 Discrete Path Sampling

The result of a DPS simulation is a database of local minima and transition states from the PES [8, 10]. To extract thermodynamic and kinetic properties from this database we require partition functions for the individual minima and rate constants, ${k}_{\alpha ,\beta }$, for the elementary transitions between adjacent minima $\beta$ and $\alpha$. We usually employ harmonic densities of states and statistical rate theory to obtain these quantities, but these details are not important here. To analyse the global kinetics we further assume Markovian transitions between adjacent local minima, which produces a set of linear (master) equations that governs the evolution of the occupation probabilities towards equilibrium [189, 196]

 $\frac{d{P}_{\alpha }\left(t\right)}{\mathit{dt}}=\sum _{\beta }{k}_{\alpha ,\beta }{P}_{\beta }\left(t\right)-{P}_{\alpha }\left(t\right)\sum _{\beta }{k}_{\beta ,\alpha },$ (4.7)

where ${P}_{\alpha }\left(t\right)$ is the occupation probability of minimum $\alpha$ at time $t$.

All the minima are classified into sets $A$, $B$ and $I$. When local equilibrium is assumed within the $A$ and $B$ sets we can write

 ${P}_{a}\left(t\right)=\frac{{P}_{a}^{\mathit{eq}}{P}_{A}\left(t\right)}{{P}_{A}^{\mathit{eq}}}\phantom{\rule{1em}{0ex}}\mathit{and}\phantom{\rule{1em}{0ex}}{P}_{b}\left(t\right)=\frac{{P}_{b}^{\mathit{eq}}{P}_{B}\left(t\right)}{{P}_{B}^{\mathit{eq}}},$ (4.8)

where ${P}_{A}\left(t\right)={\sum }_{a\in A}{P}_{a}\left(t\right)$ and ${P}_{B}\left(t\right)={\sum }_{b\in B}{P}_{b}\left(t\right)$. If the steady-state approximation is applied to all the intervening states $i\in I=\left\{{i}_{1},{i}_{2},{i}_{3},\dots ,{i}_{{n}_{i}}\right\}$, so that

 $\frac{d{P}_{i}\left(t\right)}{\mathit{dt}}=0,$ (4.9)

then Equation 4.7 can be written as [9]

 $\begin{array}{ccc}\frac{d{P}_{A}\left(t\right)}{\mathit{dt}}\hfill & =\hfill & {k}_{A,B}{P}_{B}\left(t\right)-{k}_{B,A}{P}_{A}\left(t\right),\hfill \\ \hfill \\ \hfill \\ \frac{d{P}_{B}\left(t\right)}{\mathit{dt}}\hfill & =\hfill & {k}_{B,A}{P}_{A}\left(t\right)-{k}_{A,B}{P}_{B}\left(t\right).\hfill \end{array}$ (4.10)

The rate constants ${k}_{A,B}$ and ${k}_{B,A}$ for forward and backward transitions between states $A$ and $B$ are the sums over all possible paths within the set of intervening minima of the products of the branching probabilities corresponding to the elementary transitions for each path:

 $\begin{array}{ccc}{k}_{A,B}^{\mathit{DPS}}\hfill & =\hfill & \sum _{a←b}^{\prime }\frac{{k}_{a,{i}_{1}}}{\sum _{{\alpha }_{1}}{k}_{{\alpha }_{1},{i}_{1}}}\frac{{k}_{{i}_{1},{i}_{2}}}{\sum _{{\alpha }_{2}}{k}_{{\alpha }_{2},{i}_{2}}}\cdots \frac{{k}_{{i}_{n-1},{i}_{n}}}{\sum _{{\alpha }_{n}}{k}_{{\alpha }_{n},{i}_{n}}}\frac{{k}_{{i}_{n},b}\phantom{\rule{0.3em}{0ex}}{P}_{b}^{\mathit{eq}}}{{P}_{B}^{\mathit{eq}}}\hfill \\ \hfill \\ \hfill \\ \hfill & =\hfill & \sum _{a←b}^{\prime }{P}_{a,{i}_{1}}{P}_{{i}_{1},{i}_{2}}\cdots {P}_{{i}_{n-1},{i}_{n}}\frac{{k}_{{i}_{n},b}\phantom{\rule{0.3em}{0ex}}{P}_{b}^{\mathit{eq}}}{{P}_{B}^{\mathit{eq}}},\hfill \end{array}$ (4.11)

and similarly for ${k}_{B,A}$ [8]. The sum is over all paths that begin from a state $b\in B$ and end at a state $a\in A$, and the prime indicates that paths are not allowed to revisit states in $B$. In previous contributions [8, 10, 133, 197] this sum was evaluated using a weighted adjacency matrix multiplication (MM) method, which will be reviewed in Section 4.2.

#### 4.1.4 KMC and DPS Averages

We now show that the evaluation of the DPS sum in Equation 4.11 and the calculation of KMC averages are two closely related problems.

For KMC simulations we define sources and sinks that coincide with the set of initial states $B$ and final states $A$, respectively. Every cycle of KMC simulation involves the generation of a single KMC trajectory connecting a node $b\in B$ and a node $a\in A$. A source node $b$ is chosen from set $B$ with probability ${P}_{b}^{\mathit{eq}}∕{P}_{B}^{\mathit{eq}}$.

We can formulate the calculation of the mean first passage time from $B$ to $A$ in graph theoretical terms as follows. Let the digraph consisting of nodes for all local minima and edges for each transition state be $\mathsc{𝒢}$. The digraph consisting of all nodes except those belonging to region $A$ is denoted by $G$. We assume that there are no isolated nodes in $\mathsc{𝒢}$, so that all the nodes in $A$ can be reached from every node in $G$. Suppose we start a KMC simulation from a particular node $\beta \in G$. Let ${P}_{\alpha }\left(n\right)$ be the expected occupation probability of node $\alpha$ after $n$ KMC steps, with initial conditions ${P}_{\beta }\left(0\right)=1$ and ${P}_{\alpha \ne \beta }\left(0\right)=0$. We further define an escape probability for each $\alpha \in G$ as the sum of branching probabilities to nodes in $A$, i.e.

 ${\mathsc{ℰ}}_{\alpha }^{G}=\sum _{a\in A}{P}_{a,\alpha }.$ (4.12)

KMC trajectories terminate when they arrive at an $A$ minimum, and the expected probability transfer to the $A$ region at the $n$th KMC step is ${\sum }_{\alpha \in G}{\mathsc{ℰ}}_{\alpha }^{G}{P}_{\alpha }\left(n\right)$. If there is at least one escape route from $G$ to $A$ with a non-zero branching probability, then eventually all the occupation probabilities in $G$ must tend to zero and

 ${\Sigma }_{\beta }^{G}=\sum _{n=0}^{\infty }\sum _{\alpha \in G}{\mathsc{ℰ}}_{\alpha }^{G}{P}_{\alpha }\left(n\right)=1.$ (4.13)

We now rewrite ${P}_{\alpha }\left(n\right)$ as a sum over all $n$-step paths that start from $\beta$ and end at $\alpha$ without leaving $G$. Each path contributes to ${P}_{\alpha }\left(n\right)$ according to the appropriate product of $n$ branching probabilities, so that

 $\begin{array}{ccc}{\Sigma }_{\beta }^{G}\hfill & =\hfill & \sum _{\alpha \in G}{\mathsc{ℰ}}_{\alpha }^{G}\sum _{n=0}^{\infty }{P}_{\alpha }\left(n\right)\hfill \\ \hfill \\ \hfill \\ \hfill & =\hfill & \sum _{\alpha \in G}{\mathsc{ℰ}}_{\alpha }^{G}\sum _{n=0}^{\infty }\phantom{\rule{0.3em}{0ex}}\sum _{\Xi \left(n\right)}{P}_{\alpha ,{i}_{n-1}}{P}_{{i}_{n-1},{i}_{n-2}}\cdots {P}_{{i}_{2},{i}_{1}}{P}_{{i}_{1},\beta }\hfill \\ \hfill \\ \hfill \\ \hfill & =\hfill & \sum _{\alpha \in G}{\mathsc{ℰ}}_{\alpha }^{G}{\mathsc{𝒮}}_{\alpha ,\beta }^{G}=1,\hfill \end{array}$ (4.14)

where $\Xi \left(n\right)$ denotes the set of $n$-step paths that start from $\beta$ and end at $\alpha$ without leaving $G$, and the last line defines the pathway sum ${\mathsc{𝒮}}_{\alpha ,\beta }^{G}$.

It is clear from the last line of Equation 4.14 that for fixed $\beta$ the quantities ${\mathsc{ℰ}}_{\alpha }^{G}{\mathsc{𝒮}}_{\alpha ,\beta }^{G}$ define a probability distribution. However, the pathway sums ${\mathsc{𝒮}}_{\alpha ,\beta }^{G}$ are not probabilities, and may be greater than unity. In particular, ${\mathsc{𝒮}}_{\beta ,\beta }^{G}\ge 1$ because the path of zero length is included, which contributes one to the sum. Furthermore, the normalisation condition on the last line of Equation 4.14 places no restriction on ${\mathsc{𝒮}}_{\alpha ,\beta }^{G}$ terms for which ${\mathsc{ℰ}}_{\alpha }^{G}$ vanishes.
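These identities are straightforward to verify numerically for a small example, because when the $A$ region is absorbing the pathway sums form a geometric series, ${\mathsc{𝒮}}_{\alpha ,\beta }^{G}={\sum }_{n}{\left[{P}_{G}^{n}\right]}_{\alpha ,\beta }={\left[{\left(I-{P}_{G}\right)}^{-1}\right]}_{\alpha ,\beta }$, where ${P}_{G}$ is the branching probability matrix restricted to $G$. The NumPy sketch below uses invented probabilities for a four-state system with one absorbing node:

```python
import numpy as np

# P[i, j] is the branching probability for j -> i; the last column is
# the absorbing A state, so its outgoing probabilities are all zero.
P = np.array([
    [0.0, 0.4, 0.0, 0.0],
    [1.0, 0.0, 0.7, 0.0],
    [0.0, 0.6, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.0],
])
PG = P[:3, :3]                        # restriction to the subgraph G
S = np.linalg.inv(np.eye(3) - PG)     # pathway sums: geometric series of PG
E = P[3, :3]                          # escape probabilities into A (Eq. 4.12)
sigma = E @ S                         # Sigma_beta^G for each starting node

assert np.allclose(sigma, 1.0)        # Equation 4.13: all probability escapes
assert np.all(np.diag(S) >= 1.0)      # zero-length path contributes unity
```

The diagonal entries of $S$ exceed unity, as noted above, while the products ${\mathsc{ℰ}}_{\alpha }^{G}{\mathsc{𝒮}}_{\alpha ,\beta }^{G}$ sum to one for every starting node $\beta$.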

We can also define a probability distribution for individual pathways. Let ${\mathsc{𝒲}}_{\xi }$ be the product of branching probabilities associated with a path $\xi$ so that

 ${\mathsc{𝒮}}_{\alpha ,\beta }^{G}=\sum _{n=0}^{\infty }\sum _{\xi \in \Xi \left(n\right)}{\mathsc{𝒲}}_{\xi }\equiv \sum _{\xi \in \alpha ←\beta }{\mathsc{𝒲}}_{\xi },$ (4.15)

where $\alpha ←\beta$ is the set of all appropriate paths from $\beta$ to $\alpha$ of any length that can visit and revisit any node in $G$. If we focus upon paths starting from minima in region $B$

 $\sum _{b\in B}\frac{{P}_{b}^{\mathit{eq}}}{{P}_{B}^{\mathit{eq}}}\sum _{\alpha \in G}{\mathsc{ℰ}}_{\alpha }^{G}\sum _{\xi \in \alpha ←b}{\mathsc{𝒲}}_{\xi }=\sum _{b\in B}\frac{{P}_{b}^{\mathit{eq}}}{{P}_{B}^{\mathit{eq}}}\sum _{\alpha \in {G}_{A}}{\mathsc{ℰ}}_{\alpha }^{G}\sum _{\xi \in \alpha ←b}{\mathsc{𝒲}}_{\xi }=1,$ (4.16)

where ${G}_{A}$ is the set of nodes in $G$ that are adjacent to $A$ minima in the complete graph $\mathsc{𝒢}$, since ${\mathsc{ℰ}}_{\alpha }^{G}$ vanishes for all other nodes. We can rewrite this sum as

 $\sum _{\xi \in {G}_{A}←B}\frac{{P}_{b}^{\mathit{eq}}}{{P}_{B}^{\mathit{eq}}}{\mathsc{ℰ}}_{\alpha }^{G}{\mathsc{𝒲}}_{\xi }=\sum _{\xi \in A←B}\frac{{P}_{b}^{\mathit{eq}}}{{P}_{B}^{\mathit{eq}}}{\mathsc{𝒲}}_{\xi }=\sum _{\xi \in A←B}{\mathsc{𝒫}}_{\xi }=1,$ (4.17)

which defines the non-zero pathway probabilities ${\mathsc{𝒫}}_{\xi }$ for all paths that start from any node in set $B$ and finish at any node in set $A$. The paths $\xi \in A←B$ can revisit any minima in the $G$ set, but include just one $A$ minimum at the terminus. Note that ${\mathsc{𝒲}}_{\xi }$ and ${\mathsc{𝒫}}_{\xi }$ can be used interchangeably if there is only one state in set $B$.

The average of some property, ${Q}_{\xi }$, defined for each KMC trajectory, $\xi$, can be calculated from the ${\mathsc{𝒫}}_{\xi }$ as

 $\left\langle Q\right\rangle =\sum _{\xi \in A←B}{\mathsc{𝒫}}_{\xi }{Q}_{\xi }.$ (4.18)

Of course, KMC simulations avoid this complete enumeration by generating trajectories with probabilities proportional to ${\mathsc{𝒫}}_{\xi }$, so that a simple running average can be used to calculate $\left\langle Q\right\rangle$. In the following sections we will develop alternative approaches based upon evaluating the complete sum, which become increasingly efficient at low temperature. We emphasise that these methods are only applicable to problems with a finite number of states, which are assumed to be known in advance.

The evaluation of the DPS sum defined in Equation 4.11 can also be rewritten in terms of pathway probabilities:

 $\begin{array}{ccc}{k}_{A,B}^{\mathit{DPS}}\hfill & =\hfill & \sum _{n=0}^{\infty }\sum _{\Xi \left(n\right)}^{\prime }{P}_{\alpha ,{i}_{1}}{P}_{{i}_{1},{i}_{2}}\cdots {P}_{{i}_{n-1},{i}_{n}}\frac{{k}_{{i}_{n},\beta }\phantom{\rule{0.3em}{0ex}}{P}_{\beta }^{\mathit{eq}}}{{P}_{B}^{\mathit{eq}}},\hfill \\ \hfill \\ \hfill \\ \hfill & =\hfill & \sum _{n=0}^{\infty }\sum _{\Xi \left(n\right)}^{\prime }{P}_{\alpha ,{i}_{1}}{P}_{{i}_{1},{i}_{2}}\cdots {P}_{{i}_{n-1},{i}_{n}}{P}_{{i}_{n},\beta }{\tau }_{\beta }^{-1}\frac{{P}_{\beta }^{\mathit{eq}}}{{P}_{B}^{\mathit{eq}}}\hfill \\ \hfill \\ \hfill \\ \hfill & =\hfill & \sum _{\xi \in A←B}^{\prime }{\mathsc{𝒫}}_{\xi }{\tau }_{\beta }^{-1},\hfill \end{array}$ (4.19)

where the prime on the summation indicates that the paths are not allowed to revisit the $B$ region. We have also used the fact that ${k}_{{i}_{n},b}={P}_{{i}_{n},b}∕{\tau }_{b}$.

A digraph representation of the restricted set of pathways defined in Equation 4.19 can be created if we allow sets of sources and sinks to overlap. In that case all the nodes in $A\cup B$ are defined to be sinks and all the nodes in $B$ are the sources, i.e. every node in set $B$ is both a source and a sink. The required sum then includes all the pathways that finish at sinks of type $A$, but not those that finish at sinks of type $B$. The case when sets of sources and sinks (partially) overlap is discussed in detail in Section 4.6.

#### 4.1.5 Mean Escape Times

Since the mean first passage time between states $B$ and $A$, or the escape time from a subgraph, is of particular interest, we first illustrate a means to derive formulae for these quantities in terms of pathway probabilities.

The average time taken to traverse a path $\xi ={\alpha }_{1},{\alpha }_{2},{\alpha }_{3},\dots ,{\alpha }_{l\left(\xi \right)}$ is calculated as ${𝔱}_{\xi }={\tau }_{{\alpha }_{1}}+{\tau }_{{\alpha }_{2}}+\cdots +{\tau }_{{\alpha }_{l\left(\xi \right)-1}}$, where ${\tau }_{\alpha }$ is the mean waiting time for escape from node $\alpha$, as above, ${\alpha }_{k}$ identifies the $k$th node along path $\xi$, and $l\left(\xi \right)$ is the length of path $\xi$. The mean escape time from a graph $G$ if started from node $\beta$ is then

 ${\mathsc{𝒯}}_{\beta }^{G}=\sum _{\xi \in A←\beta }{\mathsc{𝒫}}_{\xi }{𝔱}_{\xi }.$ (4.20)

If we multiply every branching probability, ${P}_{\alpha ,\beta }$, that appears in ${\mathsc{𝒫}}_{\xi }$ by $\mathrm{exp}\left(\zeta {\tau }_{\beta }\right)$ then the mean escape time can be obtained as:

 ${\mathsc{𝒯}}_{\beta }^{G}={\left[\frac{d}{d\zeta }\sum _{\xi \in A←\beta }{\mathsc{𝒫}}_{\xi }{e}^{\zeta {𝔱}_{\xi }}\right]}_{\zeta =0}.$ (4.21)

This approach is useful if we have analytic results for the total probability ${\Sigma }_{\beta }^{G}$, which may then be manipulated into formulae for ${\mathsc{𝒯}}_{\beta }^{G}$, and is standard practice in the probability theory literature [208, 209]. The quantity ${P}_{\alpha ,\beta }{e}^{\zeta {\tau }_{\beta }}$ is similar to the ‘$\zeta$ probability’ described in Reference [208]. Analogous techniques are usually employed to obtain ${\mathsc{𝒯}}_{\beta }^{G}$ and higher moments of the first-passage time distribution from analytic expressions for the first-passage probability generating function (see, for example, References [210, 211]). We now define ${\stackrel{̃}{P}}_{\alpha ,\beta }={P}_{\alpha ,\beta }{e}^{\zeta {\tau }_{\beta }}$ and the related quantities

 $\begin{array}{ccc}\hfill {\stackrel{̃}{\mathsc{ℰ}}}_{\alpha }^{G}& =\hfill & \sum _{a\in A}{\stackrel{̃}{P}}_{a,\alpha }={\mathsc{ℰ}}_{\alpha }^{G}{e}^{\zeta {\tau }_{\alpha }},\hfill \\ \hfill \\ \hfill \\ \hfill {\stackrel{̃}{\mathsc{𝒲}}}_{\xi }& =\hfill & {\stackrel{̃}{P}}_{{\alpha }_{l\left(\xi \right)},{\alpha }_{l\left(\xi \right)-1}}{\stackrel{̃}{P}}_{{\alpha }_{l\left(\xi \right)-1},{\alpha }_{l\left(\xi \right)-2}}\dots {\stackrel{̃}{P}}_{{\alpha }_{2},{\alpha }_{1}}={\mathsc{𝒲}}_{\xi }{e}^{\zeta {𝔱}_{\xi }},\hfill \\ \hfill \\ \hfill \\ \hfill {\stackrel{̃}{\mathsc{𝒫}}}_{\xi }& =\hfill & {\stackrel{̃}{\mathsc{𝒲}}}_{\xi }{P}_{b}^{\mathit{eq}}∕{P}_{B}^{\mathit{eq}},\hfill \\ \hfill \\ \hfill \\ \hfill {\stackrel{̃}{\mathsc{𝒮}}}_{\alpha ,\beta }^{G}& =\hfill & \sum _{\xi \in \alpha ←\beta }{\stackrel{̃}{\mathsc{𝒲}}}_{\xi },\hfill \\ \hfill \\ \hfill \\ \hfill \mathit{and}\phantom{\rule{1em}{0ex}}{\stackrel{̃}{\Sigma }}_{\beta }^{G}& =\hfill & \sum _{\alpha \in G}{\stackrel{̃}{\mathsc{ℰ}}}_{\alpha }^{G}{\stackrel{̃}{\mathsc{𝒮}}}_{\alpha ,\beta }^{G}.\hfill \end{array}$ (4.22)

Note that ${\stackrel{̃}{P}}_{\alpha ,\beta }{|}_{\zeta =0}={P}_{\alpha ,\beta }$, ${\stackrel{̃}{\Sigma }}_{\beta }^{G}{|}_{\zeta =0}={\Sigma }_{\beta }^{G}$, etc., while the mean escape time can now be written as

 ${\mathsc{𝒯}}_{\beta }^{G}={\left[\frac{d{\stackrel{̃}{\Sigma }}_{\beta }^{G}}{d\zeta }\right]}_{\zeta =0}.$ (4.23)
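The $\zeta$-derivative device of Equation 4.23 can be checked numerically with a central finite difference: scaling each branching probability by ${e}^{\zeta {\tau }_{\beta }}$, differentiating ${\stackrel{̃}{\Sigma }}_{\beta }^{G}$ at $\zeta =0$, and comparing with the direct expression ${\mathsc{𝒯}}_{\beta }^{G}={\sum }_{\alpha }{\tau }_{\alpha }{\mathsc{𝒮}}_{\alpha ,\beta }^{G}$, which follows because each visit to node $\alpha$ contributes ${\tau }_{\alpha }$ on average. The waiting times and probabilities below are invented for the check:

```python
import numpy as np

# P[i, j] is the branching probability for j -> i; the last state is
# the absorbing A region. Waiting times tau are for the G nodes only.
P = np.array([
    [0.0, 0.4, 0.0, 0.0],
    [1.0, 0.0, 0.7, 0.0],
    [0.0, 0.6, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.0],
])
tau = np.array([2.0, 1.0, 0.5])

def sigma_tilde(zeta):
    scale = np.exp(zeta * tau)       # e^{zeta tau_beta}, one factor per column
    PG = P[:3, :3] * scale           # P~_{i,j} = P_{i,j} e^{zeta tau_j}
    E = P[3, :3] * scale             # E~_alpha = E_alpha e^{zeta tau_alpha}
    return E @ np.linalg.inv(np.eye(3) - PG)   # Sigma~ for each start node

h = 1e-6
T_deriv = (sigma_tilde(h) - sigma_tilde(-h)) / (2 * h)   # d/dzeta at zeta = 0
T_direct = tau @ np.linalg.inv(np.eye(3) - P[:3, :3])    # sum_a tau_a S_{a,b}
assert np.allclose(T_deriv, T_direct, rtol=1e-5)
```

At $\zeta =0$ the scaled quantities reduce to the originals, and the derivative pulls down exactly one factor of ${𝔱}_{\xi }$ per pathway, reproducing Equation 4.20.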

In the remaining sections we show how to calculate the pathway probabilities, ${\mathsc{𝒫}}_{\xi }$, exactly, along with pathway averages, such as the waiting time. Chain graphs are treated in Section 4.2 and the results are generalised to arbitrary graphs in Section 4.3.

### 4.2 Chain Graphs

A general account of the problem of the first passage time in chemical physics was given by Weiss as early as 1965 [212]. In Reference [212] he summarised various techniques for solving such problems to date, and gave a general formula for moments of the first passage time in terms of the Green's function of the Fokker–Planck operator. Explicit expressions for the mean first passage times in terms of the basic transition probabilities for the case of a one-dimensional random walk were obtained by Ledermann and Reuter in 1954 [213], Karlin and McGregor in 1957 [214], Weiss in 1965 [212], Gardiner in 1985 [215], Van den Broeck in 1988 [216], Le Doussal in 1989 [217], Murthy and Kehr, and Matan and Havlin in 1989–1990 [211, 218, 219], Raykin in 1992 [210], Bar-Haim and Klafter in 1998 [203], Pury and Cáceres in 2003 [220], and Slutsky, Kardar and Mirny in 2004 [221, 222]. The one-dimensional random walk is therefore a very well researched topic, both in the field of probability and in its physical and chemical applications. The results presented in this section differ mainly in the manner of presentation.

A random walk in a discrete state space $S$, where all the states can be arranged on a linear chain in such a way that ${P}_{i,j}=0$ for all $|i-j|>1$, is called a one-dimensional or simple random walk (SRW). The SRW model attracts a lot of attention because of its rich behaviour and mathematical tractability. A well known example of its complexity is the anomalous diffusion law discovered by Sinai [223]. He showed that there is a dramatic slowing down of ordinary power-law diffusion (the RMS displacement grows as ${\left(\mathrm{log}t\right)}^{2}$ instead of ${t}^{1∕2}$) if a random walker at each site $i$ experiences a random bias field ${B}_{i}={P}_{i,i+1}-{P}_{i,i-1}$. Stanley and Havlin generalised the Sinai model by introducing long-range correlations between the bias fields on each site and showed that the SRW can span a range of diffusive properties [224].

Although one-dimensional transport is rarely found on the macroscopic scale, at a microscopic level there are several examples, such as kinesin motion along microtubules [225, 226] or DNA translocation through a nanopore [227, 228], so the SRW is interesting not only from a theoretical standpoint. There are a number of models that build upon the SRW with exciting applications, examples being the SRW with branching and annihilation [229], and the SRW in the presence of random trappings [230]. Techniques developed for the SRW have been applied to more complex cases, such as multistage transport in liquids [231], random walks on fractals [232, 233], even-visiting random walks [234], self-avoiding random walks [235], random walks on percolation clusters [236, 237], and random walks on simple regular lattices [238, 239] and superlattices [240].

A presentation that discusses SRW first-passage probabilities in detail sufficient for our applications is that of Raykin [210]. He considered pathway ensembles explicitly and obtained the generating functions for the occupation probabilities of any lattice site for infinite, half-infinite and finite one-dimensional lattices with the random walker starting from an arbitrary lattice site. As we discuss below, these results have a direct application to the evaluation of the DPS rate constants augmented with recrossings. We have derived equivalent expressions for the first-passage probabilities independently, considering the finite rather than the infinite case, which we discuss in terms of chain digraphs below.

We define a chain as a digraph ${C}_{N}=\left(V,E\right)$ with $N$ nodes and $2\left(N-1\right)$ edges, where $V=\left\{{v}_{1},{v}_{2},\dots ,{v}_{N}\right\}$ and $E=\left\{{e}_{1,2},{e}_{2,1},{e}_{2,3},{e}_{3,2},\dots ,{e}_{N-1,N},{e}_{N,N-1}\right\}$. Adjacent nodes in the numbering scheme are connected via two oppositely directed edges, and these are the only connections. A transition probability ${P}_{\alpha ,\beta }$ is associated with every edge ${e}_{\alpha ,\beta }$, as described in Section 4.1.1. An $N$-node chain is depicted in Figure 4.2 as a subgraph of another graph.

The total probability of escape from the chain via node $N$ if started from node $1$ is of interest because it has previously been used to associate contributions to the total rate constant from unique paths in DPS studies [8, 9]. We can restrict the sampling to paths without recrossings between intermediate minima if we perform the corresponding recrossing sums explicitly [8].

We denote a pathway in ${C}_{N}$ by the ordered sequence of node indices. The length of a pathway is the number of edges traversed. For example, the pathway $1,2,1,2,3,2,3$ has length $6$, starts at node $1$ and finishes at node $3$. The indices of the intermediate nodes $2,1,2,3,2$ are given in the order in which they are visited. The product of branching probabilities associated with all edges in a path was defined above as ${\mathsc{𝒲}}_{\xi }$. For example, the product for the above pathway is ${P}_{3,2}{P}_{2,3}{P}_{3,2}{P}_{2,1}{P}_{1,2}{P}_{2,1}$, which we can abbreviate as ${\mathsc{𝒲}}_{3,2,3,2,1,2,1}$. For a chain graph ${C}_{N}$, which is a subgraph of $\mathsc{𝒢}$, we also define the set of indices of nodes in $\mathsc{𝒢}$ that are adjacent to nodes in ${C}_{N}$ but not members of ${C}_{N}$ as $\mathit{Adj}\left[{C}_{N}\right]$. These nodes will be considered as sinks if we are interested in escape from ${C}_{N}$.

Analytical results for ${C}_{3}$ are easily derived:

 $\begin{array}{ccc}{\mathsc{𝒮}}_{1,1}^{{C}_{3}}\hfill & =\hfill & \sum _{n=0}^{\infty }{\left({P}_{1,2}{P}_{2,1}\sum _{m=0}^{\infty }{\left({P}_{2,3}{P}_{3,2}\right)}^{m}\right)}^{n}=\frac{1-{\mathsc{𝒲}}_{2,3,2}}{1-{\mathsc{𝒲}}_{1,2,1}-{\mathsc{𝒲}}_{2,3,2}},\hfill \\ \hfill \\ {\mathsc{𝒮}}_{2,1}^{{C}_{3}}\hfill & =\hfill & \frac{{P}_{2,1}}{1-{\mathsc{𝒲}}_{1,2,1}-{\mathsc{𝒲}}_{2,3,2}},\hfill \\ \hfill \\ {\mathsc{𝒮}}_{3,1}^{{C}_{3}}\hfill & =\hfill & \frac{{\mathsc{𝒲}}_{3,2,1}}{1-{\mathsc{𝒲}}_{1,2,1}-{\mathsc{𝒲}}_{2,3,2}},\hfill \\ \hfill \\ {\mathsc{𝒮}}_{2,2}^{{C}_{3}}\hfill & =\hfill & \sum _{n=0}^{\infty }{\left({\mathsc{𝒲}}_{1,2,1}+{\mathsc{𝒲}}_{2,3,2}\right)}^{n}=\frac{1}{1-{\mathsc{𝒲}}_{1,2,1}-{\mathsc{𝒲}}_{2,3,2}},\hfill \\ \hfill \\ {\mathsc{𝒮}}_{3,2}^{{C}_{3}}\hfill & =\hfill & {P}_{3,2}\sum _{n=0}^{\infty }{\left({\mathsc{𝒲}}_{1,2,1}+{\mathsc{𝒲}}_{2,3,2}\right)}^{n}=\frac{{P}_{3,2}}{1-{\mathsc{𝒲}}_{1,2,1}-{\mathsc{𝒲}}_{2,3,2}}.\hfill \end{array}$ (4.24)

These sums converge if the cardinality of the set $\mathit{Adj}\left[{C}_{3}\right]$ is greater than zero. To prove this result consider a factor, $f$, of the form

 $f={P}_{\alpha ,\beta }{P}_{\beta ,\alpha }\sum _{m=0}^{\infty }{\left({P}_{\beta ,\gamma }{P}_{\gamma ,\beta }\right)}^{m},$ (4.25)

and assume that the branching probabilities are all non-zero, and that there is at least one additional escape route from $\alpha$, $\beta$ or $\gamma$. We know that ${P}_{\beta ,\gamma }{P}_{\gamma ,\beta }<{P}_{\gamma ,\beta }<1$ because ${P}_{\alpha ,\beta }+{P}_{\gamma ,\beta }\le 1$ and ${P}_{\alpha ,\beta }\ne 0$. Hence $f={P}_{\alpha ,\beta }{P}_{\beta ,\alpha }∕\left(1-{P}_{\beta ,\gamma }{P}_{\gamma ,\beta }\right)$. However, ${P}_{\alpha ,\beta }{P}_{\beta ,\alpha }+{P}_{\beta ,\gamma }{P}_{\gamma ,\beta }\le {P}_{\alpha ,\beta }+{P}_{\gamma ,\beta }\le 1$, and equality is only possible if ${P}_{\beta ,\alpha }={P}_{\beta ,\gamma }={P}_{\alpha ,\beta }+{P}_{\gamma ,\beta }=1$, which contradicts the assumption of an additional escape route. Hence ${P}_{\alpha ,\beta }{P}_{\beta ,\alpha }<1-{P}_{\beta ,\gamma }{P}_{\gamma ,\beta }$ and $f<1$. The pathway sums ${\mathsc{𝒮}}_{1,2}^{{C}_{3}}$, ${\mathsc{𝒮}}_{1,3}^{{C}_{3}}$, ${\mathsc{𝒮}}_{2,3}^{{C}_{3}}$ and ${\mathsc{𝒮}}_{3,3}^{{C}_{3}}$ can be obtained from Equation 4.24 by permuting the indices. The last two sums in Equation 4.24 are particularly instructive: the $n$th term in the sum for ${\mathsc{𝒮}}_{2,2}^{{C}_{3}}$ and the $n$th term in the sum for ${\mathsc{𝒮}}_{3,2}^{{C}_{3}}$ are the contributions from pathways of length $2n$ and $2n+1$, respectively.
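The ${C}_{3}$ results of Equation 4.24 can be confirmed numerically: with the sinks in $\mathit{Adj}\left[{C}_{3}\right]$ excluded from the matrix, the series ${\sum }_{n}{P}^{n}$ equals ${\left(I-P\right)}^{-1}$, whose elements are the pathway sums. The branching probabilities below are invented, chosen so that each node has some probability of escape to a sink:

```python
import numpy as np

# Invented branching probabilities for C3; column sums are below unity
# because each node also has an escape probability to Adj[C3].
P21, P12, P32, P23 = 0.9, 0.4, 0.5, 0.8
P = np.array([
    [0.0, P12, 0.0],
    [P21, 0.0, P23],
    [0.0, P32, 0.0],
])
S = np.linalg.inv(np.eye(3) - P)     # S[a-1, b-1] = pathway sum S_{a,b}^{C3}

W121, W232, W321 = P12 * P21, P23 * P32, P32 * P21
denom = 1.0 - W121 - W232
assert np.isclose(S[0, 0], (1.0 - W232) / denom)   # S_{1,1}
assert np.isclose(S[1, 0], P21 / denom)            # S_{2,1}
assert np.isclose(S[2, 0], W321 / denom)           # S_{3,1}
assert np.isclose(S[1, 1], 1.0 / denom)            # S_{2,2}
assert np.isclose(S[2, 1], P32 / denom)            # S_{3,2}
```

The common denominator $1-{\mathsc{𝒲}}_{1,2,1}-{\mathsc{𝒲}}_{2,3,2}$ is simply the determinant of $I-P$ for this three-node chain, which is why every pathway sum shares it.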

The pathway sums ${\mathsc{𝒮}}_{\alpha ,\beta }^{{C}_{N}}$ can be derived for a general chain graph ${C}_{N}$ in terms of recursion relations, as shown in Appendix C. The validity of our results for ${C}_{N}$ was verified numerically using the matrix multiplication method described in Reference [8]. For a chain of length $N$ we construct an $N×N$ transition probability matrix $P$ with elements

 ${P}_{i,j}=\left\{\begin{array}{cc}{P}_{i,j}\hfill & \text{if}\phantom{\rule{0.3em}{0ex}}\left|i-j\right|=1,\hfill \\ 0\hfill & \text{otherwise}.\hfill \end{array}\right.$ (4.26)

The matrix form of the system of Chapman-Kolmogorov equations [187] for homogeneous discrete-time Markov chains [123, 187] allows us to obtain the $n$-step transition probability matrix recursively as

 $P\left(n\right)=\mathit{PP}\left(n-1\right)={P}^{n}.$ (4.27)

${\mathsc{𝒮}}_{\alpha ,\beta }^{{C}_{N}}$ can then be computed as

 ${\mathsc{𝒮}}_{\alpha ,\beta }^{{C}_{N}}=\underset{M\to \infty }{\mathrm{lim}}\sum _{n=0}^{M}{\left[P\left(n\right)\right]}_{\alpha ,\beta }=\underset{M\to \infty }{\mathrm{lim}}\sum _{n=0}^{M}{\left[{P}^{n}\right]}_{\alpha ,\beta },$ (4.28)

where the number of matrix multiplications, $M$, is adjusted dynamically depending on the convergence properties of the above sum. We note that sink nodes are excluded when constructing $P$, so ${\sum }_{j}{P}_{j,i}$ can be less than unity.
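As a consistency check, the closed forms in Equation 4.24 can be compared against the truncated matrix sum of Equation 4.28. The following Python sketch uses illustrative probability values, with the convention that `P[a, b]` holds the $b\to a$ branching probability (0-based indices standing for graph nodes 1, 2, 3) and each node having an additional escape route, so the column sums are less than unity:

```python
import numpy as np

# Branching probabilities: P[a, b] is the b -> a probability. Column sums
# are < 1 because escape routes to sinks outside C_3 are excluded from P.
# The values are arbitrary.
P = np.zeros((3, 3))
P[1, 0] = 0.9                  # P_{2,1}
P[0, 1], P[2, 1] = 0.4, 0.5    # P_{1,2}, P_{3,2}
P[1, 2] = 0.8                  # P_{2,3}

# Truncated version of Equation 4.28: S ~ sum_{n=0}^{M} [P^n]_{alpha,beta}.
S, term = np.eye(3), np.eye(3)
for _ in range(2000):          # M chosen large enough for convergence here
    term = P @ term
    S += term

W121 = P[0, 1] * P[1, 0]       # W_{1,2,1}
W232 = P[1, 2] * P[2, 1]       # W_{2,3,2}
D = 1.0 - W121 - W232

assert abs(S[0, 0] - (1 - W232) / D) < 1e-10          # S_{1,1}
assert abs(S[1, 0] - P[1, 0] / D) < 1e-10             # S_{2,1}
assert abs(S[2, 0] - P[2, 1] * P[1, 0] / D) < 1e-10   # S_{3,1} = W_{3,2,1}/D
assert abs(S[1, 1] - 1.0 / D) < 1e-10                 # S_{2,2}
assert abs(S[2, 1] - P[2, 1] / D) < 1e-10             # S_{3,2}
```

The geometric decay of successive terms (each round trip contributes a factor smaller than unity, as shown above) is what makes the truncation safe here.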

For chains a sparse-optimised matrix multiplication method for ${\mathsc{𝒮}}_{\alpha ,\beta }^{{C}_{N}}$ scales as $\mathsc{𝒪}\left(\mathit{NM}\right)$, and may suffer from convergence and numerical precision problems for larger values of $N$ and branching probabilities that are close to zero or unity [8]. The summation method presented in this section can be implemented to scale as $\mathsc{𝒪}\left(N\right)$ with constant memory requirements (Algorithm B.1). It therefore constitutes a faster, more robust and more precise alternative to the matrix multiplication method when applied to chain graph topologies (Figure 4.3).

Mean escape times for ${C}_{3}$ are readily obtained from the results in Equation 4.24 by applying the method outlined in Section 4.1.5:

 $\begin{array}{ccc}{\mathsc{𝒯}}_{1}^{{C}_{3}}\hfill & =\hfill & \frac{{\tau }_{1}\left(1-{\mathsc{𝒲}}_{2,3,2}\right)+{\tau }_{2}{P}_{2,1}+{\tau }_{3}{\mathsc{𝒲}}_{3,2,1}}{1-{\mathsc{𝒲}}_{1,2,1}-{\mathsc{𝒲}}_{2,3,2}},\hfill \\ \hfill \\ \hfill \\ {\mathsc{𝒯}}_{2}^{{C}_{3}}\hfill & =\hfill & \frac{{\tau }_{1}{P}_{1,2}+{\tau }_{2}+{\tau }_{3}{P}_{3,2}}{1-{\mathsc{𝒲}}_{1,2,1}-{\mathsc{𝒲}}_{2,3,2}},\hfill \end{array}$ (4.29)

and ${\mathsc{𝒯}}_{3}^{{C}_{3}}$ can be obtained from ${\mathsc{𝒯}}_{1}^{{C}_{3}}$ by permuting the subscripts 1 and 3.
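Equation 4.29 can also be checked against a first-step analysis, which for the escape problem reads ${\mathsc{𝒯}}_{i}={\tau }_{i}+{\sum }_{j}{P}_{j,i}{\mathsc{𝒯}}_{j}$. A small Python sketch with illustrative values (`P[a, b]` again denotes the $b\to a$ branching probability):

```python
import numpy as np

# Chain 1-2-3 with escape from every node; tau[b] is the waiting time at b.
# The numerical values are arbitrary.
P = np.zeros((3, 3))
P[1, 0], P[0, 1], P[2, 1], P[1, 2] = 0.9, 0.4, 0.5, 0.8
tau = np.array([1.0, 2.0, 0.5])

# First-step analysis: T_b = tau_b + sum_a P_{a,b} T_a, i.e. (I - P^T) T = tau.
T = np.linalg.solve(np.eye(3) - P.T, tau)

W121, W232 = P[0, 1] * P[1, 0], P[1, 2] * P[2, 1]   # W_{1,2,1}, W_{2,3,2}
W321 = P[2, 1] * P[1, 0]                            # W_{3,2,1}
D = 1.0 - W121 - W232

# Equation 4.29 for T_1 and T_2
T1 = (tau[0] * (1 - W232) + tau[1] * P[1, 0] + tau[2] * W321) / D
T2 = (tau[0] * P[0, 1] + tau[1] + tau[2] * P[2, 1]) / D

assert abs(T[0] - T1) < 1e-10 and abs(T[1] - T2) < 1e-10
```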

The mean escape time from the ${C}_{N}$ graph if started from node $\beta$ can be calculated recursively using the results of Appendix D and Section 4.1.5 or by resorting to a first-step analysis [241].

### 4.3 Complete Graphs

In a complete digraph each pair of nodes is connected by two oppositely directed edges [155]. The complete graph on $N$ nodes is denoted ${K}_{N}=\left(V,E\right)$, and has $N$ nodes and $N\left(N-1\right)$ edges, remembering that we have two edges per connection (Figure 4.4).

Due to the complete connectivity we need only consider two cases: when the starting and finishing nodes are the same and when they are distinct. We employ complete graphs for the purposes of generality. An arbitrary graph ${G}_{N}$ is a subgraph of ${K}_{N}$ with transition probabilities for non-existent edges set to zero. All the results in this section are therefore equally applicable to arbitrary graphs.

The complete graph ${K}_{2}$ will not be considered here as it is topologically identical to the graph ${C}_{2}$. The difference between the ${K}_{3}$ and ${C}_{3}$ graphs is the existence of edges that connect nodes $1$ and $3$. Pathways confined to ${K}_{3}$ can therefore contain cycles, and for a given path length they are significantly more numerous (Figure 4.5).

The ${\mathsc{𝒮}}_{\alpha ,\beta }^{{K}_{3}}$ can again be derived analytically for this graph:

 $\begin{array}{ccc}{\mathsc{𝒮}}_{1,1}^{{K}_{3}}\hfill & =\hfill & \frac{1-{\mathsc{𝒲}}_{2,3,2}}{1-{\mathsc{𝒲}}_{1,2,1}-{\mathsc{𝒲}}_{2,3,2}-{\mathsc{𝒲}}_{3,1,3}-{\mathsc{𝒲}}_{1,2,3,1}-{\mathsc{𝒲}}_{1,3,2,1}},\hfill \\ \hfill \\ {\mathsc{𝒮}}_{2,1}^{{K}_{3}}\hfill & =\hfill & \frac{{P}_{2,1}+{\mathsc{𝒲}}_{2,3,1}}{1-{\mathsc{𝒲}}_{1,2,1}-{\mathsc{𝒲}}_{2,3,2}-{\mathsc{𝒲}}_{3,1,3}-{\mathsc{𝒲}}_{1,2,3,1}-{\mathsc{𝒲}}_{1,3,2,1}},\hfill \\ \hfill \\ {\mathsc{𝒮}}_{3,1}^{{K}_{3}}\hfill & =\hfill & \frac{{P}_{3,1}+{\mathsc{𝒲}}_{3,2,1}}{1-{\mathsc{𝒲}}_{1,2,1}-{\mathsc{𝒲}}_{2,3,2}-{\mathsc{𝒲}}_{3,1,3}-{\mathsc{𝒲}}_{1,2,3,1}-{\mathsc{𝒲}}_{1,3,2,1}}.\hfill \end{array}$ (4.30)

The results for any other possibility can be obtained by permuting the node indices appropriately.

The pathway sums ${\mathsc{𝒮}}_{\alpha ,\beta }^{{K}_{N}}$ for larger complete graphs can be obtained by recursion. For ${\mathsc{𝒮}}_{N,N}^{{K}_{N}}$ any path leaving from and returning to $N$ can be broken down into a step out of $N$ to any $i<N$, all possible paths between $i$ and $j<N$ within ${K}_{N-1}$, and finally a step back to $N$ from $j$. All such paths can be combined together in any order, so we have a multinomial distribution [242]:

 ${\mathsc{𝒮}}_{N,N}^{{K}_{N}}=\sum _{m=0}^{\infty }{\left[\sum _{i=1}^{N-1}\sum _{j=1}^{N-1}{P}_{N,j}{\mathsc{𝒮}}_{j,i}^{{K}_{N-1}}{P}_{i,N}\right]}^{m}={\left[1-\sum _{i=1}^{N-1}\sum _{j=1}^{N-1}{P}_{N,j}{\mathsc{𝒮}}_{j,i}^{{K}_{N-1}}{P}_{i,N}\right]}^{-1}.$ (4.31)

To evaluate ${\mathsc{𝒮}}_{1,N}^{{K}_{N}}$ we break down the sum into all paths that depart from and return to $N$, followed by all paths that leave node $N$ and reach node 1 without returning to $N$. The first contribution corresponds to a factor of ${\mathsc{𝒮}}_{N,N}^{{K}_{N}}$, and the second produces a factor ${P}_{i,N}{\mathsc{𝒮}}_{1,i}^{{K}_{N-1}}$ for every $i<N$:

 ${\mathsc{𝒮}}_{1,N}^{{K}_{N}}={\mathsc{𝒮}}_{N,N}^{{K}_{N}}\sum _{i=1}^{N-1}{\mathsc{𝒮}}_{1,i}^{{K}_{N-1}}{P}_{i,N},$ (4.32)

where ${\mathsc{𝒮}}_{1,1}^{{K}_{1}}$ is defined to be unity. Any other ${\mathsc{𝒮}}_{\alpha ,\beta }^{{K}_{N}}$ can be obtained by a permutation of node labels.
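These recursions translate directly into a short recursive function. The following Python sketch (not the optimised Algorithm B.2; node labels and probability values are illustrative, with `P[a, b]` holding the $b\to a$ branching probability) implements Equations 4.31 and 4.32 by relabelling so that the start node of the recursion is always the last one, and checks the result against the truncated matrix sum of Equation 4.28:

```python
import numpy as np
from itertools import product

def pathway_sum(P, a, b):
    # S_{a,b}: sum of path weights over all b -> a paths confined to the
    # (sub)graph described by P, with P[i, j] the j -> i branching
    # probability and no self-edges (zero diagonal).
    n = P.shape[0]
    if n == 1:
        return 1.0                       # S_{1,1}^{K_1} is defined as unity
    last = a if a == b else b            # relabel this node as node 'N'
    order = [i for i in range(n) if i != last] + [last]
    Q = P[np.ix_(order, order)]
    # Equation 4.31: S_{N,N} = 1 / (1 - sum_{i,j} P_{N,j} S_{j,i} P_{i,N})
    tot = sum(Q[n - 1, j] * pathway_sum(Q[:-1, :-1], j, i) * Q[i, n - 1]
              for i, j in product(range(n - 1), repeat=2))
    S_NN = 1.0 / (1.0 - tot)
    if a == b:
        return S_NN
    # Equation 4.32: S_{a,N} = S_{N,N} * sum_i S_{a,i}^{K_{N-1}} P_{i,N}
    anew = order.index(a)
    return S_NN * sum(pathway_sum(Q[:-1, :-1], anew, i) * Q[i, n - 1]
                      for i in range(n - 1))

# Check against the truncated matrix sum on a random 4-node graph with
# escape (each column scaled to sum to 0.9); the values are arbitrary.
rng = np.random.default_rng(0)
P = rng.random((4, 4))
np.fill_diagonal(P, 0.0)
P *= 0.9 / P.sum(axis=0)
S, term = np.eye(4), np.eye(4)
for _ in range(3000):
    term = P @ term
    S += term
assert all(abs(pathway_sum(P, a, b) - S[a, b]) < 1e-9
           for a, b in product(range(4), repeat=2))
```

The factorial blow-up mentioned below is visible in the recursion tree: every diagonal element triggers $(N-1)^{2}$ subproblems of size $N-1$.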

Algorithm B.2 provides an example implementation of the above formulae optimised for incomplete graphs. The running time of Algorithm B.2 depends strongly on the graph density. (A digraph in which the number of edges is close to the maximum value of $N\left(N-1\right)$ is termed a dense digraph [202].) For ${K}_{N}$ the algorithm runs in $\mathsc{𝒪}\left({N}^{2N}\right)$ time, while for an arbitrary graph it scales as $\mathsc{𝒪}\left({d}^{2N}\right)$, where $d$ is the average degree of the nodes. For chain graphs the algorithm runs in $\mathsc{𝒪}\left(N\right)$ time and therefore constitutes a recursive-function-based alternative to Algorithm B.1 with linear memory requirements. For complete graphs an alternative implementation with $\mathsc{𝒪}\left({\left(N!\right)}^{2}\right)$ scaling is also possible.

Although the scaling of the above algorithm with $N$ may appear disastrous, it does in fact run faster than standard KMC and MM approaches for graphs where the escape probabilities are several orders of magnitude smaller than the transition probabilities (Algorithm B.2). Otherwise, for anything but moderately branched chain graphs, Algorithm B.2 is significantly more expensive. However, the graph-transformation-based method presented in Section 4.4 yields both the pathway sums and the mean escape times for a complete graph ${K}_{N}$ in $\mathsc{𝒪}\left({N}^{3}\right)$ time, and is the fastest approach that we have found.

Mean escape times for ${K}_{3}$ are readily obtained from the results in Equation 4.30 by applying the method outlined in Section 4.1.5:

 ${\mathsc{𝒯}}_{1}^{{K}_{3}}=\frac{{\tau }_{1}\left(1-{\mathsc{𝒲}}_{2,3,2}\right)+{\tau }_{2}\left({P}_{2,1}+{\mathsc{𝒲}}_{2,3,1}\right)+{\tau }_{3}\left({P}_{3,1}+{\mathsc{𝒲}}_{3,2,1}\right)}{1-{\mathsc{𝒲}}_{1,2,1}-{\mathsc{𝒲}}_{2,3,2}-{\mathsc{𝒲}}_{3,1,3}-{\mathsc{𝒲}}_{1,2,3,1}-{\mathsc{𝒲}}_{1,3,2,1}}.$ (4.33)

We have verified this result analytically using first-step analysis and numerically for various values of the parameters ${\tau }_{i}$ and ${P}_{\alpha ,\beta }$, and obtained quantitative agreement (see Figure 4.6).

Figure 4.7 demonstrates how the advantage of exact summation over KMC and MM becomes more pronounced as the escape probabilities become smaller.

### 4.4 Graph Transformation Method

The problem of calculating the properties of a random walk on irregular networks was addressed previously by Goldhirsch and Gefen [208, 209]. They described a generating-function-based method in which an ensemble of pathways is partitioned into ‘basic walks’. A walk was defined as a set of paths that satisfies a certain restriction. As the probability generating functions corresponding to these basic walks multiply, the properties of a network as a whole can be inferred given knowledge of the generating functions corresponding to the basic walks. The method was applied to a chain, a loopless regularly branched chain and a chain containing a single loop. To the best of our knowledge only one [243] of the 30 papers [209–211, 219–222, 240, 243–264] that cite the work of Goldhirsch and Gefen [208] is an application, perhaps due to the complexity of the method.

Here we present a graph transformation (GT) approach for calculating the pathway sums and the mean escape times for ${K}_{N}$. In general, it is applicable to arbitrary digraphs, but the best performance is achieved when the graph in question is dense. The algorithm discussed in this section will be referred to as DGT (D for dense). A sparse-optimised version of the GT method (SGT) will be discussed in Section 4.5.

The GT approach is similar in spirit to the ideas that lie behind the mean value analysis and aggregation/disaggregation techniques commonly used in the performance and reliability evaluation of queueing networks [187, 265–267]. It is also loosely related to dynamic graph algorithms [268–271], which are used when a property is calculated on a graph subject to dynamic changes, such as deletions and insertions of nodes and edges. The main idea is to progressively remove nodes from a graph whilst leaving the average properties of interest unchanged. For example, suppose we wish to remove node $x$ from graph $G$ to obtain a new graph ${G}^{\prime }$. Here we assume that $x$ is neither a source nor a sink. Before node $x$ can be removed the property of interest is averaged over all the pathways that include the edges between nodes $x$ and $i\in \mathit{Adj}\left[x\right]$. The averaging is performed separately for every node $i$. Next, we use the waiting time as an example of such a property and show that the mean first-passage time in the original and transformed graphs is the same.

The theory is an extension of the results used to perform jumps to second neighbours in previous KMC simulations [8, 272]. Consider KMC trajectories that arrive at node $i$, which is adjacent to $x$. We wish to step directly from $i$ to any node in the set of nodes $\Gamma$ that are adjacent to $i$ or $x$, excluding these two nodes themselves. To ensure that the mean first-passage times from sources to sinks calculated in $G$ and ${G}^{\prime }$ are the same we must define new branching probabilities, ${P}_{\gamma ,i}^{\prime }$, from $i$ to all $\gamma \in \Gamma$, along with a new waiting time for escape from $i$, ${\tau }_{i}^{\prime }$. Here, ${\tau }_{i}^{\prime }$ corresponds to the mean waiting time for escape from $i$ to any $\gamma \in \Gamma$, while the modified branching probabilities subsume all the possible recrossings involving node $x$ that could occur in $G$ before a transition to a node in $\Gamma$. Hence the new branching probabilities are:

 ${P}_{\gamma ,i}^{\prime }=\left({P}_{\gamma ,x}{P}_{x,i}+{P}_{\gamma ,i}\right)\sum _{m=0}^{\infty }{\left({P}_{i,x}{P}_{x,i}\right)}^{m}=\left({P}_{\gamma ,x}{P}_{x,i}+{P}_{\gamma ,i}\right)∕\left(1-{P}_{i,x}{P}_{x,i}\right).$ (4.34)

This formula can also be applied if either ${P}_{\gamma ,i}$ or ${P}_{\gamma ,x}$ vanishes.

It is easy to show that the new branching probabilities are normalised:

 $\sum _{\gamma \in \Gamma }\frac{{P}_{\gamma ,x}{P}_{x,i}+{P}_{\gamma ,i}}{1-{P}_{i,x}{P}_{x,i}}=\frac{\left(1-{P}_{i,x}\right){P}_{x,i}+\left(1-{P}_{x,i}\right)}{1-{P}_{i,x}{P}_{x,i}}=1.$ (4.35)

To calculate ${\tau }_{i}^{\prime }$ we use the method of Section 4.1.4:

 ${\tau }_{i}^{\prime }=\frac{{\tau }_{i}+{P}_{x,i}{\tau }_{x}}{1-{P}_{i,x}{P}_{x,i}}.$ (4.36)
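Together the two update rules define a one-node reduction step. The sketch below (illustrative values; `P[a, b]` is the $b\to a$ branching probability, with sinks kept implicit as the deficit of the substochastic column sums) removes one node and verifies that the mean escape times of all remaining nodes are unchanged:

```python
import numpy as np

def gt_remove(P, tau, x):
    # Remove node x: Equation 4.34 for the branching probabilities and
    # Equation 4.36 for the waiting time of every node i adjacent to x.
    n = P.shape[0]
    keep = [i for i in range(n) if i != x]
    Pn, tn = P.copy(), tau.copy()
    for i in keep:
        if P[x, i] == 0.0:
            continue                      # i never steps to x directly
        denom = 1.0 - P[i, x] * P[x, i]   # geometric sum over recrossings
        tn[i] = (tau[i] + P[x, i] * tau[x]) / denom
        for g in keep:
            if g != i:
                Pn[g, i] = (P[g, x] * P[x, i] + P[g, i]) / denom
    return Pn[np.ix_(keep, keep)], tn[keep]

# A random 6-node graph with escape to implicit sinks (column sums 0.95).
rng = np.random.default_rng(1)
n = 6
P = rng.random((n, n))
np.fill_diagonal(P, 0.0)
P *= 0.95 / P.sum(axis=0)
tau = rng.random(n) + 0.5

# Mean escape times from first-step analysis: (I - P^T) T = tau.
T = np.linalg.solve(np.eye(n) - P.T, tau)
P2, tau2 = gt_remove(P, tau, 3)           # detach node 3
T2 = np.linalg.solve(np.eye(n - 1) - P2.T, tau2)
assert np.allclose(np.delete(T, 3), T2)   # escape times are preserved
```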

The modified branching probabilities and waiting times could be used in a KMC simulation based upon graph ${G}^{\prime }$. Here we continue to use the notation of Section 4.1.4, where sinks correspond to nodes $a\in A$ and sources to nodes $b\in B$, and $G$ contains all the nodes in $\mathsc{𝒢}$ except for the $A$ set, as before. Since the modified branching probabilities, ${P}_{\gamma ,i}^{\prime }$, in ${G}^{\prime }$ subsume the sums over all paths from $i$ to $\gamma$ that involve $x$, we would expect the sink probability, ${\Sigma }_{a,b}^{G}$, of a trajectory starting at $b$ and ending at sink $a$, to be conserved. However, since each trajectory exiting from $\gamma \in \Gamma$ acquires a time increment equal to the average value, ${\tau }_{i}^{\prime }$, the mean first-passage times to individual sinks, ${\mathsc{𝒯}}_{a,b}^{G}$, are not conserved in ${G}^{\prime }$ (unless there is a single sink). Nevertheless, the overall mean first-passage time to $A$ is conserved, i.e. ${\sum }_{a\in A}{\mathsc{𝒯}}_{a,b}^{{G}^{\prime }}={\mathsc{𝒯}}_{b}^{{G}^{\prime }}={\mathsc{𝒯}}_{b}^{G}$. To prove these results more formally within the framework of complete sums, consider the effect of removing node $x$ on trajectories reaching node $i\in \mathit{Adj}\left[x\right]$ from a source node $b$. The sink probability for a particular sink $a$ is

 $\begin{array}{ccc}{\Sigma }_{a,b}^{G}\hfill & =\hfill & \sum _{\xi \in a←b}{\mathsc{𝒲}}_{\xi }\hfill \\ \hfill \\ \hfill \\ \hfill & =\hfill & \sum _{{\xi }_{1}\in i←b}{\mathsc{𝒲}}_{{\xi }_{1}}\sum _{\gamma \in \Gamma }\left({P}_{\gamma ,i}+{P}_{x,i}{P}_{\gamma ,x}\right)\sum _{m=0}^{\infty }{\left({P}_{i,x}{P}_{x,i}\right)}^{m}\sum _{{\xi }_{2}\in a←\gamma }{\mathsc{𝒲}}_{{\xi }_{2}}\hfill \\ \hfill \\ \hfill \\ \hfill & =\hfill & \sum _{{\xi }_{1}\in i←b}{\mathsc{𝒲}}_{{\xi }_{1}}\sum _{\gamma \in \Gamma }{P}_{\gamma ,i}^{\prime }\sum _{{\xi }_{2}\in a←\gamma }{\mathsc{𝒲}}_{{\xi }_{2}},\hfill \end{array}$ (4.37)

and similarly for any other node adjacent to $x$. Hence the transformation preserves the individual sink probabilities for any source.

Now consider the effect of removing node $x$ on the mean first-passage time from source $b$ to sink $a$, ${\mathsc{𝒯}}_{a,b}^{{G}^{\prime }}$, using the approach of Section 4.1.4.

 ${\mathsc{𝒯}}_{a,b}^{{G}^{\prime }}={\left[\frac{d}{d\xi }\sum _{{\xi }_{1}\in i←b}{\stackrel{˜}{\mathsc{𝒲}}}_{{\xi }_{1}}\sum _{\gamma \in \Gamma }{\stackrel{˜}{P}}_{\gamma ,i}^{\prime }\sum _{{\xi }_{2}\in a←\gamma }{\stackrel{˜}{\mathsc{𝒲}}}_{{\xi }_{2}}\right]}_{\xi =0},$ (4.38)

where the tildes indicate that every branching probability ${P}_{\alpha ,\beta }$ is replaced by ${P}_{\alpha ,\beta }{e}^{\xi {\tau }_{\beta }}$, as above. The first and last terms are unchanged from graph $G$ in this construction, but the middle term,

 $\sum _{{\xi }_{1}\in i←b}{\mathsc{𝒲}}_{{\xi }_{1}}\sum _{\gamma \in \Gamma }\frac{{P}_{\gamma ,x}{P}_{x,i}\left({\tau }_{i}+{\tau }_{x}\right)+{P}_{\gamma ,i}\left({\tau }_{i}+{P}_{i,x}{P}_{x,i}{\tau }_{x}\right)}{{\left(1-{P}_{i,x}{P}_{x,i}\right)}^{2}}\sum _{{\xi }_{2}\in a←\gamma }{\mathsc{𝒲}}_{{\xi }_{2}},$ (4.39)

is different (unless there is only one sink). However, if we sum over sinks then

 $\sum _{a\in A}\sum _{{\xi }_{2}\in a←\gamma }{\mathsc{𝒲}}_{{\xi }_{2}}=1$ (4.40)

for all $\gamma$, and we can now simplify the sum over $\gamma$ as

 $\sum _{\gamma \in \Gamma }\frac{{P}_{\gamma ,x}{P}_{x,i}\left({\tau }_{i}+{\tau }_{x}\right)+{P}_{\gamma ,i}\left({\tau }_{i}+{P}_{i,x}{P}_{x,i}{\tau }_{x}\right)}{{\left(1-{P}_{i,x}{P}_{x,i}\right)}^{2}}={\tau }_{i}^{\prime }=\sum _{\gamma \in \Gamma }{P}_{\gamma ,i}^{\prime }{\tau }_{i}^{\prime }.$ (4.41)

The same argument can be applied whenever a trajectory reaches a node adjacent to $x$, so that ${\mathsc{𝒯}}_{b}^{G}={\mathsc{𝒯}}_{b}^{{G}^{\prime }}$, as required.

The above procedure extends the BKL approach [190] to exclude not only the transitions from the current state into itself but also transitions involving an adjacent node $x$. Alternatively, this method could be viewed as a coarse-graining of the Markov chain. Such coarse-graining is acceptable if the property of interest is the average of the distribution of times rather than the distribution of times itself. In our simulations the average is the only property of interest. In cases when the distribution itself is sought, the approach described here may still be useful and could be the first step in the analysis of the distribution of escape times, as it yields the exact average of the distribution.

The transformation is illustrated in Figure 4.8 for the case of a single source.

Figure 4.8 (a) displays the original graph and its parametrisation. During the first iteration of the algorithm node $2$ is removed to yield the graph depicted in Figure 4.8 (b). This change reduces the dimensionality of the original graph, as the new graph contains one node and three edges fewer. The result of the second, and final, iteration of the algorithm is a graph that contains source and sink nodes only, with the correct transition probabilities and mean waiting time [Figure 4.8 (c)].

We now describe algorithms to implement the above approach and calculate mean escape times from complete graphs with multiple sources and sinks. Listings for some of the algorithms discussed here are given in Appendix B. We follow the notation of Section 4.1.4 and consider a digraph ${\mathsc{𝒢}}_{N}$ consisting of ${N}_{B}$ source nodes, ${N}_{A}$ sink nodes, and ${N}_{I}$ intervening nodes. ${\mathsc{𝒢}}_{N}$ therefore contains the subgraph ${G}_{{N}_{I}+{N}_{B}}$.

The result of the transformation of a graph with a single source $b$ and ${N}_{A}$ sinks using Algorithm B.3 is the mean escape time ${\mathsc{𝒯}}_{b}^{{G}_{{N}_{I}+1}}$ and ${N}_{A}$ pathway probabilities ${\mathsc{𝒫}}_{\xi }$, $\xi \in A←b$. Solving a problem with ${N}_{B}$ sources is equivalent to solving ${N}_{B}$ single source problems. For example, if there are two sources ${b}_{1}$ and ${b}_{2}$ we first solve a problem where only node ${b}_{1}$ is set to be the source to obtain ${\mathsc{𝒯}}_{{b}_{1}}^{{G}_{{N}_{I}+{N}_{B}}}$ and the pathway sums from ${b}_{1}$ to every sink node $a\in A$. The same procedure is then followed for ${b}_{2}$.

The form of the transition probability matrix $P$ is illustrated below at three stages: first for the original graph, then at the stage when all the intervening nodes have been removed (line 16 in Algorithm B.3), and finally at the end of the procedure:

 $\left(\begin{array}{ccc}\mathbf{0}\hfill & {P}_{A,I}\hfill & {P}_{A,B}\hfill \\ \mathbf{0}\hfill & {P}_{I,I}\hfill & {P}_{I,B}\hfill \\ \mathbf{0}\hfill & {P}_{B,I}\hfill & {P}_{B,B}\hfill \end{array}\right)\to \left(\begin{array}{ccc}\mathbf{0}\hfill & \mathbf{0}\hfill & {P}_{A,B}^{\prime }\hfill \\ \mathbf{0}\hfill & \mathbf{0}\hfill & \mathbf{0}\hfill \\ \mathbf{0}\hfill & \mathbf{0}\hfill & {P}_{B,B}^{\prime }\hfill \end{array}\right)\to \left(\begin{array}{ccc}\mathbf{0}\hfill & \mathbf{0}\hfill & {P}_{A,B}^{\prime \prime }\hfill \\ \mathbf{0}\hfill & \mathbf{0}\hfill & \mathbf{0}\hfill \\ \mathbf{0}\hfill & \mathbf{0}\hfill & \mathbf{0}\hfill \end{array}\right),$ (4.42)

Each matrix is split into blocks that specify the transitions between the nodes of a particular type, as labelled. Upon termination, every element in the top right block of matrix $P$ is non-zero.

Algorithm B.3 uses the adjacency matrix representation of graph ${\mathsc{𝒢}}_{N}$, for which the average of the distribution of mean first passage times is to be obtained. For efficiency, when constructing the transition probability matrix $P$ we order the nodes with the sink nodes first and the source nodes last. Algorithm B.3 is composed of two parts. The first part (lines 1-16) iteratively removes all the intermediate nodes from graph ${\mathsc{𝒢}}_{N}$ to yield a graph that is composed of sink nodes and source nodes only. The second part (lines 17-34) disconnects the source nodes from each other to produce a graph with ${N}_{A}+{N}_{B}$ nodes and ${\left({N}_{A}+{N}_{B}\right)}^{2}$ directed edges connecting each source with every sink. Lines 13-15 are not required for obtaining the correct answer, but the final transition probability matrix looks neater.

The computational complexity of lines 1-16 of Algorithm B.3 is $\mathsc{𝒪}\left({N}_{I}^{3}+{N}_{I}^{2}{N}_{B}+{N}_{I}^{2}{N}_{A}+{N}_{I}{N}_{B}^{2}+{N}_{I}{N}_{B}{N}_{A}\right)$. The second part of Algorithm B.3 (lines 17-34) scales as $\mathsc{𝒪}\left({N}_{B}^{3}+{N}_{B}^{2}{N}_{A}\right)$. The total complexity for the case of a single source and for the case when there are no intermediate nodes is $\mathsc{𝒪}\left({N}_{I}^{3}+{N}_{I}^{2}{N}_{A}\right)$ and $\mathsc{𝒪}\left({N}_{B}^{3}+{N}_{B}^{2}{N}_{A}\right)$, respectively. The storage requirements of Algorithm B.3, which is based on the adjacency matrix representation, scale as $\mathsc{𝒪}\left({\left({N}_{A}+{N}_{B}+{N}_{I}\right)}^{2}\right)$.
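The first part of the procedure can be illustrated end-to-end with a minimal dense-matrix sketch (a simplification of Algorithm B.3 for a single source, with illustrative values; `P[a, b]` is the $b\to a$ branching probability and the sink columns are zero). It detaches the intermediate nodes one at a time and then checks that the surviving matrix elements are the sink probabilities and that the final waiting time is the mean escape time, both computed independently by first-step analysis:

```python
import numpy as np

# Nodes 0, 1 are sinks, node 6 is the source, nodes 2..5 are intermediates.
rng = np.random.default_rng(2)
n, sinks, source = 7, [0, 1], 6
P = rng.random((n, n))
np.fill_diagonal(P, 0.0)
P[:, sinks] = 0.0                         # sinks are absorbing
P[:, 2:] /= P[:, 2:].sum(axis=0)          # normalise the remaining columns
tau = rng.random(n) + 0.1
tau[sinks] = 0.0

# Reference values from first-step analysis on the transient block.
trans = [2, 3, 4, 5, 6]
Q = P[np.ix_(trans, trans)]
T_ref = np.linalg.solve(np.eye(5) - Q.T, tau[trans])[-1]           # T_b
sig_ref = [np.linalg.solve(np.eye(5) - Q.T, P[a, trans])[-1] for a in sinks]

nodes = list(range(n))
for x in [2, 3, 4, 5]:                    # detach intermediates one by one
    k = nodes.index(x)
    keep = [j for j in range(len(nodes)) if j != k]
    Pn, tn = P.copy(), tau.copy()
    for i in keep:
        if P[k, i] == 0.0:
            continue
        denom = 1.0 - P[i, k] * P[k, i]
        tn[i] = (tau[i] + P[k, i] * tau[k]) / denom                # Eq. 4.36
        for g in keep:
            if g != i:
                Pn[g, i] = (P[g, k] * P[k, i] + P[g, i]) / denom   # Eq. 4.34
    P, tau = Pn[np.ix_(keep, keep)], tn[keep]
    nodes.pop(k)

b = nodes.index(source)
assert np.allclose([P[nodes.index(a), b] for a in sinks], sig_ref)
assert abs(tau[b] - T_ref) < 1e-10
```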

We have implemented the algorithm in Fortran 95 and timed it for complete graphs of different sizes. The results presented in Figure 4.9 confirm the overall cubic scaling. The program is GPL-licensed [273] and available online [274]. These and other benchmarks presented in this chapter were obtained for a single Intel${}^{\text{®}}$ Pentium${}^{\text{®}}$ 4 3.00GHz 512Kb cache processor running under the Debian GNU/Linux operating system [275]. The code was compiled and optimised using the Intel${}^{\text{®}}$ Fortran compiler for Linux.

### 4.5 Applications to Sparse Random Graphs

Algorithm B.3 could easily be adapted to use adjacency-lists-based data structures [154], resulting in a faster execution and lower storage requirements for sparse graphs. We have implemented [274] a sparse-optimised version of Algorithm B.3 because the graph representations of the Markov chains of interest in the present work are sparse [201].

The algorithm for detaching a single intermediate node from an arbitrary graph stored in a sparse-optimised format is given in Algorithm B.4. Having chosen the node to be removed, $\gamma$, all the neighbours $\beta \in \mathit{Adj}\left[\gamma \right]$ are analysed in turn, as follows. Lines 3-9 of Algorithm B.4 find node $\gamma$ in the adjacency list of node $\beta$. If $\beta$ is not a sink, lines 11-34 are executed to modify the adjacency list of node $\beta$: lines 13-14 delete node $\gamma$ from the adjacency list of $\beta$, while lines 15-30 make all the neighbours $\alpha \in \mathit{Adj}\left[\gamma \right]\ominus \beta$ of node $\gamma$ the neighbours of $\beta$. The symbol $\ominus$ denotes the union minus the intersection of two sets, otherwise known as the symmetric difference. If the edge $\beta \to \alpha$ already existed only the branching probability is changed (line 21). Otherwise, a new edge is created and the adjacency and branching probability lists are modified accordingly (line 26 and line 27, respectively). Finally, the branching probabilities of node $\beta$ are renormalised (lines 31-33) and the waiting time for node $\beta$ is increased (line 34).
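A compact dict-of-dicts version of this detachment step, together with a first-step-analysis check that the mean escape time is unchanged, might look as follows (a sketch in the spirit of Algorithm B.4, not the algorithm itself; node labels, probabilities and waiting times are illustrative, and sinks are represented by nodes that never appear as keys of the adjacency structure):

```python
import numpy as np

def detach(adj, tau, g):
    # adj[b] maps each successor a of b to the branching probability P_{a,b}.
    out_g = adj.pop(g)                     # the adjacency list of g
    for b in list(adj):
        if g not in adj[b]:
            continue                       # beta does not send to g
        P_gb = adj[b].pop(g)               # delete g from beta's list
        denom = 1.0 - out_g.get(b, 0.0) * P_gb
        for a, P_ag in out_g.items():      # reroute beta -> g -> alpha
            if a != b:
                adj[b][a] = adj[b].get(a, 0.0) + P_ag * P_gb
        for a in adj[b]:
            adj[b][a] /= denom             # renormalise (Equation 4.34)
        tau[b] = (tau[b] + P_gb * tau[g]) / denom   # Equation 4.36

def escape_time(adj, tau, b):
    # First-step analysis T_j = tau_j + sum_i P_{i,j} T_i over the
    # remaining transient nodes; sinks (absent from adj) are skipped.
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    M = np.eye(len(nodes))
    for j, nbrs in adj.items():
        for i, p in nbrs.items():
            if i in idx:
                M[idx[j], idx[i]] -= p
    return np.linalg.solve(M, np.array([tau[v] for v in nodes]))[idx[b]]

# Source 0, intermediates 1 and 2, sink 3; values are arbitrary.
adj = {0: {1: 0.7, 3: 0.3}, 1: {0: 0.4, 2: 0.3, 3: 0.3}, 2: {1: 0.5, 3: 0.5}}
tau = {0: 1.0, 1: 0.5, 2: 2.0, 3: 0.0}
before = escape_time(adj, tau, 0)
detach(adj, tau, 2)
after = escape_time(adj, tau, 0)
assert abs(before - after) < 1e-12
assert abs(sum(adj[1].values()) - 1.0) < 1e-12   # Equation 4.35
```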

Algorithm B.4 is invoked iteratively for every node that is neither a source nor a sink to yield a graph that is composed of source nodes and sink nodes only. Then the procedure described in Section 4.4 for disconnection of source nodes (lines 17-34 of Algorithm B.3) is applied to obtain the mean escape times for every source node. The sparse-optimised version of the second part of Algorithm B.3 is straightforward and is therefore omitted here for brevity.

The running time of Algorithm B.4 is $\mathsc{𝒪}\left({d}_{c}{\sum }_{i\in \mathit{Adj}\left[c\right]}{d}_{i}\right)$, where ${d}_{k}$ is the degree of node $k$. For the case when all the nodes in a graph have approximately the same degree, $d$, the complexity is $\mathsc{𝒪}\left({d}^{3}\right)$. Therefore, if there are $N$ intermediate nodes to be detached and $d$ is of the same order of magnitude as $N$, the cost of detaching $N$ nodes is $\mathsc{𝒪}\left({N}^{4}\right)$. The asymptotic bound is worse than that of Algorithm B.3 because of the searches through adjacency lists (lines 3-9 and lines 19-24). If $d$ is sufficiently small the algorithm based on adjacency lists is faster.

After each invocation of Algorithm B.4 the number of nodes is always decreased by one. The number of edges, however, can increase or decrease depending on the in- and out-degree of the node to be removed and the connectivity of its neighbours. If node $\gamma$ is not directly connected to any of the sinks, and the neighbours of node $\gamma$ are not connected to each other directly, the total number of edges changes by ${d}_{\gamma }\left({d}_{\gamma }-3\right)$. Therefore, the number of edges decreases (by $2$) only when ${d}_{\gamma }\in \left\{1,2\right\}$, and the number of edges does not change if the degree is $3$. For ${d}_{\gamma }>3$ the number of edges increases by an amount that grows quadratically with ${d}_{\gamma }$. The actual increase depends on how many connections already existed between the neighbours of $\gamma$.
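The bookkeeping behind this count (a change of ${d}_{\gamma }\left({d}_{\gamma }-3\right)$, i.e. a decrease of 2 for degrees 1 and 2) can be verified directly for the worst case in which the neighbours of $\gamma$ are initially unconnected. In this small illustrative sketch, "degree $d$" is taken to mean $d$ reciprocal connections, i.e. $2d$ directed edges:

```python
def edge_change(d):
    # gamma is node 0; its d neighbours are nodes 1..d, each doubly
    # connected to gamma and to nothing else.
    gamma, nbrs = 0, range(1, d + 1)
    old = {(gamma, i) for i in nbrs} | {(i, gamma) for i in nbrs}
    # Detaching gamma creates one directed edge per ordered neighbour pair.
    new = {(i, j) for i in nbrs for j in nbrs if i != j}
    return len(new) - len(old)            # d*(d-1) - 2*d = d*(d-3)

assert [edge_change(d) for d in (1, 2, 3, 4, 5)] == [-2, -2, 0, 4, 10]
```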

The order in which the intermediate nodes are detached does not change the final result and is unimportant if the graph is complete. For sparse graphs, however, the order can affect the running time significantly. If the degree distribution for successive graphs is sharp with the same average, $d$, then the order in which the nodes are removed does not affect the complexity, which is $\mathsc{𝒪}\left({d}^{3}N\right)$. If the distributions are broad it is helpful to remove the nodes with smaller degrees first. A Fibonacci heap min-priority queue [276] was used to achieve this ordering. The overhead for maintaining the heap is ${d}_{\gamma }$ increase-key operations (each of cost $\mathsc{𝒪}\left(\mathrm{log}N\right)$) per execution of Algorithm B.4. Fortran and Python implementations of Algorithm B.4 are available online [274].
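The min-degree-first strategy can be prototyped with a lazy binary heap standing in for the Fibonacci heap (stale entries are simply skipped when popped instead of being updated in place). The sketch below tracks connectivity only, ignoring probabilities, and models detachment by interconnecting the neighbours of the removed node; the graphs used are illustrative:

```python
import heapq

def removal_order(adj):
    # adj maps node -> set of neighbours (connectivity only).
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    heap = [(len(nbrs), v) for v, nbrs in adj.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        d, v = heapq.heappop(heap)
        if v not in adj or d != len(adj[v]):
            continue                      # stale heap entry: skip it
        order.append(v)
        nbrs = adj.pop(v)
        for u in nbrs:                    # detachment interconnects nbrs
            adj[u].discard(v)
            adj[u] |= nbrs - {u}
            heapq.heappush(heap, (len(adj[u]), u))
    return order

# On a 5-node chain the low-degree end node is always detached first.
assert removal_order({0: {1}, 1: {0, 2}, 2: {1, 3},
                      3: {2, 4}, 4: {3}}) == [0, 1, 2, 3, 4]
```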

Random graphs provide an ideal testbed for the GT algorithm because they allow control over the graph density. A random graph, ${R}_{N}$, is obtained by starting with a set of $N$ nodes and adding edges between them at random [33]. In this work we used a random graph model where each edge is chosen independently with probability $d∕\left(N-1\right)$, where $d$ is the target value for the average degree.
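A sketch of this construction (illustrative $N$ and $d$; with each of the $N\left(N-1\right)$ possible directed edges present independently with probability $d∕\left(N-1\right)$, the expected out-degree is $d$):

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 2000, 5
# adj[i, j] is True when the directed edge j -> i is present.
adj = rng.random((N, N)) < d / (N - 1)
np.fill_diagonal(adj, False)              # no self-edges
mean_degree = adj.sum() / N               # realised average out-degree
assert abs(mean_degree - d) < 0.5         # concentrates around d for large N
```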

The complexity for removal of $N$ nodes can then be expressed as

 $\mathsc{𝒪}\left(\sum _{i=1}^{N}{d}_{c\left(i\right)}\sum _{j\in \mathit{Adj}\left[c\left(i\right)\right]}{d}_{j,c\left(i\right)}\right),$ (4.43)

where ${d}_{c\left(i\right)}$ is the degree of the node $c\left(i\right)$ removed at iteration $i$, $\mathit{Adj}\left[c\left(i\right)\right]$ is its adjacency list, and ${d}_{j,c\left(i\right)}$ is the degree of the $j$th neighbour of that node at iteration $i$. The computational cost given in Equation 4.43 is difficult to express in terms of the parameters of the original graph, as the cost of every cycle depends on the distribution of degrees, the evolution of which, in turn, depends on the connectivity of the original graph in a non-trivial manner (see Figure 4.10). The storage requirements of a sparse-optimised version of the GT algorithm scale linearly with the number of edges.

To investigate the dependence of the cost of the GT method on the number of nodes, $N$, we have tested it on a series of random graphs ${R}_{N}$ for different values of $N$ and fixed average degree, $d$. The results for three different values of $d$ are shown in Figure 4.11. This range of $d$ was chosen because most of our stationary point databases have average connectivities for the local minima that fall into it.

It can be seen from Figure 4.11 that for sparse random graphs ${R}_{N}$ the cost scales as $\mathsc{𝒪}\left({N}^{4}\right)$ with a small $d$-dependent prefactor. The dependence of the computational complexity on $d$ is illustrated in Figure 4.12.

From Figure 4.10 it is apparent that at some point during the execution of the GT algorithm the graph reaches its maximum possible density. Once the graph is close to complete it is no longer efficient to employ a sparse-optimised algorithm. The most efficient approach we have found for sparse graphs is to use the sparse-optimised GT algorithm until the graph is dense enough, and then switch to Algorithm B.3. We will refer to this approach as SDGT. The change of data structures constitutes a negligible fraction of the total execution time. Figure 4.13 depicts the CPU time as a function of the switching parameter ${R}_{s}$.

Whenever the ratio ${d}_{c\left(i\right)}∕n\left(i\right)$, where ${d}_{c\left(i\right)}$ is the degree of the intermediate node $c\left(i\right)$ detached at iteration $i$ and $n\left(i\right)$ is the number of nodes remaining on the heap at iteration $i$, exceeds ${R}_{s}$, the partially transformed graph is converted from the adjacency-list format into the adjacency-matrix format and the transformation is continued using Algorithm B.3. It can be seen from Figure 4.13 that for the case of a random graph with a single sink, a single source and 999 intermediate nodes the optimal values of ${R}_{s}$ lie in the interval $\left[0.07,0.1\right]$.

### 4.6 Overlapping Sets of Sources and Sinks

We now return to the digraph representation of a Markov chain that corresponds to the DPS pathway ensemble discussed in Section 4.1.4. A problem with (partially) overlapping sets of sources and sinks can easily be converted into an equivalent problem where there is no overlap, and then the GT method discussed in Section 4.4 and Section 4.5 can be applied as normal.

As discussed above, solving a problem with $n$ sources reduces to solving $n$ single-source problems. We can therefore explain how to deal with a problem of overlapping sets of sinks and sources for a simple example where there is a single source-sink $i$ and, optionally, a number of sink nodes.

First, a new node ${i}^{\prime }$ is added to the set of sinks and its adjacency lists are initialised to $\mathit{AdjOut}\left[{i}^{\prime }\right]=\varnothing$ and $\mathit{AdjIn}\left[{i}^{\prime }\right]=\mathit{AdjIn}\left[i\right]$. Then, for every node $j\in \mathit{AdjOut}\left[i\right]$ we update its waiting time as ${\tau }_{j}={\tau }_{j}+{\tau }_{i}$ and add node $j$ to the set of sources with probabilistic weight initialised to ${P}_{j,i}{W}_{i}$, where ${W}_{i}$ is the original probabilistic weight of source $i$ (the probability of choosing source $i$ from the set of sources). Afterwards, the node $i$ is deleted.
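The procedure can be sketched as follows (a minimal Python illustration; the dictionary-based graph representation, the node names and the accumulation of weight onto a successor that is already a source are assumptions of this sketch, not part of the formalism above):

```python
def split_source_sink(adj, tau, weights, i, i_prime='sink_i'):
    # adj maps node -> {successor: branching probability}; tau holds the
    # waiting times; weights holds the probabilistic weights of the sources.
    # Sinks are the nodes that never appear as keys of adj.
    for j in adj:                         # AdjIn[i'] := AdjIn[i]
        if i in adj[j]:
            adj[j][i_prime] = adj[j].pop(i)
    w = weights.pop(i)                    # W_i, the weight of source i
    for j, p in adj.pop(i).items():       # for every j in AdjOut[i]
        tau[j] += tau[i]                  # tau_j := tau_j + tau_i
        weights[j] = weights.get(j, 0.0) + p * w   # seed with P_{j,i} W_i

# Node 0 is both source and sink; node 3 is an ordinary sink.
adj = {0: {1: 0.6, 2: 0.4}, 1: {0: 0.5, 3: 0.5}, 2: {3: 1.0}}
tau = {0: 1.0, 1: 2.0, 2: 3.0}
weights = {0: 1.0}
split_source_sink(adj, tau, weights, 0)
assert adj == {1: {'sink_i': 0.5, 3: 0.5}, 2: {3: 1.0}}
assert weights == {1: 0.6, 2: 0.4} and tau[1] == 3.0 and tau[2] == 4.0
```

Because the branching probabilities out of the original source sum to unity, the total source weight is conserved by the conversion.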

### 4.7 Applications to Lennard-Jones Clusters

#### 4.7.1 ${O}_{h}↔{I}_{h}$ Isomerisation of LJ${}_{38}$

We have applied the GT method to study the temperature dependence of the rate of ${O}_{h}↔{I}_{h}$ interconversion in the 38-atom Lennard-Jones cluster. The PES sample was taken from a previous study [8] and contained 1740 minima and 2072 transition states. Only geometrically distinct structures were considered when generating this sample, because in this way the dimensionality of the problem is reduced by a factor of approximately $2N!∕h$, where $h$ is the order of the point group. The initial and final states in this sample correspond roughly to icosahedral-like and octahedral-like structures on the PES of this cluster. The assignment was made in Reference [8] by solving the master equation numerically to find the eigenvector that corresponds to the smallest non-zero eigenvalue. As simple two-state dynamics are associated with exponential rise and decay of occupation probabilities, there must exist a time scale on which all the exponential contributions to the solution of the master equation decay to zero except for the slowest [9]. The sign of the components of the eigenvector corresponding to the slowest mode was used to classify the minima as ${I}_{h}$ or ${O}_{h}$ in character [8].

The above sample was pruned to ensure that every minimum is reachable from any other minimum to create a digraph representation that contained 759 nodes including 43 source nodes and 5 sink nodes, and 2639 edges. The minimal, average and maximal degree for this graph were 2, $3.8$ and 84, respectively, and the graph density was $4.6×1{0}^{-3}$. We have used the SDGT algorithm with the switching ratio set to $0.08$ to transform this graph for several values of temperature. In each of these calculations 622 out of 711 intermediate nodes were detached using SGT, and the remaining 89 intermediate nodes were detached using the GT algorithm optimised for dense graphs (DGT).

An Arrhenius plot depicting the dependence of the rate constant on temperature is shown in Figure 4.14 (a). The running time of the SDGT algorithm was $1.8×1{0}^{-2}$ seconds [this value was obtained by averaging over 10 runs and was the same for each SDGT run in Figure 4.14 (a)]. For comparison, the timings obtained using the SGT and DGT algorithms for the same problem were $2.0×1{0}^{-2}$ and $89.0×1{0}^{-2}$ seconds, respectively. None of the 43 total escape probabilities (one for every source) deviated from unity by more than $1{0}^{-5}$ for temperatures above $T=0.07$ (reduced units). At lower temperatures the probability was not conserved due to numerical imprecision.

The data obtained using the SDGT method are compared with results from KMC simulations, which require increasingly more CPU time as the temperature is lowered. Figure 4.14 also shows the data for KMC simulations at temperatures $0.14$, $0.15$, $0.16$, $0.17$ and $0.18$. Each KMC simulation consisted of the generation of an ensemble of 1000 KMC trajectories from which the averages were computed. The cost of each KMC calculation is proportional to the average trajectory length, which is depicted in Figure 4.14 (b) as a function of the inverse temperature. The CPU timings for each of these calculations were (in the order of increasing temperature, averaged over five randomly seeded KMC simulations): $125$, $40$, $18$, $12$, and $7$ seconds. It can be seen that using the GT method we were able to obtain kinetic data for a wider range of temperatures and at lower computational expense.

#### 4.7.2 Internal Diffusion in LJ${}_{55}$

We have applied the graph transformation method to study centre-to-surface atom migration in the 55-atom Lennard-Jones cluster. The global potential energy minimum for LJ${}_{55}$ is a Mackay icosahedron, which exhibits special stability and ‘magic number’ properties [279, 280]. Centre-to-surface and surface-to-centre rates of migration of a tagged atom in this system were considered in previous work [10]. In Reference [10] the standard DPS procedure was applied to create and converge an ensemble of paths linking the global minimum structure, with the tagged atom occupying the central position, to structures in which the tagged atom occupies sites that lie on the fivefold and twofold symmetry axes (Figure 4.15). We have reused this sample in the present work.

The sample contained 9907 minima and 19384 transition states. We excluded from consideration transition states that facilitate degenerate rearrangements. For minima interconnected by more than one transition state we summed the rate constants in each direction to obtain the branching probabilities. Four digraph representations were created with minimum degrees of 1, 2, 3 and 4 via iterative removal of nodes whose degrees did not satisfy the requirement. These digraphs will be referred to as digraphs 1, 2, 3 and 4, respectively, and the corresponding parameters are summarised in Table 4.1. Since the cost of the GT method does not depend on temperature, we also quote CPU timings for the DGT, SGT and SDGT methods for each of these graphs in the last three columns of Table 4.1. Each digraph contained two source nodes, labelled $1$ and $2$, and a single sink. The sink node corresponds to the global minimum with the tagged atom in the centre (Figure 4.15). It is noteworthy that the densities of the graphs corresponding to both our samples (LJ${}_{38}$ and LJ${}_{55}$) are significantly lower than the values predicted for a complete sample [115], which makes the use of sparse-optimised methods even more advantageous. From Table 4.1 it is clear that the SDGT approach is the fastest, as expected; we will use SDGT for all the rate calculations in the rest of this section.
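The iterative removal used to construct these digraphs can be sketched as follows (a minimal Python illustration on an undirected adjacency structure; the helper name `prune_min_degree` and the example graph are ours, not part of the thesis code, and the procedure is analogous to a $k$-core computation):

```python
from collections import deque

def prune_min_degree(adj, dmin):
    """Iteratively remove nodes whose degree falls below dmin and return
    the surviving node set. Removing one node can push a neighbour below
    the threshold, so candidates are processed from a queue.
    adj: dict mapping node -> set of neighbouring nodes."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    queue = deque(u for u, vs in adj.items() if len(vs) < dmin)
    while queue:
        u = queue.popleft()
        if u not in adj:
            continue
        for v in adj.pop(u):          # delete u and its edges
            nbrs = adj.get(v)
            if nbrs is not None:
                nbrs.discard(u)
                if len(nbrs) < dmin:  # v may now violate the threshold
                    queue.append(v)
    return set(adj)

# A triangle with a pendant node: requiring a minimum degree of 2
# removes only the pendant; requiring 3 unravels the whole graph.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
core = prune_min_degree(g, 2)
```

The cascading removals in the example with `dmin=3` mirror how demanding a higher minimum degree can shrink the digraph substantially, as seen in Table 4.1.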

For this sample, KMC calculations are infeasible at temperatures below about $T=0.3$ (here $T$ is expressed in units of $\epsilon/k_{B}$). Already at $T=0.4$ the average KMC trajectory length is $7.5\times 10^{6}$ steps (obtained by averaging over 100 trajectories). In previous work it was therefore necessary to use the DPS formalism, which invokes a steady-state approximation for the intervening minima, to calculate rate constants at temperatures below $0.35$ [10]. Here we report results in direct correspondence with the KMC formulation of the problem for temperatures as low as $0.1$.

Figure 4.16 presents Arrhenius plots calculated using the SDGT method for this system. The points in the green dataset are the results of seven SDGT calculations at temperatures $T\in \left\{0.3,0.35,\dots ,0.6\right\}$, conducted for each of the digraphs. The total escape probabilities, $\Sigma_{1}^{G}$ and $\Sigma_{2}^{G}$, calculated for each of the two sources at the end of the calculation deviated from unity by no more than $10^{-5}$. For higher temperatures and smaller digraphs the deviation was smaller, being on the order of $10^{-10}$ for digraph 4 at $T=0.4$.

At temperatures below $0.3$ the probability deviated by more than $10^{-5}$ because of numerical imprecision. This problem was partially caused by round-off errors in the evaluation of terms of the form $1-P_{\alpha,\beta}P_{\beta,\alpha}$, which grow when $P_{\alpha,\beta}P_{\beta,\alpha}$ approaches unity. These errors can propagate and amplify as the evaluation proceeds. By writing

 $P_{\alpha,\beta} = 1 - \sum_{\gamma\neq\alpha} P_{\gamma,\beta} \equiv 1 - \epsilon_{\alpha,\beta} \quad\text{and}\quad P_{\beta,\alpha} = 1 - \sum_{\gamma\neq\beta} P_{\gamma,\alpha} \equiv 1 - \epsilon_{\beta,\alpha},$ (4.44)

and then using

 $1 - P_{\alpha,\beta}P_{\beta,\alpha} = \epsilon_{\alpha,\beta} + \epsilon_{\beta,\alpha} - \epsilon_{\alpha,\beta}\epsilon_{\beta,\alpha}$ (4.45)

we were able to decrease $1-{\Sigma }_{\alpha }^{G}$ by several orders of magnitude at the expense of doubling the computational cost. The SDGT method with probability denominators evaluated in this fashion will be referred to as SDGT1.
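The benefit of the rewritten denominator can be demonstrated numerically (an illustrative sketch with made-up $\epsilon$ values, checked against exact rational arithmetic): the form of Eq. (4.45) avoids subtracting two numbers that are both close to unity.

```python
from fractions import Fraction

eps_ab = 1e-8   # epsilon_{alpha,beta}: total leaked probability from beta
eps_ba = 1e-8   # epsilon_{beta,alpha}: total leaked probability from alpha

# Naive denominator: forms 1 - P*P and cancels catastrophically,
# because both products are within ~1e-8 of unity.
p_ab = 1.0 - eps_ab
p_ba = 1.0 - eps_ba
naive = 1.0 - p_ab * p_ba

# Rewritten form of Eq. (4.45): no subtraction of nearly equal numbers.
stable = eps_ab + eps_ba - eps_ab * eps_ba

# Exact reference value computed in rational arithmetic.
e1, e2 = Fraction(eps_ab), Fraction(eps_ba)
exact = e1 + e2 - e1 * e2
```

The naive route loses roughly half of the available significant digits here, while the rewritten form is accurate to machine precision, consistent with the several-orders-of-magnitude improvement in $1-\Sigma_{\alpha}^{G}$ quoted above.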

Terms of the form $1-P_{\alpha,\beta}P_{\beta,\alpha}$ approach zero when nodes $\alpha$ and $\beta$ become ‘effectively’ (i.e. within the available precision) disconnected from the rest of the graph. If this condition is encountered at an intermediate stage of the calculation it can also mean that a larger subgraph of the original graph is effectively disconnected. The waiting time for escape starting from a node that belongs to this subgraph tends to infinity. If the probability of reaching such a node from any of the source nodes is close to zero, the final answer may still fit into the available precision, even though some of the intermediate values do not. Obtaining the final answer in such cases can be problematic, as division-by-zero exceptions may occur.

Another way to alleviate numerical problems at low temperatures is to stop round-off errors from propagating at an early stage by renormalising the branching probabilities of the affected nodes $\beta \in \mathrm{Adj}\left[\gamma \right]$ after node $\gamma$ is detached. The corresponding check that the updated probabilities of node $\beta$ add up to unity could be inserted after line 33 of Algorithm B.4 (see Appendix B), and similarly for Algorithm B.3. A version of the SDGT method with this modification will be referred to as SDGT2.
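A minimal sketch of such a renormalisation step (the helper name and tolerance below are illustrative choices of ours; in the actual implementation the equivalent check would sit inside Algorithms B.3 and B.4):

```python
def renormalise(branching, tol=1e-12):
    """After a neighbour of node beta is detached, rescale beta's
    updated branching probabilities so they sum to unity, preventing
    round-off drift from propagating into later detachment steps.
    branching: dict mapping target node -> branching probability."""
    total = sum(branching.values())
    if abs(total - 1.0) > tol:
        for target in branching:
            branching[target] /= total
    return branching

# Probabilities that have drifted slightly off unity are rescaled.
p_beta = {"a": 0.5000001, "b": 0.2999999, "c": 0.2000003}
renormalise(p_beta)
```

The rescaling costs one pass over the adjacency list of each affected node, consistent with the similar overheads of SDGT1 and SDGT2 reported below.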

Both SDGT1 and SDGT2 incur similarly scaling overheads relative to the SDGT method, and we did not find any evidence for the superiority of one scheme over the other. For example, the SDGT calculation performed for digraph 4 at $T=0.2$ yielded $\mathcal{T}^{G}=\mathcal{T}_{1}^{G}W_{1}+\mathcal{T}_{2}^{G}W_{2}=6.4\times 10^{-18}$, and precision was lost, as both $\Sigma_{1}^{G}$ and $\Sigma_{2}^{G}$ were less than $10^{-5}$. The SDGT1 calculation gave $\mathcal{T}^{G}=8.7\times 10^{-22}$ with $\Sigma_{1}^{G}=\Sigma_{2}^{G}=1.0428$, while the SDGT2 calculation produced $\mathcal{T}^{G}=8.4\times 10^{-22}$ with $\Sigma_{1}^{G}=\Sigma_{2}^{G}=0.99961$. The CPU time required to transform this graph using our implementations of the SDGT1 and SDGT2 methods was $0.76$ and $0.77$ seconds, respectively.

To calculate the rates reliably at temperatures in the interval $\left[0.1,0.3\right]$ we used an implementation of the SDGT2 method compiled with quadruple precision (SDGT2Q); note that the architecture is the same as in the other benchmarks, i.e. with 32-bit wide registers. The points in the blue dataset in Figure 4.16 are the results from four SDGT2Q calculations at temperatures $T\in \left\{0.10,0.35,\dots ,0.75\right\}$.

By using SDGT2Q we were also able to improve the low-temperature results for LJ${}_{38}$ presented in the previous section. The corresponding data are shown in blue in Figure 4.14.

### 4.8 Summary

The most important result of this chapter is probably the graph transformation (GT) method. The method works with a digraph representation of a Markov chain and can be used to calculate the first moment of the distribution of first-passage times, as well as the total transition probabilities, for an arbitrary digraph with sets of sources and sinks that may overlap. The calculation is performed in a non-iterative and non-stochastic manner, and the number of operations is independent of the simulation temperature.
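The elementary GT operation can be sketched as follows: detaching one intermediate node redirects every pathway through that node onto direct edges between its neighbours and accumulates the corresponding mean waiting time. The dictionary-of-edges representation below is a simplified illustration of our reading of the update rule, not the SGT/DGT data structures; here `P[(a, b)]` denotes the branching probability of the $b \to a$ transition and `tau[b]` the mean waiting time in node $b$.

```python
def detach(P, tau, gamma):
    """Remove intermediate node gamma from a Markov chain, redirecting
    its branching probability and waiting time onto its neighbours."""
    loop = P.pop((gamma, gamma), 0.0)          # self-loop on gamma, if any
    factor = 1.0 / (1.0 - loop)
    ins = {a: p for (a, b), p in P.items() if b == gamma}   # gamma -> a
    outs = {b: p for (a, b), p in P.items() if a == gamma}  # b -> gamma
    for b, p_gb in outs.items():               # b -> gamma -> a shortcuts
        tau[b] += p_gb * tau[gamma] * factor
        for a, p_ag in ins.items():
            key = (a, b)
            P[key] = P.get(key, 0.0) + p_ag * p_gb * factor
    for a in ins:                              # erase all edges of gamma
        del P[(a, gamma)]
    for b in outs:
        del P[(gamma, b)]
    del tau[gamma]

# Chain 1 -> 2 -> 3: node 2 returns to 1 with probability 0.5, and each
# visit to nodes 1 and 2 costs unit waiting time; node 3 is the sink.
P = {(2, 1): 1.0, (1, 2): 0.5, (3, 2): 0.5}
tau = {1: 1.0, 2: 1.0, 3: 0.0}
detach(P, tau, 2)
mfpt = tau[1] / (1.0 - P.get((1, 1), 0.0))    # mean first-passage 1 -> 3
```

After detaching node 2 the source acquires a self-loop of weight 0.5 and an accumulated waiting time of 2, giving a mean first-passage time of 4, as a direct solution of the recursion $T = \tau_1 + \tau_2 + \frac{1}{2}T$ confirms.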

We have presented three implementations of the GT algorithm: sparse-optimised (SGT), dense-optimised (DGT), and a hybrid (SDGT) that combines the first two. The SGT method uses a Fibonacci-heap min-priority queue to determine the order in which the intermediate nodes are detached, achieving slower growth of the graph density and, consequently, better performance. SDGT is identical to DGT if the graph is dense. For sparse graphs SDGT performs better than SGT because it switches to DGT when the density of the graph being transformed approaches the maximum. We find that the SDGT method performs well for both sparse and dense graphs. The worst-case asymptotic scaling of SDGT is cubic.
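The switching criterion can be as simple as a density threshold, checked as nodes are detached (the threshold value below is an illustrative assumption, not the one used in our implementation):

```python
def sdgt_should_switch(n_edges, n_nodes, threshold=0.3):
    """Decide whether the partially transformed digraph has become dense
    enough that adjacency-matrix (DGT) bookkeeping will outperform the
    sparse adjacency-list (SGT) representation."""
    max_edges = n_nodes * (n_nodes - 1)   # digraph without self-loops
    return n_nodes > 1 and n_edges / max_edges >= threshold

# Detachment raises the density of the remaining graph, so a run that
# starts sparse eventually trips the threshold and continues as DGT.
switch_now = sdgt_should_switch(90, 10)        # fully dense: switch
stay_sparse = sdgt_should_switch(5, 10)        # ~6% dense: stay with SGT
```

Because each detachment can only add edges among the surviving nodes while the node count falls, the density is non-decreasing, so the switch happens at most once per run.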

We have also suggested two versions of the SDGT method that can be used in calculations where a greater degree of precision is required. The code implementing the SGT, DGT, SDGT, SDGT1 and SDGT2 methods is available for download [274].

The connection between the DPS and KMC approaches was discussed in terms of digraph representations of Markov chains. We showed that rate constants obtained using the KMC or DPS methods can be computed using graph transformation. We have presented applications to the isomerisation of the LJ${}_{38}$ cluster and the internal diffusion in the LJ${}_{55}$ cluster. Using the GT method we were able to calculate rate constants at lower temperatures than was previously possible, and with less computational expense.

We also obtained analytic expressions for the total transition probabilities for arbitrary digraphs in terms of combinatorial sums over pathway ensembles. It is hoped that these results will help in further theoretical pursuits, e.g. those aimed at obtaining higher moments of the distribution of first-passage times for arbitrary Markov chains.

Finally, we showed that the recrossing contribution to the DPS rate constant of a given discrete pathway can be calculated exactly. We presented a comparison between a sparse-optimised matrix multiplication method and a sparse-optimised version of Algorithm B.1, and showed that Algorithm B.1 is preferable because it is many orders of magnitude faster, runs in linear time, and has constant memory requirements.