Next: Graph Transformation Method Up: ENSEMBLES OF REARRANGEMENT PATHWAYS Previous: Chain Graphs   Contents


Complete Graphs

In a complete digraph each pair of nodes is connected by two oppositely directed edges [155]. The complete graph with $ N$ nodes is denoted $ K_N=(V,E)$; it has $ N$ nodes and $ N(N-1)$ directed edges, since every connection contributes two edges (Figure 4.4).

Figure: Complete graphs $ K_2$ and $ K_3$, depicted as subgraphs of a larger graph. Visible sink nodes are shaded.
[Figure: markov/complete_graphs.eps]
Due to the complete connectivity we need only consider two cases: when the starting and finishing nodes are the same and when they are distinct. We employ complete graphs for the purposes of generality. An arbitrary graph $ G_N$ is a subgraph of $ K_N$ with transition probabilities for non-existent edges set to zero. All the results in this section are therefore equally applicable to arbitrary graphs.
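The subgraph embedding described above amounts to padding a transition matrix with zeros. A minimal sketch follows; the 0-based indexing, the `edge_probs` mapping and the probability values are assumptions for illustration, not the thesis's notation:

```python
import numpy as np

def complete_embedding(n, edge_probs):
    """Treat an arbitrary digraph on n nodes as the complete digraph K_n
    by assigning probability zero to every absent edge."""
    P = np.zeros((n, n))
    for (i, j), p in edge_probs.items():
        P[i, j] = p                    # probability of the step i -> j
    escape = 1.0 - P.sum(axis=1)       # what is left over leaks to a sink
    return P, escape

# The chain graph C_3 viewed as a subgraph of K_3: only the edges
# 1 <-> 2 and 2 <-> 3 exist, so P[0, 2] and P[2, 0] stay zero.
P, escape = complete_embedding(3, {(0, 1): 0.7, (1, 0): 0.4,
                                   (1, 2): 0.4, (2, 1): 0.7})
```

Any algorithm written for $ K_N$ then applies unchanged, with the zero-probability edges contributing nothing to any pathway sum.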

The complete graph $ K_2$ will not be considered here as it is topologically identical to the graph $ C_2$. The difference between the $ K_3$ and $ C_3$ graphs is the existence of edges that connect nodes $ 1$ and $ 3$. Pathways confined to $ K_3$ can therefore contain cycles, and for a given path length they are significantly more numerous (Figure 4.5).

Figure: The growth of the number of pathways with the pathway length for $ K_3$ and $ C_3$. The starting node is chosen arbitrarily for $ K_3$, while for $ C_3$ we start at one of the terminal nodes. Any node adjacent to $ K_3$ or $ C_3$ is considered to be a sink, and for simplicity we consider only one escape route from every node. Note the log$ _{10}$ scale on the vertical axis.
[Figure: markov/number_of_paths.eps]
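The path counting behind this comparison can be reproduced with simple matrix powers. The sketch below assumes the convention of the caption above (one escape route per node) and that a pathway of length $ l$ comprises $ l-1$ internal steps followed by the escape step; node labels are 0-based:

```python
import numpy as np

def pathway_counts(adj, start, max_len):
    """Count pathways of length 1..max_len that start at `start`, take
    internal steps along the directed edges in `adj`, then leave through
    the single escape route of the node reached."""
    counts = []
    v = np.zeros(len(adj), dtype=int)
    v[start] = 1                     # one walk of length zero
    for _ in range(max_len):
        counts.append(int(v.sum()))  # each walk escapes in one more step
        v = adj.T @ v                # extend every walk by one internal step
    return counts

K3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
C3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])

print(pathway_counts(K3, 0, 6))  # [1, 2, 4, 8, 16, 32]
print(pathway_counts(C3, 0, 6))  # [1, 1, 2, 2, 4, 4]
```

The $ K_3$ count doubles at every step, while the $ C_3$ count doubles only every other step, which is the gap visible on the logarithmic scale of Figure 4.5.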
The pathway sums $ \mathcal{S}_{\alpha,\beta}^{K_3}$ can again be derived analytically for this graph:

\begin{displaymath}\begin{array}{lll} \mathcal{S}_{1,1}^{K_3} &=& \displaystyle\frac{1-\mW _{2,3,2}}{1-\mW _{1,2,1}-\mW _{2,3,2}-\mW _{1,3,1}-\mW _{1,2,3,1}-\mW _{1,3,2,1}}, \\ \mathcal{S}_{2,1}^{K_3} &=& \displaystyle\frac{P_{2,1}+\mW _{2,3,1}}{1-\mW _{1,2,1}-\mW _{2,3,2}-\mW _{1,3,1}-\mW _{1,2,3,1}-\mW _{1,3,2,1}}. \end{array}\end{displaymath} (4.30)

The results for any other possibility can be obtained by permuting the node indices appropriately.
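The closed form for $ \mathcal{S}_{1,1}^{K_3}$ can be checked against the matrix identity $ \sum_l Q^l = (I-Q)^{-1}$, which accumulates exactly the same pathway weights. In the sketch below the step-probability values are arbitrary, and `Q[i, j]` denotes the probability of the move $ i \to j$ (0-based, the reverse of the thesis's $ P_{\alpha,\beta}$ index order):

```python
import numpy as np

# Step-probability matrix for K_3; row sums are below one, the deficit
# being each node's escape probability.
Q = np.array([[0.0, 0.3, 0.2],
              [0.4, 0.0, 0.3],
              [0.1, 0.5, 0.0]])

# Closed form: every W term is a product of step probabilities along
# the labelled cycle.
W121 = Q[0, 1] * Q[1, 0]             # 1 -> 2 -> 1
W232 = Q[1, 2] * Q[2, 1]             # 2 -> 3 -> 2
W131 = Q[0, 2] * Q[2, 0]             # 1 -> 3 -> 1
W1231 = Q[0, 1] * Q[1, 2] * Q[2, 0]  # 1 -> 2 -> 3 -> 1
W1321 = Q[0, 2] * Q[2, 1] * Q[1, 0]  # 1 -> 3 -> 2 -> 1
S11_closed = (1 - W232) / (1 - W121 - W232 - W131 - W1231 - W1321)

# The same quantity as a geometric series over all closed walks at
# node 1: sum_l (Q^l)[0, 0] = [(I - Q)^{-1}][0, 0].
S11_series = np.linalg.inv(np.eye(3) - Q)[0, 0]

assert abs(S11_closed - S11_series) < 1e-12
```

The agreement holds for any parametrisation with nonzero escape probabilities, since the series then converges and $ I-Q$ is invertible.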

The pathway sums $ \mathcal{S}^{K_N}_{\alpha,\beta}$ for larger complete graphs can be obtained by recursion. For $ \mathcal{S}_{N,N}^{K_N}$ any path leaving from and returning to $ N$ can be broken down into a step out of $ N$ to any $ i<N$, all possible paths between $ i$ and $ j<N$ within $ K_{N-1}$, and finally a step back to $ N$ from $ j$. All such paths can be combined in any order, so we have a multinomial distribution [242]:

\begin{displaymath}\begin{array}{lll} \mathcal{S}_{N,N}^{K_N} &=& \displaystyle \sum_{n=0}^{\infty}\left(\sum_{i=1}^{N-1}\sum_{j=1}^{N-1}P_{N,j}\mathcal{S}_{j,i}^{K_{N-1}}P_{i,N}\right)^{n} \\ &=& \displaystyle \left(1-\sum_{i=1}^{N-1}\sum_{j=1}^{N-1}P_{N,j}\mathcal{S}_{j,i}^{K_{N-1}}P_{i,N}\right)^{-1}. \end{array}\end{displaymath} (4.31)

To evaluate $ \mathcal{S}_{1,N}^{K_N}$ we break down the sum into all paths that depart from and return to $ N$, followed by all paths that leave node $ N$ and reach node 1 without returning to $ N$. The first contribution corresponds to a factor of $ \mathcal{S}_{N,N}^{K_N}$, and the second produces a factor $ P_{i,N}\mathcal{S}_{1,i}^{K_{N-1}}$ for every $ i<N$:

$\displaystyle \mathcal{S}_{1,N}^{K_N} = \mathcal{S}_{N,N}^{K_N}\sum_{i=1}^{N-1} \mathcal{S}_{1,i}^{K_{N-1}}P_{i,N},$ (4.32)

where $ \mathcal{S}_{1,1}^{K_1}$ is defined to be unity. Any other $ \mathcal{S}_{\alpha,\beta}^{K_N}$ can be obtained by a permutation of node labels.
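The recursion above can be transcribed directly into a short routine. This is an exponential-time sketch for checking purposes, not the optimised Algorithm B.2; the 0-based labels and the convention `Q[i, j]` for the step $ i \to j$ (the reverse of the thesis's $ P_{\alpha,\beta}$ order) are assumptions:

```python
import numpy as np

def loop_sum(Q):
    """S_{N,N}: sum over all closed walks beginning and ending at the
    last node of Q."""
    n = len(Q)
    if n == 1:
        return 1.0                       # S_{1,1}^{K_1} is defined as unity
    sub = Q[:-1, :-1]
    # One excursion: step out to i, any walk i -> j avoiding the last
    # node, step back from j; excursions combine as a geometric series.
    total = sum(Q[n - 1, i] * path_sum(sub, j, i) * Q[j, n - 1]
                for i in range(n - 1) for j in range(n - 1))
    return 1.0 / (1.0 - total)

def path_sum(Q, a, b):
    """S_{a,b}: sum over all walks from node b to node a, obtained by
    relabelling so that b plays the role of node N."""
    n = len(Q)
    perm = [k for k in range(n) if k != b] + [b]
    Qp = Q[np.ix_(perm, perm)]
    if a == b:
        return loop_sum(Qp)
    sub = Qp[:-1, :-1]
    ap = perm.index(a)
    # Closed walks at b, then a final departure b -> i, then any walk
    # i -> a that never revisits b.
    return loop_sum(Qp) * sum(path_sum(sub, ap, i) * Qp[n - 1, i]
                              for i in range(n - 1))

rng = np.random.default_rng(0)
Q = rng.uniform(0.05, 0.2, size=(4, 4))
np.fill_diagonal(Q, 0.0)                 # K_4 with escape from every node

# Every S_{a,b} must agree with [(I - Q)^{-1}]_{b,a}, which sums the
# same walk weights.
S_exact = np.linalg.inv(np.eye(4) - Q)
assert all(abs(path_sum(Q, a, b) - S_exact[b, a]) < 1e-10
           for a in range(4) for b in range(4))
```

The nested recursion recomputes subgraph sums many times, which is the source of the exponential scaling discussed below; the graph transformation method of Section 4.4 avoids this repetition.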

Algorithm B.2 provides an example implementation of the above formulae optimised for incomplete graphs. The running time of Algorithm B.2 depends strongly on the graph density. (A digraph in which the number of edges is close to the maximum value of $ N(N-1)$ is termed a dense digraph [202].) For $ K_N$ the algorithm runs in $ \mathcal{O}(N^{2N})$ time, while for an arbitrary graph it scales as $ \mathcal{O}(d^{2N})$, where $ d$ is the average degree of the nodes. For chain graphs the algorithm runs in $ \mathcal{O}(N)$ time and therefore constitutes a recursive-function-based alternative to Algorithm B.1 with linear memory requirements. For complete graphs an alternative implementation with $ \mathcal{O}((N!)^2)$ scaling is also possible.

Although the scaling of the above algorithm with $ N$ may appear disastrous, Algorithm B.2 does in fact run faster than the standard KMC and MM approaches for graphs where the escape probabilities are several orders of magnitude smaller than the transition probabilities. Otherwise, for anything but moderately branched chain graphs, Algorithm B.2 is significantly more expensive. However, the graph-transformation-based method presented in Section 4.4 yields both the pathway sums and the mean escape times for a complete graph $ K_N$ in $ \mathcal{O}(N^3)$ time, and is the fastest approach that we have found.

Mean escape times for $ K_3$ are readily obtained from the results in Equation 4.30 by applying the method outlined in Section 4.1.5:

$\displaystyle \mT ^{K_3}_1 = \dfrac{ \tau_1 (1-\mW _{2,3,2}) + \tau_2 (P_{2,1}+\mW _{2,3,1}) + \tau_3 (P_{3,1}+\mW _{3,2,1}) }{ 1-\mW _{1,2,1}-\mW _{2,3,2}-\mW _{3,1,3}-\mW _{1,2,3,1}-\mW _{1,3,2,1} }.$ (4.33)

We have verified this result analytically using first-step analysis, and numerically for various values of the parameters $ \tau_i$ and $ P_{\alpha,\beta}$, obtaining quantitative agreement (see Figure 4.6).
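The first-step check takes only a few lines. In the sketch below the numbers are illustrative, and `Q[i, j]` denotes the probability of the step $ i \to j$ with 0-based labels, so for example `Q[0, 1]` corresponds to the thesis's $ P_{2,1}$:

```python
import numpy as np

# Step probabilities for K_3 and the waiting time tau[i] at each node.
Q = np.array([[0.0, 0.3, 0.2],
              [0.4, 0.0, 0.3],
              [0.1, 0.5, 0.0]])
tau = np.array([1.0, 2.0, 0.5])

# Closed-form mean escape time from node 1, assembled term by term.
W121 = Q[0, 1] * Q[1, 0]; W232 = Q[1, 2] * Q[2, 1]; W313 = Q[0, 2] * Q[2, 0]
W1231 = Q[0, 1] * Q[1, 2] * Q[2, 0]; W1321 = Q[0, 2] * Q[2, 1] * Q[1, 0]
numerator = (tau[0] * (1 - W232)
             + tau[1] * (Q[0, 1] + Q[0, 2] * Q[2, 1])   # P_{2,1} + W_{2,3,1}
             + tau[2] * (Q[0, 2] + Q[0, 1] * Q[1, 2]))  # P_{3,1} + W_{3,2,1}
T1_closed = numerator / (1 - W121 - W232 - W313 - W1231 - W1321)

# First-step analysis: T_i = tau_i + sum_j Q[i, j] T_j, i.e. (I - Q) T = tau.
T1_first_step = np.linalg.solve(np.eye(3) - Q, tau)[0]

assert abs(T1_closed - T1_first_step) < 1e-12
```

Solving the first-step linear system is exactly what the closed form expresses in cofactor form, so the two values coincide to machine precision.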
Figure: Mean escape time from $ K_3$ as a function of the escape probability $ \mathcal{E}$. The transition probabilities for the $ K_3$ graph are parametrised by $ \mathcal{E}$ for simplicity: $ P_{i,j}=(1-\mathcal{E})/2$ for all $ i,j \in \{1,2,3\}$ and $ \mathcal{E}_{1}=\mathcal{E}_{2}=\mathcal{E}_{3}=\mathcal{E}$. Kinetic Monte Carlo data (triangles) were obtained by averaging over $ 100$ trajectories for each of 33 parametrisations. The solid line is the exact solution obtained using Equation 4.33. The units of $ \mathcal{T}^{K_3}$ are arbitrary. Note the log$ _{10}$ scale on the vertical axis.
[Figure: markov/K3_waiting_time.eps; vertical axis $\log_{10}\mathcal{T}^{K_3}$]
Figure 4.7 demonstrates how the advantage of exact summation over KMC and MM becomes more pronounced as the escape probabilities become smaller.
Figure: The computational cost of the kinetic Monte Carlo and matrix multiplication methods as a function of escape probability for $ K_3$ (see caption to Figure 4.6 for the definition of $ \mathcal{E}$). $ M$ is the number of matrix multiplications required to converge the value of the total probability of getting from node $ 1$ to nodes $ 1$, $ 2$ and $ 3$: the calculation was terminated when the change in the total probability between iterations was less than $ 10^{-3}$. The number of matrix multiplications $ M$ and the average trajectory length $ \left <l\right >$ can be used as a measure of the computational cost of the matrix multiplication and kinetic Monte Carlo approaches, respectively. The computational requirements of the exact summation method are independent of $ \mathcal{E}$. Note the log$ _{10}$ scale on the vertical axis.
[Figure: markov/K3_computational_cost.eps; vertical axes $\log_{10}M$ and $\log_{10}\left<l\right>$]


Semen A Trygubenko 2006-04-10