

The Kinetic Monte Carlo Method

The KMC method can be used to generate a memoryless (Markovian) random walk and hence a set of trajectories connecting initial and final states in a DPS database. Many trajectories are necessary to collect appropriate statistics. Examples of pathway averages that are usually obtained with KMC are the mean path length and the mean first passage time. Here the KMC trajectory length is the number of states (local minima of the PES in the current context) that the walker encounters before reaching the final state. The first passage time is defined as the time that elapses before the walker reaches the final state. For a given KMC trajectory the first passage time is calculated as the sum of the mean waiting times in each of the states encountered.
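As a minimal sketch of such a memoryless walk, the following propagates one KMC trajectory given branching probabilities and mean waiting times, returning the path length and the first passage time as the sum of the mean waiting times encountered. The three-state chain, the probabilities and the waiting times are all hypothetical values chosen for illustration:

```python
import random

# Hypothetical three-state chain: state 0 is the initial state, state 2 is
# absorbing. P[i] lists (next state, probability) pairs out of state i, and
# tau[i] is the mean waiting time in state i; all numbers are illustrative.
P = {0: [(1, 0.7), (0, 0.3)], 1: [(2, 0.5), (0, 0.5)]}
tau = {0: 1.0, 1: 2.0}

def kmc_trajectory(start=0, absorbing={2}, rng=random.random):
    """Return (path length, first passage time) for one KMC trajectory."""
    state, length, time = start, 0, 0.0
    while state not in absorbing:
        time += tau[state]           # accumulate the mean waiting time
        r, acc = rng(), 0.0
        for nxt, p in P[state]:      # draw the next state from P[state]
            acc += p
            if r <= acc:
                state = nxt
                break
        length += 1
    return length, time
```

Averaging `length` and `time` over many trajectories yields the mean path length and mean first passage time mentioned above.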

Within the canonical Metropolis Monte Carlo approach a step is always taken if the proposed move lowers the energy, while steps that raise the energy are accepted with a probability that decreases with the energy difference between the current and proposed states [32]. An efficient way to propagate KMC trajectories was suggested by Bortz, Kalos, and Lebowitz (BKL) [190]. According to the BKL algorithm, a step is chosen in such a way that the ratios between transition probabilities of different events are preserved, but rejections are eliminated. Figure 4.1 explains this approach for a simple discrete-time Markov chain. The evolution of an ordinary KMC trajectory is monitored by the `time' parameter $ n$, $ n\in\mathbb{W}$, which is incremented by one every time a transition is made from any state. The random walker is in state $ 1$ at time $ n=0$. The KMC trajectory is terminated whenever an absorbing state is encountered. As $ P_{1,1}$ approaches unity, transitions out of state $ 1$ become rare. To ensure that a move to a neighbouring state is made every time a random number is generated (one of the most time-consuming steps in a KMC calculation), we average over the transitions from state $ 1$ to itself to obtain the Markov chain depicted in Figure 4.1 (b).

Figure: BKL algorithm for propagating a KMC trajectory applied to a three-state Markov chain. (a) The transition state diagram is shown where states and transitions are represented by circles and directed arrows, respectively. The Markov chain is parametrised by transition probabilities $ P_{\alpha ,1}$, $ P_{\beta ,1}$ and $ P_{1,1}$. Absorbing states $ \alpha $ and $ \beta $ are shaded. If $ P_{1,1}$ is close to unity the KMC trajectory is likely to revisit state $ 1$ many times before going to $ \alpha $ or $ \beta $. (b) State $ 1$ is replaced with state $ 1'$. The new Markov chain is parametrised by transition probabilities $ P_{\alpha ,1'}$, $ P_{\beta ,1'}$ and the average time for escape from state $ 1$, $ \tau _1$. Transitions from state $ 1'$ to itself are forbidden. Every time state $ 1'$ is visited the simulation `clock' is incremented by $ \tau _1$.
[Figure 4.1: markov/BKL.eps]
Transitions from state $ 1$ to itself can be modelled by a Bernoulli process [33] with the probability of success equal to $ P_{1,1}$. The average time for escape from state $ 1$ is obtained as

$\displaystyle \tau_1 = (1-P_{1,1})\sum_{n=0}^{\infty} (n+1) (P_{1,1})^n = \dfrac{1}{1-P_{1,1}},$ (6.2)

which can be used as a measure of the efficiency of trapping [203]. Transition probabilities out of state $ 1$ are renormalised:

\begin{displaymath}\begin{array}{lll} P_{\alpha,1'} &=& \dfrac{P_{\alpha,1}}{1-P_{1,1}}, \\ P_{\beta,1'} &=& \dfrac{P_{\beta ,1}}{1-P_{1,1}}. \end{array}\end{displaymath} (6.3)
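The escape time of Equation (6.2) and the renormalisation of Equation (6.3) can be sketched in a few lines; the state labels below are placeholders supplied by the caller:

```python
def bkl_renormalise(p_self, p_out):
    """Remove self-transitions from a state, as in Equations (6.2) and (6.3).

    p_self is the self-transition probability P_{1,1}; p_out maps each escape
    route to its transition probability. Returns the mean escape time tau_1
    and the renormalised escape probabilities, which sum to one.
    """
    tau = 1.0 / (1.0 - p_self)                                   # Equation (6.2)
    renorm = {s: p / (1.0 - p_self) for s, p in p_out.items()}   # Equation (6.3)
    return tau, renorm
```

Every visit to the renormalised state then advances the simulation clock by `tau` instead of replaying many rejected self-transitions.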

Similar ideas underlie the accelerated Monte Carlo algorithm suggested by Novotny [26]. According to this `Monte Carlo with absorbing Markov chains' (MCAMC) method, at every step a Markov matrix, $ {\bf P}$, is formed, which describes the transitions in a subspace $ S$ that contains the current state $ \alpha$ and a set of adjacent states that the random walker is likely to visit from $ \alpha$. A trajectory length, $ n$, for escape from $ S$ is obtained by bracketing a uniformly distributed random variable, $ r$, as

$\displaystyle \sum_{\beta} \left[{\bf P}^{n}\right]_{\beta,\alpha} < r \leqslant \sum_{\beta} \left[{\bf P}^{n-1}\right]_{\beta,\alpha}.$ (6.4)

Then an $ n$-step leapfrog move is performed to one of the states $ \gamma \notin S$ and the simulation clock is incremented by $ n$. State $ \gamma $ is chosen at random with probability

$\displaystyle \left[{\bf R}{\bf P}^{n-1}\right]_{\gamma,\alpha}/ \sum_{\gamma} \left[{\bf R}{\bf P}^{n-1}\right]_{\gamma,\alpha},$ (6.5)

where $ R_{\gamma,\alpha}$ is the transition probability from state $ \alpha \in S$ to state $ \gamma \notin S$. Both the BKL and MCAMC methods can be many orders of magnitude faster than the standard KMC method when kinetic traps are present.
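The bracketing condition of Equation (6.4) can be sketched as follows, where the column sums of $ {\bf P}^{n}$ give the probability of remaining in $ S$ after $ n$ steps. This is an illustrative sketch, not an implementation from the original work:

```python
import numpy as np

def mcamc_escape_length(P_S, alpha, r):
    """Bracket the escape trajectory length n as in Equation (6.4).

    P_S is the transition matrix restricted to the transient subspace S
    (columns index the current state), alpha is the index of the current
    state within S, and r is a uniform random number on (0, 1].
    """
    survive_prev = 1.0                    # sum_beta [P^0]_{beta,alpha} = 1
    col = np.eye(len(P_S))[:, alpha]      # column alpha of the identity
    n = 0
    while True:
        n += 1
        col = P_S @ col                   # column alpha of P^n
        survive = col.sum()               # probability of still being in S
        if survive < r <= survive_prev:
            return n
        survive_prev = survive
```

For a single trapped state with self-probability $ p$ the survival probability after $ n$ steps is $ p^n$, so the returned $ n$ satisfies $ p^{n} < r \leqslant p^{n-1}$, in agreement with the Bernoulli picture above.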

In chemical kinetics transitions out of a state are described using a Poisson process, which can be considered a continuous-time analogue of Bernoulli trials. The transition probabilities are determined from the rates of the underlying transitions as

$\displaystyle P_{j,i} = \dfrac{k_{j,i}}{\displaystyle \sum_{\alpha} k_{\alpha,i}}.$ (6.6)
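Equation (6.6), together with the exponentially distributed waiting time generated by the combined Poisson process, can be sketched as below; the rate values are hypothetical inputs supplied by the caller:

```python
import math
import random

def transition_probabilities(rates):
    """Branching probabilities P_{j,i} = k_{j,i} / sum_alpha k_{alpha,i} (Eq. 6.6).

    rates maps each destination state j to the rate constant k_{j,i} out of
    the current state i. Returns the probabilities and the total escape rate.
    """
    total = sum(rates.values())
    return {j: k / total for j, k in rates.items()}, total

def sample_waiting_time(total_rate, rng=random.random):
    """Draw a waiting time from the density k exp(-kt) by inversion."""
    return -math.log(rng()) / total_rate
```

The total rate returned here is the rate $ k=\sum_i k_i$ of the combined Poisson process discussed next.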

There may be several escape routes from a given state. Transitions from any state to directly connected states are treated as competing independent Poisson processes, which together generate a new Poisson distribution [179]. $ n$ independent Poisson processes with rates $ k_1,\ k_2,\ k_3,\dots,\ k_n$ combine to produce a Poisson process with rate $ k=\sum_{i=1}^{n} k_i$. The waiting time for a transition to occur to any connected state is then exponentially distributed as $ k\exp(-kt)$ [204].

Given the exponential distribution of waiting times the mean waiting time in state $ i$ before escape, $ \tau_i$, is $ 1/\sum_j k_{j,i}$ and the variance of the waiting time is simply $ \tau_i^2$. Here $ k_{j,i}$ is the rate constant for transitions from $ i$ to $ j$. When the average of the distribution of times is the property of interest, and not the distribution of times itself, it is sufficient to increment the simulation time by the mean waiting time rather than by a value drawn from the appropriate distribution [9,205]. This modification to the original KMC formulation [207,206] reduces the cost of the method and accelerates the convergence of KMC averages without affecting the results.
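A single KMC step under this modification can be sketched as follows: the clock is advanced by the mean waiting time $ \tau_i = 1/\sum_j k_{j,i}$ rather than by a draw from the exponential distribution. The rate values are again hypothetical caller-supplied inputs:

```python
import random

def kmc_step_mean_time(rates, rng=random.random):
    """One KMC step advancing the clock by the mean waiting time.

    rates maps each destination state j to k_{j,i} for the current state i.
    Returns (next state, time increment tau_i = 1 / sum_j k_{j,i}).
    """
    total = sum(rates.values())
    tau = 1.0 / total                  # mean waiting time; variance is tau**2
    r, acc = rng() * total, 0.0
    for state, k in rates.items():     # select the escape route with P = k/total
        acc += k
        if r <= acc:
            return state, tau
    return state, tau                  # guard against floating-point rounding
```

Only averages over many trajectories are unchanged by this replacement; the distribution of individual first passage times is not reproduced.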


Semen A Trygubenko 2006-04-10