The Gamma Method

Here we sketch the so-called Gamma method; more details can be found in Ref. [1], which forms the basis for many of the routines implemented in this library.

Autocorrelated data

Suppose we have a generic set of primary observables \(\{A_\alpha\}\) measured on \(R\) replicas of the same ensemble, each replica \(r\) consisting of \(N_r\) configurations labeled by the index \(i\).

We denote by \(N\) the total number of configurations per ensemble, \(N=\sum_r N_r\). The measurements are labeled \(a_{\alpha}^{i,r}\), and their mean values and fluctuations are defined as

\[\bar a_\alpha^r = \frac{1}{N_r} \sum_{i=1}^{N_r} a_\alpha^{i,r} \quad \bar{\bar a}_\alpha = \frac{1}{N} \sum_{r=1}^R N_r \bar a_\alpha^r = \frac{1}{N} \sum_{r=1}^R \sum_{i=1}^{N_r} a_\alpha^{i,r} \quad \quad \delta a_\alpha^{i,r} = (a_\alpha^{i,r} - A_\alpha)\]
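The definitions above can be sketched in a few lines of Python (using NumPy; the variable names and data layout are illustrative, not this library's API):

```python
import numpy as np

# measurements a_alpha^{i,r} for one observable, as a list of per-replica arrays
replicas = [np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0])]

N = sum(len(a) for a in replicas)             # total configurations, N = sum_r N_r
abar_r = [a.mean() for a in replicas]         # per-replica means  \bar a^r
abarbar = sum(a.sum() for a in replicas) / N  # grand mean  \bar{\bar a}
```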

The autocorrelation function is defined as

\[\langle \delta a_\alpha^{i,r} \ \delta a_\beta^{j,s} \rangle = \delta_{rs} \Gamma_{\alpha \beta} (j-i)\]

At the practical level the autocorrelation function is computed as

\[\Gamma_{\alpha \beta} (t) = \frac{1}{N-Rt} \sum_{r=1}^R \sum_{i=1}^{N_r -t} (a_\alpha^{i,r} - \bar{\bar a}_\alpha)(a_\beta^{i+t,r} - \bar{\bar a}_\beta) + O(1/N)\]
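For a single observable (\(\alpha=\beta\)) this estimator can be coded directly; the following is a minimal Python/NumPy sketch, not the library's actual routine:

```python
import numpy as np

def gamma(replicas, t, mean):
    """Estimator of Gamma(t) for one observable: fluctuations around the
    grand mean, summed over all replicas, normalized by 1/(N - R*t)."""
    N = sum(len(a) for a in replicas)
    R = len(replicas)
    num = 0.0
    for a in replicas:
        d = a - mean                            # fluctuations around \bar{\bar a}
        num += np.sum(d[: len(a) - t] * d[t:])  # lag-t products within the replica
    return num / (N - R * t)
```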

To be as general as possible we also need to account for situations where the measurements are taken on irregular Monte Carlo series, e.g. with holes and missing configurations. For this reason it is convenient to consider the normalized fluctuations

\[\begin{split}\pi_\alpha^{i,r} = \left\lbrace \begin{array}{ll} 0 & \mbox{if } a_\alpha^{i,r} \mbox{ on config } i \mbox{ is missing} \\ (a_\alpha^{i,r} - \bar{\bar a}_\alpha) & \mbox{otherwise} \end{array} \right. \quad\end{split}\]

such that the time series given by \(\pi_\alpha^{i,r}\) is now contiguous over \(N_r\) configurations, including the zero fluctuations (\(N_m\) being the number of available measurements).

To take into account the holes in the time series we must change the weight factor \(1/(N-Rt)\) accordingly. To do so we consider the following quantity

\[\begin{split}\hat \pi_\alpha^{i,r} = \left\lbrace \begin{array}{ll} 0 & \mbox{if } a_\alpha^{i,r} \mbox{ on config } i \mbox{ is missing} \\ 1 & \mbox{otherwise} \end{array} \right. \quad\end{split}\]

The calculation of the autocorrelation function therefore becomes

\[\Gamma_{\alpha \beta} (t) = \frac{ \sum_{r=1}^R \sum_{i=1}^{N_r -t} \pi_\alpha^{i,r} \pi_\beta^{i+t,r} }{ \sum_{r=1}^R \sum_{i=1}^{N_r -t} \hat \pi_\alpha^{i,r} \hat \pi_\beta^{i+t,r} }\]
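A minimal sketch of this hole-aware estimator in Python/NumPy (names are ours, not the library's): the zero-padded fluctuations \(\pi\) enter the numerator, and the 0/1 masks \(\hat\pi\) provide the correct normalization in the denominator.

```python
import numpy as np

def gamma_holes(pi, pihat, t):
    """Gamma(t) from zero-padded fluctuations `pi` and 0/1 masks `pihat`,
    each a list of per-replica arrays (here for the case alpha = beta)."""
    num = sum(np.sum(p[: len(p) - t] * p[t:]) for p in pi)
    den = sum(np.sum(h[: len(h) - t] * h[t:]) for h in pihat)
    return num / den
```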

and its integral defines the covariance matrix

\[C_{\alpha \beta} = \sum_{t=-\infty}^\infty \Gamma_{\alpha \beta} (t)\]

The error of a given observable can be read off from the diagonal elements of the covariance matrix, which can be re-expressed in terms of the integrated autocorrelation time

\[\sigma^2 (a_\alpha) = \frac{C_{\alpha \alpha}}{N} = \frac{ 2 \tau_{\mathrm{int},\alpha} }{N} \Gamma_{\alpha \alpha}(0)\]
\[\tau_{\mathrm{int},\alpha} = \frac{1}{2 \Gamma_{\alpha \alpha}(0)} \sum_{t=-\infty}^\infty \Gamma_{\alpha \alpha} (t)\]
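Truncating the sums at a window \(W\) and using the symmetry \(\Gamma(-t)=\Gamma(t)\), these two formulas translate into the following sketch (Python/NumPy, illustrative only):

```python
import numpy as np

def tau_int(gamma_t):
    """Integrated autocorrelation time from Gamma(t), t = 0..W, using
    Gamma(-t) = Gamma(t): tau_int = 1/2 + sum_{t>=1} Gamma(t) / Gamma(0)."""
    return 0.5 + np.sum(gamma_t[1:]) / gamma_t[0]

def sigma2(gamma_t, N):
    """Error squared: sigma^2 = 2 tau_int / N * Gamma(0) = C / N."""
    return 2.0 * tau_int(gamma_t) * gamma_t[0] / N
```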

It is often convenient to think in terms of the normalized autocorrelation function \(\rho(t) = \Gamma(t) / \Gamma(0)\), whose error can be computed according to Ref. [2] (for the case \(\alpha=\beta\))

\[\mathrm{var} \rho (t) \simeq \frac{1}{N} \sum_{k=1}^\infty \big[ \rho(k+t) + \rho(k-t) - 2 \rho(k) \rho(t) \big]^2\]
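In practice the sum over \(k\) must be truncated; here is a sketch of this variance formula in plain Python, with a hypothetical cutoff parameter `lam` of our own choosing, and using \(\rho(-t)=\rho(t)\) for the \(k<t\) terms:

```python
def var_rho(rho, t, N, lam):
    """Estimate of var rho(t), truncating the sum at k = lam.
    `rho` is indexed by |t|, so rho[abs(k - t)] implements rho(k - t)."""
    s = 0.0
    for k in range(1, lam + 1):
        rkpt = rho[k + t] if k + t < len(rho) else 0.0  # rho beyond the data ~ 0
        rkmt = rho[abs(k - t)]                          # rho(k - t) by symmetry
        s += (rkpt + rkmt - 2.0 * rho[k] * rho[t]) ** 2
    return s / N
```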

Note that for a derived observable, \(F(\{a\})\), the same equations hold with the only difference being in the fluctuations

\[\pi_F^{i,r} \equiv \sum_\alpha \frac{\partial F(\{a\})}{\partial a_\alpha} \bigg|_{a=\bar{\bar a}} \pi_\alpha^{i,r}\]
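For instance, for a ratio \(F = a_0/a_1\) the chain rule above amounts to the following (Python/NumPy sketch; the numbers are made up for illustration):

```python
import numpy as np

abarbar = np.array([2.0, 4.0])                      # grand means of the primaries
grad = np.array([1.0 / abarbar[1],                  # dF/da_0 at the mean
                 -abarbar[0] / abarbar[1] ** 2])    # dF/da_1 at the mean

# pi[alpha] holds the fluctuation series of primary observable alpha
pi = np.array([[0.1, -0.1, 0.0],
               [0.2,  0.0, -0.2]])

pi_F = grad @ pi    # sum_alpha dF/da_alpha * pi_alpha
```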

Automatic Window

In practice, with a finite number of configurations, it is not possible to sum the autocorrelation function up to infinity. Moreover, the noise in \(\Gamma(t)\) grows with \(t\). As a compromise, an optimal summation window \(W\) is chosen such that the error estimate remains reliable.

In fact, it is possible to demonstrate that \(\Gamma\) is expected to be a sum of exponentials whose coefficients depend on the observable and whose arguments depend instead on the Markov transition matrix.

The automatic windowing procedure introduced in Ref. [1] defines a criterion for the optimal window: \(W\) is chosen as the first value for which \(g(W)\) becomes negative

\[g(W) = \exp [-W / \tau_W] - \tau_W /\sqrt{W N}\]
\[\tau_W = S_\tau \bigg[ \ln \bigg( \frac{\sum_{|t| \leq W} \rho(t) + 1}{\sum_{|t| \leq W} \rho(t) - 1} \bigg) \bigg]^{-1}\]

A standard choice for the parameter \(S_\tau\) is 1.5.
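The windowing procedure can be sketched as follows (Python; we write the argument of the logarithm as \(2\tau_{\mathrm{int}}(W) \pm 1\) with \(\tau_{\mathrm{int}}(W) = 1/2 + \sum_{t=1}^{W}\rho(t)\), which matches the formula above when the sum over \(\rho\) is read as running over \(|t| \leq W\); the guard value for a noisy, non-positive sum is our own choice):

```python
import math

def optimal_window(rho, N, S=1.5):
    """Return the first W for which g(W) = exp(-W/tau_W) - tau_W/sqrt(W*N) < 0."""
    tint = 0.5                       # running tau_int(W) = 1/2 + sum_{t=1}^W rho(t)
    for W in range(1, len(rho)):
        tint += rho[W]
        if tint <= 0.5:
            tau = 1e-8               # guard against a noisy, non-positive sum
        else:
            tau = S / math.log((2.0 * tint + 1.0) / (2.0 * tint - 1.0))
        if math.exp(-W / tau) - tau / math.sqrt(W * N) < 0.0:
            return W
    return len(rho) - 1              # no sign change found: fall back to max W
```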

Exponential tails

To be implemented soon; see Ref. [3].

References