
I’ve written a new implementation of the NNGP, this time not requiring the \(N \times N\) matrices.
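
As a reminder of why that's possible (this is the standard NNGP factorization; the exact parameterization inside nngp2.R may differ), write \(C\) for the covariance matrix of \(z\), and \(N(i)\) for the set of (at most) \(k\) nearest neighbors of location \(i\) among locations \(1, \ldots, i-1\). The joint density is approximated by a product of small conditionals,

\[
p(z_1, \ldots, z_N) \;\approx\; \prod_{i=1}^{N} p\big(z_i \mid z_{N(i)}\big),
\qquad
z_i \mid z_{N(i)} \;\sim\; \mathcal{N}\big(\mu + A_i (z_{N(i)} - \mu),\; D_i\big),
\]

where \(A_i = C_{i,N(i)}\, C_{N(i),N(i)}^{-1}\) and \(D_i = C_{ii} - A_i\, C_{N(i),i}\). Each \(A_i\) has length at most \(k\) and each \(D_i\) is a scalar, so everything worth storing fits in \(N \times k\) arrays.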

This new implementation can be viewed here: nngp2.R.

The NNGP and GP model code is also in the nngp2.R file.
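
For concreteness, here is a plain-R sketch of how that factorized log-density can be evaluated. This is an illustration of the technique only, not the nimble code in nngp2.R; the exponential covariance \(\sigma^2 e^{-d/\psi}\) with nugget \(\tau^2\), and all of the names below, are assumptions rather than actual implementation details.

## Illustrative sketch only -- not the actual code in nngp2.R.
## Assumes a response-NNGP with exponential covariance plus nugget:
##   Cov(z[i], z[j]) = sigma^2 * exp(-dst[i,j] / psi) + tau^2 * (i == j)
nngpLogDens <- function(z, mu, sigma, psi, tau, dst, nID, k) {
    N <- length(z)
    expCov <- function(d) sigma^2 * exp(-d / psi)
    ## first location: plain marginal normal
    ll <- dnorm(z[1], mu, sqrt(sigma^2 + tau^2), log = TRUE)
    for(i in 2:N) {
        m <- min(k, i - 1)                 # early locations have < k neighbors
        nb <- nID[i, 1:m]                  # neighbor indices
        Cnn <- expCov(dst[nb, nb, drop = FALSE]) + diag(tau^2, m)
        Cin <- expCov(dst[i, nb])
        A <- solve(Cnn, Cin)               # one k x k solve, nothing N x N
        condMean <- mu + sum(A * (z[nb] - mu))
        condVar  <- sigma^2 + tau^2 - sum(A * Cin)
        ll <- ll + dnorm(z[i], condMean, sqrt(condVar), log = TRUE)
    }
    ll
}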

library(nimble)
library(MASS)
library(coda)
library(basicMCMCplots)   ## for chainsPlot, used below

nimbleOptions(verbose = FALSE)

source('../code/nngp2.R')

We’ll look at the same “large” model, and an even larger one.


Large Model

Using \(N=2000\) observed spatial locations, now with \(k=15\) nearest neighbors.

Using \(\sigma=3\), \(\tau=0.3\), and \(\psi=20\) for data generation.

N <- 2000 
k <- 15

input <- generateData(N, sigma=3, tau=.3, psi=20, smax=200, seed = 0) 
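
(generateData comes from nngp2.R and isn’t reproduced here. Purely for orientation, a hypothetical sketch of its shape; the uniform locations, exponential covariance, and zero mean are assumptions, though MASS is presumably loaded above for exactly this kind of mvrnorm draw.)

## Hypothetical sketch of generateData (the real one is in nngp2.R).
## Assumed: locations uniform on [0, smax]^2, exponential covariance with
## range psi, nugget tau, and mean zero.
generateDataSketch <- function(N, sigma, tau, psi, smax, seed) {
    set.seed(seed)
    s <- cbind(runif(N, 0, smax), runif(N, 0, smax))    # random 2-D locations
    dst <- unname(as.matrix(dist(s)))                   # N x N distance matrix
    C <- sigma^2 * exp(-dst / psi) + tau^2 * diag(N)    # covariance + nugget
    z <- MASS::mvrnorm(1, mu = rep(0, N), Sigma = C)    # one realization of z
    list(s = s, dst = dst, z = z)
}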

nID <- determineNeighbors(input$dst, k) 
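
(determineNeighbors also lives in nngp2.R. Conceptually it just returns, for each location, the indices of its closest predecessors in the ordering; a minimal sketch of that idea, with the name and details assumed:)

## Minimal sketch of the neighbor lookup (the real determineNeighbors is in
## nngp2.R). For each i, keep the k closest locations among 1..(i-1), since
## the NNGP conditions each point only on earlier-ordered points.
determineNeighborsSketch <- function(dst, k) {
    N <- nrow(dst)
    nID <- matrix(0, nrow = N, ncol = k)
    for(i in 2:N) {
        m <- min(k, i - 1)
        nID[i, 1:m] <- order(dst[i, 1:(i - 1)])[1:m]
    }
    nID
}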

constants <- list(N = N, k = k, ones = rep(1,N), dst = input$dst, nID = nID) 
data <- list(z = input$z) 
inits <- list(mu = 0, tau = 1, sigma = 1, psi = 1) 

niter <- 1000

time_NNGP <- system.time(
    samples_NNGP <- nimbleMCMC(code_NNGP, constants, data, inits,
                               niter = niter, setSeed = TRUE, progressBar = FALSE)
)

time_GP <- system.time(
    samples_GP <- nimbleMCMC(code_GP, constants, data, inits,
                             niter = niter, setSeed = TRUE, progressBar = FALSE)
)
## quick graphical look at posterior medians and CIs 
samplesList <- list(NNGP = samples_NNGP, GP = samples_GP) 
chainsPlot(samplesList, 'psi', nrows=1, jitter=.03, buffer.left=.3, buffer.right=.3) 

chainsPlot(samplesList, c('mu', 'sigma'), nrows=1, jitter=.1, buffer.left=.3, buffer.right=.3) 

chainsPlot(samplesList, 'tau', nrows=1, jitter=.03, buffer.left=.3, buffer.right=.3) 

Timing and Effective Sample Size

We compare raw runtime, effective sample size (ESS), and efficiency (ESS/runtime).

Raw effective sample size:

ess_NNGP <- apply(samples_NNGP, 2, effectiveSize) 
ess_NNGP 
##         mu        psi      sigma        tau 
## 190.173425   6.391096   5.162342 197.135110
ess_GP <- apply(samples_GP, 2, effectiveSize) 
ess_GP 
##         mu        psi      sigma        tau 
## 155.419679   7.000159   6.183347 199.075087

Runtime (shown in minutes):

time_NNGP[3] / 60
##  elapsed 
## 2.721083
time_GP[3] / 60
## elapsed 
## 31.7231

Efficiency (number of effectively independent posterior samples generated per second of algorithm runtime):

eff_NNGP <- ess_NNGP / time_NNGP[3]
eff_NNGP 
##         mu        psi      sigma        tau 
## 1.16481441 0.03914554 0.03161941 1.20745481
eff_GP <- ess_GP / time_GP[3]
eff_GP 
##          mu         psi       sigma         tau 
## 0.081654315 0.003677740 0.003248604 0.104589971

Finally, the ratio of efficiencies, NNGP/GP. That is, how many times more efficient is the NNGP relative to the full GP model?

round(eff_NNGP / eff_GP, 2) 
##    mu   psi sigma   tau 
## 14.27 10.64  9.73 11.54


Largest Model

Using \(N=5000\) observed spatial locations, still with \(k=15\) nearest neighbors.

Again using \(\sigma=3\), \(\tau=0.3\), and \(\psi=20\) for data generation.

N <- 5000
k <- 15

input <- generateData(N, sigma=3, tau=.3, psi=20, smax=200, seed = 0) 

nID <- determineNeighbors(input$dst, k) 

constants <- list(N = N, k = k, ones = rep(1,N), dst = input$dst, nID = nID) 
data <- list(z = input$z) 
inits <- list(mu = 0, tau = 1, sigma = 1, psi = 1) 

niter <- 1000

time_NNGP <- system.time(
    samples_NNGP <- nimbleMCMC(code_NNGP, constants, data, inits,
                               niter = niter, setSeed = TRUE, progressBar = FALSE)
)

time_GP <- system.time(
    samples_GP <- nimbleMCMC(code_GP, constants, data, inits,
                             niter = niter, setSeed = TRUE, progressBar = FALSE)
)
## quick graphical look at posterior medians and CIs 
samplesList <- list(NNGP = samples_NNGP, GP = samples_GP) 
chainsPlot(samplesList, 'psi', nrows=1, jitter=.03, buffer.left=.3, buffer.right=.3) 

chainsPlot(samplesList, c('mu', 'sigma'), nrows=1, jitter=.1, buffer.left=.3, buffer.right=.3) 

chainsPlot(samplesList, 'tau', nrows=1, jitter=.03, buffer.left=.3, buffer.right=.3) 

Timing and Effective Sample Size

We compare raw runtime, effective sample size (ESS), and efficiency (ESS/runtime).

Raw effective sample size:

ess_NNGP <- apply(samples_NNGP, 2, effectiveSize) 
ess_NNGP 
##         mu        psi      sigma        tau 
## 169.487050   1.946544   1.825900 101.913847
ess_GP <- apply(samples_GP, 2, effectiveSize) 
ess_GP 
##         mu        psi      sigma        tau 
## 164.766322   3.050301   2.554996 166.206722

Runtime (shown in hours):

time_NNGP[3] / 60 / 60
##   elapsed 
## 0.1420203
time_GP[3] / 60 / 60
##  elapsed 
## 7.727673

Efficiency (number of effectively independent posterior samples generated per second of algorithm runtime):

eff_NNGP <- ess_NNGP / time_NNGP[3]
eff_NNGP 
##          mu         psi       sigma         tau 
## 0.331500098 0.003807250 0.003571282 0.199333520
eff_GP <- ess_GP / time_GP[3]
eff_GP 
##           mu          psi        sigma          tau 
## 5.922666e-03 1.096457e-04 9.184151e-05 5.974442e-03

Finally, the ratio of efficiencies, NNGP/GP. That is, how many times more efficient is the NNGP relative to the full GP model?

round(eff_NNGP / eff_GP, 2) 
##    mu   psi sigma   tau 
## 55.97 34.72 38.89 33.36


Conclusions

We’ve written a new implementation of the NNGP which doesn’t require calculating (or passing around) the \(N \times N\) matrices \(C\), \(A\), or \(D\). Instead, these are reduced to size \(N \times k\), since each location’s conditional mean and variance involve only its \(k\) nearest neighbors.
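
In back-of-the-envelope terms (a standard complexity argument, not a measurement from these runs), each likelihood evaluation goes from one dense factorization to \(N\) independent small solves:

\[
\underbrace{\mathcal{O}(N^3) \text{ time}, \; \mathcal{O}(N^2) \text{ memory}}_{\text{full GP}}
\quad\longrightarrow\quad
\underbrace{\mathcal{O}(N k^3) \text{ time}, \; \mathcal{O}(N k) \text{ memory}}_{\text{NNGP}}.
\]

With \(k=15\) fixed, the NNGP cost grows only linearly in \(N\), which is consistent with the widening efficiency gap between the \(N=2000\) and \(N=5000\) runs.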

The number of nearest neighbors being considered is now \(k=15\).

For the large model (\(N=2000\) observed spatial locations), the NNGP is roughly 10 to 14 times more efficient than the full GP. For the largest model (\(N=5000\) locations), for which the full GP took nearly 8 hours to generate 1,000 MCMC samples, the NNGP implementation is roughly 33 to 56 times more efficient.

Again, these results are pleasing. And again, it seems we’re in a position to start asking interesting questions, or talking about what direction we want to take this research.