Chapter 7 MCMC
NIMBLE provides a variety of paths to creating and executing an MCMC algorithm, which differ greatly in their simplicity of use and in the options available for customization.
The most direct approach to invoking the MCMC engine is using the nimbleMCMC function (Section 7.1). This one-line call creates and executes an MCMC, and provides a wide range of options for controlling the MCMC: specifying monitors, burn-in, and thinning, running multiple MCMC chains with different initial values, and returning posterior samples, summary statistics, and/or a WAIC value. However, this approach is restricted to using NIMBLE’s default MCMC algorithm; further customization of, for example, the specific samplers employed, is not possible.
The lengthier and more customizable approach to invoking the MCMC engine on a particular NIMBLE model object involves the following steps:

1. (Optional) Create and customize an MCMC configuration for a particular model:
   - Use configureMCMC to create an MCMC configuration (see Section 7.2). The configuration contains a list of samplers with the node(s) they will sample.
   - (Optional) Customize the MCMC configuration:
     - Add, remove, or reorder the list of samplers (Section 7.10 and help(samplers) in R for details), including adding your own samplers (Section 15.5);
     - Change the tuning parameters or adaptive properties of individual samplers;
     - Change the variables to monitor (record for output) and the thinning intervals for MCMC samples.
2. Use buildMCMC to build the MCMC object and its samplers, either from the model (using the default MCMC configuration) or from a customized MCMC configuration (Section 7.3).
3. Compile the MCMC object (and the model), unless one is debugging and wishes to run the uncompiled MCMC.
4. Run the MCMC and extract the samples (Sections 7.4, 7.5, and 7.6).
5. Optionally, calculate the WAIC (Section 7.7).
Prior to version 0.8.0, NIMBLE provided two additional functions, MCMCsuite and compareMCMCs, to facilitate comparison of multiple MCMC algorithms, either internal or external to NIMBLE. Those capabilities have been redesigned and moved into a separate package called compareMCMCs.

End-to-end examples of MCMC in NIMBLE can be found in Sections 2.5, 2.6, and 7.11.
7.1 One-line invocation of MCMC: nimbleMCMC
The most direct approach to executing an MCMC algorithm in NIMBLE is using nimbleMCMC. This single function can be used to create an underlying model and associated MCMC algorithm, compile both of these, execute the MCMC, and return samples, summary statistics, and a WAIC value. This approach circumvents the longer (and more flexible) approach using nimbleModel, configureMCMC, buildMCMC, compileNimble, and runMCMC, which is described subsequently.
The nimbleMCMC function provides control over the:
- number of MCMC iterations in each chain;
- number of MCMC chains to execute;
- number of burn-in samples to discard from each chain;
- thinning interval on which samples should be recorded;
- model variables to monitor and return posterior samples;
- initial values, or a function for generating initial values for each chain;
- setting of the random number seed;
- return of posterior samples as a matrix or a coda mcmc object;
- return of posterior summary statistics; and
- return of a WAIC value calculated using post-burn-in samples from all chains.
The entry point for using nimbleMCMC is the code, constants, data, and inits arguments that are used for building a NIMBLE model (see Chapters 5 and 6). However, when using nimbleMCMC, the inits argument can also specify a list of lists of initial values that will be used for each MCMC chain, or a function that generates a list of initial values, which will be generated at the onset of each chain. As an alternative entry point, a NIMBLE model object can also be supplied to nimbleMCMC, in which case this model will be used to build the MCMC algorithm.
Based on its arguments, nimbleMCMC optionally returns any combination of:
- posterior samples,
- posterior summary statistics, and
- a WAIC value.

The above are calculated and returned for each MCMC chain, using the post-burn-in and thinned samples. Additionally, posterior summary statistics are calculated for all chains combined when multiple chains are run.
Several example usages of nimbleMCMC are shown below:
code <- nimbleCode({
    mu ~ dnorm(0, sd = 1000)
    sigma ~ dunif(0, 1000)
    for(i in 1:10)
        x[i] ~ dnorm(mu, sd = sigma)
})
data <- list(x = c(2, 5, 3, 4, 1, 0, 1, 3, 5, 3))
initsFunction <- function() list(mu = rnorm(1, 0, 1), sigma = runif(1, 0, 10))

# execute one MCMC chain, monitoring the "mu" and "sigma" variables,
# with thinning interval 10. fix the random number seed for reproducible
# results. by default, only returns posterior samples.
mcmc.out <- nimbleMCMC(code = code, data = data, inits = initsFunction,
                       monitors = c("mu", "sigma"), thin = 10,
                       niter = 20000, nchains = 1, setSeed = TRUE)

# note that the inits argument to nimbleModel must be a list of
# initial values, whereas nimbleMCMC can accept inits as a function
# for generating new initial values for each chain.
initsList <- initsFunction()
Rmodel <- nimbleModel(code, data = data, inits = initsList)

# using the existing Rmodel object, execute three MCMC chains with
# specified burn-in. return samples, summary statistics, and WAIC.
mcmc.out <- nimbleMCMC(model = Rmodel,
                       niter = 20000, nchains = 3, nburnin = 2000,
                       summary = TRUE, WAIC = TRUE)

# run ten chains, generating random initial values for each
# chain using the inits function specified above.
# only return summary statistics from each chain; not all the samples.
mcmc.out <- nimbleMCMC(model = Rmodel, nchains = 10, inits = initsFunction,
                       samples = FALSE, summary = TRUE)
See help(nimbleMCMC) for further details.
7.2 The MCMC configuration
The MCMC configuration contains information needed for building an MCMC. When no customization is needed, one can jump directly to the buildMCMC step below. An MCMC configuration is an object of class MCMCconf, which includes:
- The model on which the MCMC will operate;
- The model nodes which will be sampled (updated) by the MCMC;
- The samplers and their internal configurations, called control parameters;
- Two sets of variables that will be monitored (recorded) during execution of the MCMC, and thinning intervals for how often each set will be recorded. Two sets are allowed because it can be useful to monitor different variables at different intervals.
7.2.1 Default MCMC configuration
Assuming we have a model named Rmodel, the following will generate a default MCMC configuration:

mcmcConf <- configureMCMC(Rmodel)
The default configuration will contain a single sampler for each node in the model, and the default ordering follows the topological ordering of the model.
7.2.1.1 Default assignment of sampler algorithms
The default sampler assigned to a stochastic node is determined by the following, in order of precedence:
- If the node has no data nodes in its entire downstream dependency network, a posterior_predictive sampler is assigned. This sampler updates the node in question and all downstream stochastic nodes by simulating new values from each node’s conditional distribution. As of version 0.13.0, the operation of the posterior_predictive sampler has changed in order to improve MCMC mixing. See Section 7.2.1.2.
- If the node has a conjugate relationship between its prior distribution and the distributions of its stochastic dependents, a conjugate (‘Gibbs’) sampler is assigned.
- If the node follows a multinomial distribution, then an RW_multinomial sampler is assigned. This is a discrete random-walk sampler in the space of multinomial outcomes.
- If a node follows a Dirichlet distribution, then an RW_dirichlet sampler is assigned. This is a random-walk sampler in the space of the simplex defined by the Dirichlet.
- If a node follows an LKJ correlation distribution, then an RW_block_lkj_corr_cholesky sampler is assigned. This is a block random-walk sampler in a transformed space, where the transformation uses the signed stick-breaking approach described in Section 10.12 of Stan Development Team (2021b).
- If the node follows any other multivariate distribution, then an RW_block sampler is assigned for all elements. This is a Metropolis-Hastings adaptive random-walk sampler with a multivariate normal proposal (Roberts and Sahu 1997).
- If the node is binary-valued (strictly taking values 0 or 1), then a binary sampler is assigned. This sampler calculates the conditional probability for both possible node values and draws the new node value from the conditional distribution, in effect making a Gibbs sampler.
- If the node is otherwise discrete-valued, then a slice sampler is assigned (R. M. Neal 2003).
- If none of the above criteria are satisfied, then an RW sampler is assigned. This is a Metropolis-Hastings adaptive random-walk sampler with a univariate normal proposal distribution.
Details of each sampler and its control parameters can be found by invoking help(samplers).
7.2.1.2 Sampling posterior predictive nodes
A posterior predictive node is a node that is not itself data and has no data nodes in its entire downstream (descendant) dependency network. Such nodes play no role in inference for model parameters but have often been included in BUGS models to accomplish posterior predictive checks and similar calculations.
As of version 0.13.0, NIMBLE’s handling of posterior predictive nodes in MCMC sampling has changed in order to improve MCMC mixing. Samplers for nodes that are not posterior predictive nodes no longer condition on the values of the posterior predictive nodes. This produces a valid MCMC over the posterior distribution marginalizing over the posterior predictive nodes. This MCMC will generally mix better than an MCMC that conditions on the values of posterior predictive nodes, by reducing the dimensionality of the parameter space and removing the dependence between the sampled nodes and the posterior predictive nodes. At the end of each MCMC iteration, the posterior predictive nodes are sampled by posterior_predictive sampler(s) based on their conditional distribution(s).
7.2.1.3 Options to control default sampler assignments
Very basic control of default sampler assignments is provided via two arguments to configureMCMC. The useConjugacy argument controls whether conjugate samplers are assigned when possible, and the multivariateNodesAsScalars argument controls whether scalar elements of multivariate nodes are sampled individually. See help(configureMCMC) for usage details.
7.2.1.4 Default monitors
The default MCMC configuration includes monitors on all top-level stochastic nodes of the model. Only variables that are monitored will have their samples saved for use outside of the MCMC. MCMC configurations include two sets of monitors, each with its own thinning interval. By default, the second set of monitors (monitors2) is empty.
7.2.1.5 Automated parameter blocking
The default configuration may be replaced by one generated from an automated parameter blocking algorithm. This algorithm determines groupings of model nodes that, when jointly sampled with an RW_block sampler, increase overall MCMC efficiency. Overall efficiency is defined as the effective sample size of the slowest-mixing node divided by computation time. This is done by:

autoBlockConf <- configureMCMC(Rmodel, autoBlock = TRUE)

Note that using autoBlock = TRUE compiles and runs MCMCs, progressively exploring different sampler assignments, so it takes some time and generates some output. It is most useful for determining effective blocking strategies that can be reused for later runs. The additional control argument autoIt may also be provided to indicate the number of MCMC samples to be used in each trial of the automated blocking procedure (default 20,000).
7.2.2 Customizing the MCMC configuration
The MCMC configuration may be customized in a variety of ways, either through additional named arguments to configureMCMC or by calling methods of an existing MCMCconf object.
7.2.2.1 Controlling which nodes to sample
One can create an MCMC configuration with default samplers on just a particular set of nodes using the nodes argument to configureMCMC. The value for the nodes argument may be a character vector containing node and/or variable names. In the case of a variable name, a default sampler will be added for all stochastic nodes in the variable. The order of samplers will match the order of nodes. Any deterministic nodes will be ignored.

If a data node is included in nodes, it will be assigned a sampler. This is the only way in which a default sampler may be placed on a data node, and it will result in overwriting data values in the node.
7.2.2.2 Creating an empty configuration
If you plan to customize the choice of all samplers, it can be useful to obtain a configuration with no sampler assignments at all. This can be done with any of nodes = NULL, nodes = character(), or nodes = list().
7.2.2.3 Overriding the default sampler control list values
The default values of control list elements for all sampling algorithms may be overridden through use of the control argument to configureMCMC, which should be a named list. Named elements in the control argument will be used for all default samplers and for any sampler subsequently added via addSampler (see below). For example, the following will create the default MCMC configuration, except that all RW samplers will have their initial scale set to 3, and none of the samplers (RW, or otherwise) will be adaptive:

mcmcConf <- configureMCMC(Rmodel, control = list(scale = 3, adaptive = FALSE))
When adding samplers to a configuration using addSampler, the default control list can also be overridden.
7.2.2.4 Adding samplers to the configuration: addSampler
Additional samplers may be added to a configuration using the addSampler method of the MCMCconf object. The addSampler method has two modes of operation, determined by the default argument.

When default = TRUE, addSampler will assign NIMBLE’s default sampling algorithm for each node specified, following the same protocol as configureMCMC. This may include conjugate samplers, multivariate samplers, or otherwise, also using additional arguments to guide the selection process (for example, useConjugacy and multivariateNodesAsScalars). In this mode of operation, the type argument is not used.

When default = FALSE, or when this argument is omitted, addSampler uses the type argument to specify the precise sampler to assign. Instances of this particular sampler are assigned to all nodes specified. The type argument may be provided as a character string or a nimbleFunction object. Valid character strings are indicated in help(samplers) (do not include "sampler_"). Added samplers can be labeled with a name argument, which is used in the output of printSamplers. Writing a new sampler as a nimbleFunction is covered in Section 15.5.

Regardless of the mode of operation, nodes are specified using either the target or the nodes argument. The target argument does not undergo expansion to constituent nodes (unless default = TRUE), and thus only a single sampler is added. The nodes argument is always expanded to the underlying nodes, and separate samplers are added for each node. Either argument is provided as a character vector. Newly added samplers will be appended to the end of the current sampler list. Adding a sampler for a node will not remove existing samplers operating on that node.
The hierarchy of precedence for control list elements for added samplers is:

1. the control list argument provided to addSampler;
2. the original control list argument provided to configureMCMC;
3. the default values, as defined in the sampling algorithm setup function.

See help(addSampler) for more details.
7.2.2.5 Printing, reordering, modifying and removing samplers: printSamplers, removeSamplers, setSamplers, and getSamplerDefinition
The current, ordered, list of all samplers in the MCMC configuration may be printed by calling the printSamplers method. To see only the samplers acting on specific model nodes or variables, provide those names as an argument to printSamplers. The printSamplers method accepts arguments controlling the level of detail displayed, as discussed in its R help information.
# Print all samplers
mcmcConf$printSamplers()

# Print all samplers operating on node "a[1]",
# or any of the "beta[]" variables
mcmcConf$printSamplers(c("a[1]", "beta"))

# Print all conjugate and slice samplers
mcmcConf$printSamplers(type = c("conjugate", "slice"))

# Print all RW samplers operating on "x"
mcmcConf$printSamplers("x", type = "RW")

# Print the first 100 samplers
mcmcConf$printSamplers(1:100)

# Print all samplers in their order of execution
mcmcConf$printSamplers(executionOrder = TRUE)
Samplers may be removed from the configuration object using removeSamplers, which accepts a character vector of node or variable names, or a numeric vector of indices.
# Remove all samplers acting on "x" or any component of it
mcmcConf$removeSamplers("x")

# Remove all samplers acting on "alpha[1]" and "beta[1]"
mcmcConf$removeSamplers(c("alpha[1]", "beta[1]"))

# Remove the first five samplers
mcmcConf$removeSamplers(1:5)

# Providing no argument removes all samplers
mcmcConf$removeSamplers()
The samplers to retain, possibly in a new order, may be specified using setSamplers, which also accepts a character vector of node or variable names, or a numeric vector of indices.
# Set the list of samplers to those acting on any components of the
# model variables "x", "y", or "z".
mcmcConf$setSamplers(c("x", "y", "z"))

# Set the list of samplers to only those acting on model nodes
# "alpha[1]", "alpha[2]", ..., "alpha[10]"
mcmcConf$setSamplers("alpha[1:10]")

# Truncate the current list of samplers to the first 10 and the 100th
mcmcConf$setSamplers(ind = c(1:10, 100))
The nimbleFunction definition underlying a particular sampler may be viewed using the getSamplerDefinition method, using the sampler index as an argument. A node name argument may also be supplied, in which case the definition of the first sampler acting on that node is returned. In all cases, getSamplerDefinition returns only the definition of the first sampler specified, either by index or by node name.
# Return the definition of the third sampler in the mcmcConf object
mcmcConf$getSamplerDefinition(3)

# Return the definition of the first sampler acting on node "x",
# or the first of any indexed nodes comprising the variable "x"
mcmcConf$getSamplerDefinition("x")
7.2.2.6 Customizing individual sampler configurations: getSamplers, setSamplers, setName, setSamplerFunction, setTarget, and setControl
Each sampler in an MCMCconf object is represented by a sampler configuration, a samplerConf object. Each samplerConf is a reference class object containing the following (required) fields: name (a character string), samplerFunction (a valid nimbleFunction sampler), target (the model node to be sampled), and control (a list of control arguments). The MCMCconf method getSamplers allows access to the samplerConf objects. These can be modified and then passed as an argument to setSamplers to overwrite the current list of samplers in the MCMC configuration object. However, no checking of the validity of this modified list is performed; if the list of samplerConf objects is corrupted to be invalid, incorrect behavior will result at the time of calling buildMCMC. The fields of a samplerConf object can be modified using the access functions setName(name), setSamplerFunction(fun), setTarget(target, model), and setControl(control).
Here are some examples:
# retrieve samplerConf list
samplerConfList <- mcmcConf$getSamplers()

# change the name of the first sampler
samplerConfList[[1]]$setName("newNameForThisSampler")

# change the sampler function of the second sampler,
# assuming existence of a nimbleFunction 'anotherSamplerNF',
# which represents a valid nimbleFunction sampler.
samplerConfList[[2]]$setSamplerFunction(anotherSamplerNF)

# change the 'adaptive' element of the control list of the third sampler
control <- samplerConfList[[3]]$control
control$adaptive <- FALSE
samplerConfList[[3]]$setControl(control)

# change the target node of the fourth sampler
samplerConfList[[4]]$setTarget("y", model) # model argument required

# use this modified list of samplerConf objects in the MCMC configuration
mcmcConf$setSamplers(samplerConfList)
7.2.2.7 Customizing the sampler execution order
The ordering of sampler execution can be controlled as well. This allows for sampler functions to execute multiple times within a single MCMC iteration, or the execution of different sampler functions to be interleaved with one another.
The sampler execution order is set using the function setSamplerExecutionOrder, and the current ordering of execution is retrieved using getSamplerExecutionOrder. For example, assuming the MCMC configuration object mcmcConf contains five samplers:
# first sampler to execute twice, in succession:
mcmcConf$setSamplerExecutionOrder(c(1, 1, 2, 3, 4, 5))

# first sampler to execute multiple times, interleaved:
mcmcConf$setSamplerExecutionOrder(c(1, 2, 1, 3, 1, 4, 1, 5))

# fourth sampler to execute 10 times, only
mcmcConf$setSamplerExecutionOrder(rep(4, 10))

# omitting the argument to setSamplerExecutionOrder()
# resets the ordering to each sampler executing once, sequentially
mcmcConf$setSamplerExecutionOrder()

# retrieve the current ordering of sampler execution
ordering <- mcmcConf$getSamplerExecutionOrder()

# print the sampler functions in the order of execution
mcmcConf$printSamplers(executionOrder = TRUE)
7.2.2.8 Monitors and thinning intervals: printMonitors, getMonitors, setMonitors, addMonitors, resetMonitors and setThin
An MCMC configuration object contains two independent sets of variables to monitor, each with its own thinning interval: thin corresponding to monitors, and thin2 corresponding to monitors2. Monitors operate at the variable level. Only entire model variables may be monitored. Specifying a monitor on a node, e.g., x[1], will result in the entire variable x being monitored.
The variables specified in monitors and monitors2 will be recorded (with thinning intervals thin and thin2, respectively) into objects called mvSamples and mvSamples2, contained within the MCMC object. These are both modelValues objects; modelValues are NIMBLE data structures used to store multiple sets of values of model variables^{17}. These can be accessed as the member data mvSamples and mvSamples2 of the MCMC object, and they can be converted to matrices using as.matrix or to lists using as.list (see Section 7.6).
Monitors may be added to the MCMC configuration either in the original call to configureMCMC or using the addMonitors method:
# Using an argument to configureMCMC
mcmcConf <- configureMCMC(Rmodel, monitors = c("alpha", "beta"),
                          monitors2 = "x")

# Calling a member method of the mcmcConf object
# This results in the same monitors as above
mcmcConf$addMonitors("alpha", "beta")
mcmcConf$addMonitors2("x")
A new set of monitor variables can be given to the MCMC configuration, overwriting the current monitors, using the setMonitors method:

# Replace old monitors, now monitor "delta" and "gamma" only
mcmcConf$setMonitors("gamma", "delta")
Similarly, either thinning interval may be set at either step:
# Using an argument to configureMCMC
mcmcConf <- configureMCMC(Rmodel, thin = 1, thin2 = 100)

# Calling a member method of the mcmcConf object
# This results in the same thinning intervals as above
mcmcConf$setThin(1)
mcmcConf$setThin2(100)
The current lists of monitors and thinning intervals may be displayed using the printMonitors method. Both sets of monitors (monitors and monitors2) may be reset to empty character vectors by calling the resetMonitors method. The methods getMonitors and getMonitors2 return the currently specified monitors and monitors2 as character vectors.
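These methods might be used as follows (a sketch continuing the mcmcConf object from above):

```r
## display both sets of monitors, along with their thinning intervals
mcmcConf$printMonitors()

## retrieve the monitored variables as character vectors
mcmcConf$getMonitors()
mcmcConf$getMonitors2()

## reset both sets of monitors to empty character vectors
mcmcConf$resetMonitors()
```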
7.2.2.9 Monitoring model log-probabilities
To record model log-probabilities from an MCMC, one can add monitors for logProb variables (which begin with the prefix logProb_) that correspond to variables with (any) stochastic nodes. For example, to record and extract log-probabilities for the variables alpha, sigma_mu, and Y:
mcmcConf <- configureMCMC(Rmodel, enableWAIC = TRUE)
mcmcConf$addMonitors("logProb_alpha", "logProb_sigma_mu", "logProb_Y")
Rmcmc <- buildMCMC(mcmcConf)
Cmodel <- compileNimble(Rmodel)
Cmcmc <- compileNimble(Rmcmc, project = Rmodel)
Cmcmc$run(10000)
samples <- as.matrix(Cmcmc$mvSamples)
The samples matrix will contain both MCMC samples and model log-probabilities.
7.3 Building and compiling the MCMC
Once the MCMC configuration object has been created, and customized to one’s liking, it may be used to build an MCMC function:
Rmcmc <- buildMCMC(mcmcConf)

buildMCMC is a nimbleFunction. The returned object Rmcmc is an instance of the nimbleFunction specific to configuration mcmcConf (and of course its associated model).
Note that if you would like to be able to calculate the WAIC of the model, you should usually set enableWAIC = TRUE as an argument to configureMCMC (or to buildMCMC if not using configureMCMC), or set nimbleOptions(MCMCenableWAIC = TRUE), which will enable WAIC calculations for all subsequently built MCMC functions. For more information on WAIC calculations, including situations in which you can calculate WAIC without having set enableWAIC = TRUE, see Section 7.7 or help(waic) in R.
When no customization is needed, one can skip configureMCMC and simply provide a model object to buildMCMC. The following two MCMC functions will be identical:

mcmcConf <- configureMCMC(Rmodel)   # default MCMC configuration
Rmcmc1 <- buildMCMC(mcmcConf)

Rmcmc2 <- buildMCMC(Rmodel)         # uses the default configuration for Rmodel
For speed of execution, we usually want to compile the MCMC function to C++ (as is the case for other NIMBLE functions). To do so, we use compileNimble. If the model has already been compiled, it should be provided as the project argument so the MCMC will be part of the same compiled project. A typical compilation call looks like:

Cmcmc <- compileNimble(Rmcmc, project = Rmodel)
Alternatively, if the model has not already been compiled, they can be compiled together in one line:

Cmcmc <- compileNimble(Rmodel, Rmcmc)
Note that if you compile the MCMC with another object (the model in this case), you’ll need to explicitly refer to the MCMC component of the resulting object to be able to run the MCMC:

Cmcmc$Rmcmc$run(niter = 1000)
7.4 User-friendly execution of MCMC algorithms: runMCMC
Once an MCMC algorithm has been created using buildMCMC, the function runMCMC can be used to run multiple chains and extract posterior samples, summary statistics, and/or a WAIC value. This is a simpler approach to executing an MCMC algorithm than the process of executing and extracting samples described in Sections 7.5 and 7.6. runMCMC also provides several user-friendly options such as burn-in, thinning, running multiple chains, and different initial values for each chain. However, using runMCMC does not support several lower-level options, such as timing the individual samplers internal to the MCMC, continuing an existing MCMC run (picking up where it left off), or modifying the sampler execution ordering.
runMCMC takes arguments that will control the following aspects of the MCMC:
- number of iterations in each chain;
- number of chains;
- number of burn-in samples to discard from each chain;
- thinning interval for recording samples;
- initial values, or a function for generating initial values for each chain;
- setting of the random number seed;
- return of the posterior samples as a coda mcmc object;
- return of summary statistics calculated from each chain; and
- return of a WAIC value calculated using (post-burn-in) samples from all chains.
The following examples demonstrate some uses of runMCMC, and assume the existence of Cmcmc, a compiled MCMC algorithm.
# run a single chain, and return a matrix of samples
mcmc.out <- runMCMC(Cmcmc)

# run three chains of 10000 samples, discard initial burn-in of 1000,
# record samples thereafter using a thinning interval of 10,
# and return a list of sample matrices
mcmc.out <- runMCMC(Cmcmc, niter = 10000, nburnin = 1000, thin = 10, nchains = 3)

# run three chains, returning posterior samples, summary statistics,
# and the WAIC value
mcmc.out <- runMCMC(Cmcmc, nchains = 3, summary = TRUE, WAIC = TRUE)

# run two chains, and specify the initial values for each
initsList <- list(list(mu = 1, sigma = 1),
                  list(mu = 2, sigma = 10))
mcmc.out <- runMCMC(Cmcmc, nchains = 2, inits = initsList)

# run ten chains of 100,000 iterations each, using a function to
# generate initial values and a fixed random number seed for each chain.
# only return summary statistics from each chain; not all the samples.
initsFunction <- function()
    list(mu = rnorm(1, 0, 1), sigma = runif(1, 0, 100))
mcmc.out <- runMCMC(Cmcmc, niter = 100000, nchains = 10,
                    inits = initsFunction, setSeed = TRUE,
                    samples = FALSE, summary = TRUE)
See help(runMCMC) for further details.
7.5 Running the MCMC
The MCMC algorithm (either the compiled or uncompiled version) can be executed using the member method mcmc$run (see help(buildMCMC) in R). The run method has one required argument, niter, the number of iterations to be run.
The run method has optional arguments nburnin, thin, and thin2. These can be used to specify the number of pre-thinning burn-in samples to discard, and the post-burn-in thinning intervals for recording samples (corresponding to monitors and monitors2). If either thin or thin2 is provided, it will override the corresponding thinning interval that was specified in the original MCMC configuration object.
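For example, assuming Cmcmc is a compiled MCMC object as elsewhere in this chapter:

```r
## run 20000 iterations, discarding the first 2000 (pre-thinning) samples
## as burn-in, and recording every 10th sample thereafter, overriding
## the thinning interval from the original MCMC configuration:
Cmcmc$run(niter = 20000, nburnin = 2000, thin = 10)
```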
7.5.1 Re-running versus restarting an MCMC
The run method has an optional reset argument. When reset = TRUE (the default value), the following occurs prior to running the MCMC:

- All model nodes are checked and filled or updated as needed, in valid (topological) order. If a stochastic node is missing a value, it is populated using a call to simulate, and its log probability value is calculated. The values of deterministic nodes are calculated from their parent nodes. If any right-hand-side-only nodes (e.g., explanatory variables) are missing a value, an error results.
- All MCMC sampler functions are reset to their initial state: the initial values of any sampler control parameters (e.g., scale, sliceWidth, or propCov) are reset to their initial values, as were specified by the original MCMC configuration.
- The internal modelValues objects mvSamples and mvSamples2 are each resized to the appropriate length for holding the requested number of samples (niter/thin and niter/thin2, respectively).
This means that one can begin a new run of an existing MCMC without having to rebuild or recompile the model or the MCMC. This can be helpful if one wants to use the same model and MCMC configuration, but with different initial values, different values of data nodes (though which nodes are data nodes must be the same), changes to covariate values (or other non-data, non-parameter values) in the model, or a different number of MCMC iterations, thinning interval, or burn-in.
In contrast, when mcmc$run(niter, reset = FALSE)
is called, the MCMC picks up from where it left off, continuing the previous chain and expanding the output as needed. No values in the model are checked or altered, and sampler functions are not reset to their initial states.
The run method also has an optional resetMV argument. This argument is only considered when reset is set to FALSE. When mcmc$run(niter, reset = FALSE, resetMV = TRUE) is called, the internal modelValues objects mvSamples and mvSamples2 are each resized to the appropriate length for holding the requested number of samples (niter/thin and niter/thin2, respectively) and the MCMC carries on from where it left off. In other words, the previously obtained samples are deleted (e.g., to reduce memory usage) prior to continuing the MCMC. The default value of resetMV is FALSE.
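These run-time options can be sketched as follows (a minimal illustration, assuming Cmcmc is a compiled MCMC object; the iteration counts are arbitrary):

```r
Cmcmc$run(1000)                                 ## fresh run: model checked, samplers reset
Cmcmc$run(1000, reset = FALSE)                  ## continue the chain; new samples appended
Cmcmc$run(1000, reset = FALSE, resetMV = TRUE)  ## continue, but discard earlier samples
```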
7.5.2 Measuring sampler computation times: getTimes
If you want to obtain the computation time spent in each sampler, you can set time = TRUE as a run-time argument and then use the method getTimes() to obtain the times. For example,
Cmcmc$run(niter, time = TRUE)
Cmcmc$getTimes()
will return a vector of the total time spent in each sampler, measured in seconds.
7.5.3 Assessing the adaptation process of RW and RW_block samplers
If you'd like to see the evolution (over the iterations) of the acceptance proportion and proposal scale information, you can use some internal methods provided by NIMBLE, after setting two options to make the history accessible. Here we assume that Cmcmc is the compiled MCMC object and idx is the numeric index of the sampler function you want to access from amongst the list of sampler functions that are part of the MCMC.
## set options to make history accessible
nimbleOptions(buildInterfacesForCompiledNestedNimbleFunctions = TRUE)
nimbleOptions(MCMCsaveHistory = TRUE)
## Next, set up and run your MCMC.
## Now access the history information:
Cmcmc$samplerFunctions[[idx]]$getScaleHistory()
Cmcmc$samplerFunctions[[idx]]$getAcceptanceHistory()
Cmcmc$samplerFunctions[[idx]]$getPropCovHistory()  ## only for RW_block
Note that modifying elements of the control list may greatly
affect the performance of the RW_block
sampler. In particular, the sampler
can take a long time to find a good proposal covariance when the
elements being sampled are not on the same scale. We recommend
providing an informed value for ‘propCov’ in this case (possibly
simply a diagonal matrix that approximates the relative scales),
as well as possibly providing a value of ‘scale’ that errs on the
side of being too small. You may also consider decreasing
‘adaptFactorExponent’ and/or ‘adaptInterval’, as doing so has
greatly improved performance in some cases.
7.6 Extracting MCMC samples
After executing the MCMC, the output samples can be extracted as follows:
mvSamples <- mcmc$mvSamples
mvSamples2 <- mcmc$mvSamples2
These modelValues objects can be converted into matrices using as.matrix or lists using as.list:
samplesMatrix <- as.matrix(mvSamples)
samplesList <- as.list(mvSamples)
samplesMatrix2 <- as.matrix(mvSamples2)
samplesList2 <- as.list(mvSamples2)
The column names of the matrices will be the node names of nodes in the monitored variables. Then, for example, the mean of the samples for node x[2]
could be calculated as:
mean(samplesMatrix[, "x[2]"])
The list version will contain an element for each variable that will be the size and shape of the variable with an additional index for MCMC iteration. By default the MCMC iteration will be the first index, but including iterationAsLastIndex = TRUE in the call to as.list will make it the last index.
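For instance (a sketch, assuming the monitors include a 3x2 matrix variable x and that niter/thin samples were kept):

```r
samplesList <- as.list(mvSamples)
dim(samplesList$x)  ## c(niter/thin, 3, 2): iteration is the first index

samplesList <- as.list(mvSamples, iterationAsLastIndex = TRUE)
dim(samplesList$x)  ## c(3, 2, niter/thin): iteration is the last index
```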
Obtaining samples as matrices or lists is most common, but see Section 14.1 for more about programming with modelValues objects, especially if you want to write nimbleFunctions to use the samples.
7.7 Calculating WAIC
The WAIC (Watanabe 2010) can be calculated from the posterior samples produced during the MCMC algorithm. Users have two options to calculate WAIC. The main approach requires enabling WAIC when setting up the MCMC and allows access to a variety of ways to calculate WAIC. This approach does not require that any specific monitors be set^{18} because the WAIC is calculated in an online manner, accumulating the necessary quantities to calculate WAIC as the MCMC is run.
The second approach allows one to calculate the WAIC after an MCMC has been run using an MCMC object or matrix (or dataframe) containing the posterior samples, but it requires the user to have correctly specified the monitored variables and only provides the default way to calculate WAIC. An advantage of the second approach is that one can specify additional burnin beyond that specified for the MCMC.
Here we first discuss the main approach and then close the section by showing the second approach. Specific details of the syntax are provided in help(waic)
.
To enable WAIC for the first approach, the argument enableWAIC = TRUE must be supplied to configureMCMC (or to buildMCMC if not using configureMCMC), or the MCMCenableWAIC NIMBLE option must have been set to TRUE.
The WAIC (as well as the pWAIC and lppd values) is extracted by the member method mcmc$getWAIC
(see help(waic)
in R for more details) or is available as the WAIC
element of the runMCMC
or nimbleMCMC
output lists. One can use the member method mcmc$getWAICdetails
for additional quantities related to WAIC that are discussed below.
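A sketch of this workflow (here model is a placeholder for your nimbleModel object, and Cmcmc for the compiled MCMC after the usual compilation steps):

```r
conf <- configureMCMC(model, enableWAIC = TRUE)
Rmcmc <- buildMCMC(conf)
## ... compile the model and MCMC, then run the MCMC as usual ...
waicInfo <- Cmcmc$getWAIC()           ## WAIC, pWAIC, and lppd
waicDetails <- Cmcmc$getWAICdetails() ## additional quantities discussed below
```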
Note that there is not a unique value of WAIC for a model. WAIC is calculated from Equations 5, 12, and 13 in Gelman, Hwang, and Vehtari (2014) (i.e., using pWAIC2), as discussed in detail in Hug and Paciorek (2021). Therefore, NIMBLE provides user control over how WAIC is calculated in two ways.
First, by default NIMBLE provides the conditional WAIC, namely the version of WAIC where all parameters directly involved in the likelihood are treated as \(\theta\) for the purposes of Equation 5 from Gelman, Hwang, and Vehtari (2014). Users can request the marginal WAIC (see Ariyo et al. (2020)), namely the version of WAIC where latent variables are integrated over (i.e., using a marginal likelihood). This is done by providing the waicControl
list with a marginalizeNodes
element to configureMCMC
or buildMCMC
(when providing a model as the argument to buildMCMC
). See help(waic)
for more details.
Second, WAIC relies on a partition of the observations, i.e., ‘pointwise’ prediction. By default, in NIMBLE the sum over log pointwise predictive density values treats each data node as contributing a single value to the sum. When a data node is multivariate, that data node contributes a single value to the sum based on the joint density of the elements in the node. If one wants to group data nodes such that the joint density within each group is used, one can provide the waicControl
list with a dataGroups
element to configureMCMC
or buildMCMC
(when providing a model as the argument to buildMCMC
). See help(waic)
for more details.
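As an illustrative sketch (the latent-variable name and the grouping of y below are hypothetical; your model must contain such nodes for this to run):

```r
conf <- configureMCMC(model, enableWAIC = TRUE,
    waicControl = list(
        marginalizeNodes = "latent",   ## request marginal WAIC over these nodes
        dataGroups = list(model$expandNodeNames("y[1:10]"),
                          model$expandNodeNames("y[11:20]"))))  ## grouped data nodes
```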
Note that based on a limited set of simulation experiments in Hug and Paciorek (2021), our tentative recommendation is that users only use marginal WAIC if also using grouping.
Marginal WAIC requires using Monte Carlo simulation at each iteration of the MCMC to average over draws for the latent variables. To assess the stability of the marginal WAIC to the number of Monte Carlo iterations, one can examine how the WAIC changes with increasing iterations (up to the full number of iterations specified via the nItsMarginal
element of the waicControl
list) based on the WAIC_partialMC
, lppd_partialMC
, and pWAIC_partialMC
elements of the detailed WAIC output.
For comparing WAIC between two models, Vehtari, Gelman, and Gabry (2017) discuss using the per-observation (more generally, per-data-group) contributions to the overall WAIC values to get an approximate standard error for the difference in WAIC between the models. These element-wise values are available as the WAIC_elements, lppd_elements, and pWAIC_elements components of the detailed WAIC output.
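That calculation can be sketched as follows (assuming details1 and details2 are the detailed WAIC outputs from two models fit to the same data with the same grouping):

```r
## Pointwise differences in the WAIC contributions of the two models
diffs <- details1$WAIC_elements - details2$WAIC_elements
## Approximate standard error of the WAIC difference (Vehtari et al. 2017)
se_diff <- sqrt(length(diffs)) * sd(diffs)
```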
The second overall approach to WAIC available in NIMBLE allows one to calculate WAIC post hoc after MCMC sampling using an MCMC object or matrix (or dataframe) containing posterior samples. Simply set up the MCMC and run it without enabling WAIC (but making sure to include in the monitors all stochastic parent nodes of the data nodes) and then use the function calculateWAIC
, as shown in this example:
samples <- runMCMC(Cmcmc, niter = 10000, nburnin = 1000)
## Using calculateWAIC with an MCMC object
calculateWAIC(Cmcmc)
## Using calculateWAIC with a matrix of samples
calculateWAIC(samples, model)
## Specifying additional burnin, so only the last 5000 iterations
## (after discarding 1000 + 4000) are used
calculateWAIC(Cmcmc, burnin = 4000)
This approach only provides conditional WAIC without any grouping of data nodes (though one can achieve grouping by grouping data nodes into multivariate nodes).
7.8 k-fold cross-validation
The runCrossValidate function in NIMBLE performs k-fold cross-validation on a nimbleModel fit via MCMC. More information can be found by calling help(runCrossValidate).
7.9 Variable selection using Reversible Jump MCMC
A common method for Bayesian variable selection in regression-style problems is to define a space of different models and have the MCMC sample over the models as well as the parameters for each model. The models differ in which explanatory variables are included. Often this idea is implemented in the BUGS language by writing the largest possible model and including indicator variables to turn regression parameters off and on (see the model code below for an example of how indicator variables can be used). Formally, that approach doesn't sample between different models, instead embedding the models in one large model. However, that approach can result in poor mixing and require compromises in choices of prior distributions, because a given regression parameter will sample from its prior when the corresponding indicator is 0. It is also computationally wasteful, because sampling effort will be spent on a coefficient even if it has no effect because the corresponding indicator is 0.
A different view of the problem is that the different combinations of coefficients represent models of different dimensions. Reversible Jump MCMC (Green 1995) (RJMCMC) is a general framework for MCMC simulation in which the dimension of the parameter space can vary between iterations of the Markov chain. The reversible jump sampler can be viewed as an extension of the Metropolis-Hastings algorithm onto more general state spaces. NIMBLE provides an implementation of RJMCMC for variable selection that requires the user to write the largest model of interest and supports use of indicator variables, but formally uses RJMCMC sampling for better mixing and efficiency. When a coefficient is not in the model (or its indicator is 0), it will not be sampled, and it will therefore not follow its prior in that case.
In technical detail, given two models \(M_1\) and \(M_2\) of possibly different dimensions, the core idea of RJMCMC is to remove the difference in the dimensions of models \(M_1\) and \(M_2\) by supplementing the corresponding parameters \(\boldsymbol{\theta_1}\) and \(\boldsymbol{\theta_2}\) with auxiliary variables \(\boldsymbol{u}_{1 \rightarrow 2}\) and \(\boldsymbol{u}_{2 \rightarrow 1}\) such that \((\boldsymbol{\theta_1}, \boldsymbol{u}_{1 \rightarrow 2})\) and \((\boldsymbol{\theta_2}, \boldsymbol{u}_{2 \rightarrow 1})\) are in bijection, \((\boldsymbol{\theta_2}, \boldsymbol{u}_{2 \rightarrow 1}) = \Psi(\boldsymbol{\theta_1}, \boldsymbol{u}_{1 \rightarrow 2})\). The corresponding Metropolis-Hastings acceptance probability is generalized to account for the proposal density for the auxiliary variables.
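For concreteness, one standard way to write this generalized acceptance probability for a proposed move from \(M_1\) to \(M_2\) (a sketch following Green 1995, assuming the move and its reverse are proposed with equal probability) is
\[
\alpha = \min\left\{1, \; \frac{\pi(M_2, \boldsymbol{\theta_2}) \, q_{2 \rightarrow 1}(\boldsymbol{u}_{2 \rightarrow 1})}{\pi(M_1, \boldsymbol{\theta_1}) \, q_{1 \rightarrow 2}(\boldsymbol{u}_{1 \rightarrow 2})} \left| \det \frac{\partial \Psi(\boldsymbol{\theta_1}, \boldsymbol{u}_{1 \rightarrow 2})}{\partial (\boldsymbol{\theta_1}, \boldsymbol{u}_{1 \rightarrow 2})} \right| \right\},
\]
where \(\pi\) denotes the (unnormalized) posterior density including the model prior, and \(q_{1 \rightarrow 2}\), \(q_{2 \rightarrow 1}\) are the proposal densities for the auxiliary variables.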
NIMBLE implements RJMCMC for variable selection using a univariate normal distribution centered on 0 (or some fixed value) as the proposal density for parameters being added to the model. Two ways to set up models for RJMCMC are supported, which differ by whether the inclusion probabilities for each parameter are assumed known or must be written in the model:
 If the inclusion probabilities are assumed known, then RJMCMC may be used with regular model code, i.e. model code written without heed to variable selection.
 If the inclusion probability is a parameter in the model, perhaps with its own prior, then RJMCMC requires that indicator variables be written in the model code. The indicator variables will be sampled using RJMCMC, but otherwise they are like any other nodes in the model.
The steps to set up variable selection using RJMCMC are:
- Write the model with indicator variables if needed.
- Configure the MCMC as usual with configureMCMC.
- Modify the resulting MCMC configuration object with configureRJ.
The configureRJ function modifies the MCMC configuration to (1) assign samplers that turn variables on and off in the model and (2) modify the existing samplers for the regression coefficients to use 'toggled' versions. The toggled versions invoke the samplers only when the corresponding variable is currently in the model. In the case where indicator variables are included in the model, sampling to turn variables on and off is done using RJ_indicator samplers. In the case where indicator variables are not included, sampling is done using RJ_fixed_prior samplers.
In the following we provide an example of each of the two model specifications.
7.9.1 Using indicator variables
Here we consider a normal linear regression in which two covariates x1
and x2
are candidates to be included in the model. The two corresponding coefficients are beta1
and beta2
. The indicator variables are z1
and z2
, and their inclusion probability is psi
. As described below, one can also use vectors for a set of coefficients and corresponding indicator variables.
## Linear regression with intercept and two covariates
code <- nimbleCode({
    beta0 ~ dnorm(0, sd = 100)
    beta1 ~ dnorm(0, sd = 100)
    beta2 ~ dnorm(0, sd = 100)
    sigma ~ dunif(0, 100)
    z1 ~ dbern(psi)   ## indicator variable associated with beta1
    z2 ~ dbern(psi)   ## indicator variable associated with beta2
    psi ~ dbeta(1, 1) ## hyperprior on inclusion probability
    for(i in 1:N) {
        Ypred[i] <- beta0 + beta1 * z1 * x1[i] + beta2 * z2 * x2[i]
        Y[i] ~ dnorm(Ypred[i], sd = sigma)
    }
})
## simulate some data
set.seed(1)
N <- 100
x1 <- runif(N, -1, 1)
x2 <- runif(N, -1, 1)  ## this covariate is not included
Y <- rnorm(N, 1.5 + 2 * x1, sd = 1)

## build the model and configure default MCMC
RJexampleModel <- nimbleModel(code, constants = list(N = N),
                              data = list(Y = Y, x1 = x1, x2 = x2),
                              inits = list(beta0 = 0, beta1 = 0, beta2 = 0,
                                           sigma = sd(Y), z2 = 1, z1 = 1, psi = 0.5))
RJexampleConf <- configureMCMC(RJexampleModel)
## For illustration, look at the default sampler assignments
RJexampleConf$printSamplers()
## [1] conjugate_dnorm_dnorm_additive sampler: beta0
## [2] conjugate_dnorm_dnorm_linear sampler: beta1
## [3] conjugate_dnorm_dnorm_linear sampler: beta2
## [4] RW sampler: sigma
## [5] conjugate_dbeta_dbern_identity sampler: psi
## [6] binary sampler: z1
## [7] binary sampler: z2
At this point we can modify the MCMC configuration object, RJexampleConf
, to use reversible jump samplers for selection on beta1
and beta2
.
configureRJ(conf = RJexampleConf,
targetNodes = c("beta1", "beta2"),
indicatorNodes = c('z1', 'z2'),
control = list(mean = c(0, 0), scale = 2))
The targetNodes
argument gives the coefficients (nodes) for which we want to do variable selection. The indicatorNodes
gives the corresponding indicator nodes, ordered to match targetNodes
. The control
list gives the means and scale (standard deviation) for normal reversible jump proposals for targetNodes
. The means will typically be 0 (by default mean = 0
and scale = 1
), but they could be anything. An optional control
element called fixedValue
can be provided in the non-indicator setting; this gives the value taken by nodes in targetNodes
when they are out of the model (by default, fixedValue
is 0).
All control elements can be scalars or vectors. A scalar value will be used for all targetNodes. A vector value must have the same length as targetNodes and will be used in order.
To use RJMCMC on a vector of coefficients with a corresponding vector of indicator variables, simply provide the variable names for targetNodes
and indicatorNodes
. For example, targetNodes = "beta"
is equivalent to targetNodes = c("beta[1]", "beta[2]")
and indicatorNodes = "z"
is equivalent to indicatorNodes = c("z[1]", "z[2]")
. Expansion of variable names into a vector of node names will occur as described in Section 13.3.1.1. When using this method, both arguments must be provided as variable names to be expanded.
Next we can see the result of configureRJ
by printing the modified list of samplers:
RJexampleConf$printSamplers()
## [1] conjugate_dnorm_dnorm_additive sampler: beta0
## [2] RW sampler: sigma
## [3] conjugate_dbeta_dbern_identity sampler: psi
## [4] RJ_indicator sampler: z1, mean: 0, scale: 2, targetNode: beta1
## [5] RJ_toggled sampler: beta1, samplerType: conjugate_dnorm_dnorm_linear
## [6] RJ_indicator sampler: z2, mean: 0, scale: 2, targetNode: beta2
## [7] RJ_toggled sampler: beta2, samplerType: conjugate_dnorm_dnorm_linear
An RJ_indicator sampler was assigned to each of z1 and z2 in place of the binary sampler, while the samplers for beta1 and beta2 have been changed to RJ_toggled samplers. The latter samplers contain the original samplers, in this case conjugate_dnorm_dnorm_linear samplers, and use them only when the corresponding indicator variable is equal to \(1\) (i.e., when the coefficient is in the model).
Notice that the order of the samplers has changed, since configureRJ
calls removeSampler
for nodes in targetNodes
and indicatorNodes
, and subsequently addSampler
, which appends the sampler to the end of current sampler list. Order can be modified by using setSamplers
.
Also note that configureRJ
modifies the MCMC configuration (first argument) and returns NULL
.
7.9.2 Without indicator variables
We consider the same regression setting, but without the use of indicator variables and with fixed probabilities of including each coefficient in the model.
## Linear regression with intercept and two covariates
code <- nimbleCode({
    beta0 ~ dnorm(0, sd = 100)
    beta1 ~ dnorm(0, sd = 100)
    beta2 ~ dnorm(0, sd = 100)
    sigma ~ dunif(0, 100)
    for(i in 1:N) {
        Ypred[i] <- beta0 + beta1 * x1[i] + beta2 * x2[i]
        Y[i] ~ dnorm(Ypred[i], sd = sigma)
    }
})

## build the model
RJexampleModel2 <- nimbleModel(code, constants = list(N = N),
                               data = list(Y = Y, x1 = x1, x2 = x2),
                               inits = list(beta0 = 0, beta1 = 0, beta2 = 0,
                                            sigma = sd(Y)))
RJexampleConf2 <- configureMCMC(RJexampleModel2)
## print NIMBLE default samplers
RJexampleConf2$printSamplers()
## [1] conjugate_dnorm_dnorm_additive sampler: beta0
## [2] conjugate_dnorm_dnorm_linear sampler: beta1
## [3] conjugate_dnorm_dnorm_linear sampler: beta2
## [4] RW sampler: sigma
In this case, since there are no indicator variables, we need to pass to configureRJ
the prior inclusion probabilities for each node in targetNodes
, by specifying either one common value or a vector of values for the argument priorProb
. This case does not allow for a stochastic prior.
configureRJ(conf = RJexampleConf2,
targetNodes = c("beta1", "beta2"),
priorProb = 0.5,
control = list(mean = 0, scale = 2, fixedValue = c(1.5, 0)))
## print samplers after configureRJ
RJexampleConf2$printSamplers()
## [1] conjugate_dnorm_dnorm_additive sampler: beta0
## [2] RW sampler: sigma
## [3] RJ_fixed_prior sampler: beta1, priorProb: 0.5, mean: 0, scale: 2, fixedValue: 1.5
## [4] RJ_toggled sampler: beta1, samplerType: conjugate_dnorm_dnorm_linear, fixedValue: 1.5
## [5] RJ_fixed_prior sampler: beta2, priorProb: 0.5, mean: 0, scale: 2, fixedValue: 0
## [6] RJ_toggled sampler: beta2, samplerType: conjugate_dnorm_dnorm_linear, fixedValue: 0
Since there are no indicator variables, the RJ_fixed_prior sampler is assigned directly to each of beta1 and beta2, along with the RJ_toggled sampler. The former sets a coefficient to its fixedValue when it is out of the model. The latter invokes the regular sampler for the coefficient only if it is in the model at a given iteration.
If fixedValue is given when using indicatorNodes, the values provided in fixedValue are ignored. However, the same behavior can be obtained in this situation using a different model specification. For example, the model in 7.9.1 can be modified to have beta1 equal to \(1.5\) when not in the model as follows:
for(i in 1:N) {
    Ypred[i] <- beta0 + (1 - z1) * 1.5 * beta1 * x1[i] +
        z1 * beta1 * x1[i] + beta2 * z2 * x2[i]
    Y[i] ~ dnorm(Ypred[i], sd = sigma)
}
7.10 Samplers provided with NIMBLE
Most documentation of MCMC samplers provided with NIMBLE can be found by invoking help(samplers)
in R. Here we provide additional explanation of conjugate samplers and how complete customization can be achieved by making a sampler use an arbitrary loglikelihood function, such as to build a particle MCMC algorithm.
7.10.1 Conjugate (‘Gibbs’) samplers
By default, configureMCMC()
and buildMCMC()
will assign conjugate samplers to all nodes satisfying a conjugate relationship, unless the option useConjugacy = FALSE
is specified.
The current release of NIMBLE supports conjugate sampling of the relationships listed in Table 7.1^{19}.
| Prior Distribution | Sampling (Dependent Node) Distribution | Parameter |
|---|---|---|
| Beta | Bernoulli | prob |
| | Binomial | prob |
| | Negative Binomial | prob |
| Dirichlet | Multinomial | prob |
| | Categorical | prob |
| Flat | Normal | mean |
| | Lognormal | meanlog |
| Gamma | Poisson | lambda |
| | Normal | tau |
| | Lognormal | taulog |
| | Gamma | rate |
| | Inverse Gamma | scale |
| | Exponential | rate |
| | Double Exponential | rate |
| | Weibull | lambda |
| Halfflat | Normal | sd |
| | Lognormal | sdlog |
| Inverse Gamma | Normal | var |
| | Lognormal | varlog |
| | Gamma | scale |
| | Inverse Gamma | rate |
| | Exponential | scale |
| | Double Exponential | scale |
| Normal | Normal | mean |
| | Lognormal | meanlog |
| Multivariate Normal | Multivariate Normal | mean |
| Wishart | Multivariate Normal | prec |
| Inverse Wishart | Multivariate Normal | cov |
Conjugate sampler functions may (optionally) dynamically check that their own posterior likelihood calculations are correct. If incorrect, a warning is issued. However, this functionality will roughly double the run time required for conjugate sampling. By default, this option is disabled in NIMBLE. It may be enabled with nimbleOptions(verifyConjugatePosteriors = TRUE).
If one wants information about conjugate node relationships for other purposes, they can be obtained using the checkConjugacy
method on a model. This returns a named list describing all conjugate relationships. The checkConjugacy
method can also accept a character vector argument specifying a subset of node names to check for conjugacy.
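For example (a sketch, assuming Rmodel is a nimbleModel object containing nodes a[1] and b[1]):

```r
conjInfo <- Rmodel$checkConjugacy()                     ## all conjugate relationships
conjSubset <- Rmodel$checkConjugacy(c("a[1]", "b[1]"))  ## check only these nodes
```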
7.10.2 Customized log-likelihood evaluations: RW_llFunction sampler
Sometimes it is useful to control the log-likelihood calculations used for an MCMC updater instead of simply using the model. For example, one could use a sampler with a log-likelihood that analytically (or numerically) integrates over latent model nodes. Or one could use a sampler with a log-likelihood that comes from a stochastic approximation such as a particle filter (see below), allowing composition of a particle MCMC (PMCMC) algorithm (Andrieu, Doucet, and Holenstein 2010). The RW_llFunction sampler handles this by using a Metropolis-Hastings algorithm with a normal proposal distribution and a user-provided log-likelihood function. To allow compiled execution, the log-likelihood function must be provided as a specialized instance of a nimbleFunction. The log-likelihood function may use the same model as the MCMC as a setup argument (as does the example below), but if so the state of the model should be unchanged during execution of the function (or you must understand the implications otherwise).
The RW_llFunction sampler can be customized using the control list argument to set the initial proposal distribution scale and the adaptive properties for the Metropolis-Hastings sampling. In addition, the control list argument must contain a named llFunction element. This is the specialized nimbleFunction that calculates the log-likelihood; it must accept no arguments and return a scalar double number. The return value must be the total log-likelihood of all stochastic dependents of the target nodes (and, if includesTarget = TRUE, of the target node(s) themselves), or whatever surrogate is being used for the total log-likelihood. This is a required control list element with no default. See help(samplers) for details.
Here is a complete example:
code <- nimbleCode({
    p ~ dunif(0, 1)
    y ~ dbin(p, n)
})

Rmodel <- nimbleModel(code, data = list(y = 3), inits = list(p = 0.5, n = 10))

llFun <- nimbleFunction(
    setup = function(model) { },
    run = function() {
        y <- model$y
        p <- model$p
        n <- model$n
        ll <- lfactorial(n) - lfactorial(y) - lfactorial(n-y) +
            y * log(p) + (n-y) * log(1-p)
        returnType(double())
        return(ll[1])
    }
)

RllFun <- llFun(Rmodel)

mcmcConf <- configureMCMC(Rmodel, nodes = NULL)

mcmcConf$addSampler(target = "p", type = "RW_llFunction",
    control = list(llFunction = RllFun, includesTarget = FALSE))

Rmcmc <- buildMCMC(mcmcConf)
Note that we need to return ll[1] and not just ll because there are no scalar variables in compiled models. Hence y and other variables, and therefore ll, are of dimension 1 (i.e., vectors / one-dimensional arrays), so we need to specify the first element in order for the return type to be a scalar.
7.10.3 Particle MCMC sampler
For state space models, a particle MCMC (PMCMC) sampler can be specified for top-level parameters. This sampler is described in Section 8.1.2.
7.11 Detailed MCMC example: litters
Here is a detailed example of specifying, building, compiling, and running two MCMC algorithms. We use the litters
example from the BUGS examples.
###############################
##### model configuration #####
###############################

# define our model using BUGS syntax
litters_code <- nimbleCode({
    for (i in 1:G) {
        a[i] ~ dgamma(1, .001)
        b[i] ~ dgamma(1, .001)
        for (j in 1:N) {
            r[i,j] ~ dbin(p[i,j], n[i,j])
            p[i,j] ~ dbeta(a[i], b[i])
        }
        mu[i] <- a[i] / (a[i] + b[i])
        theta[i] <- 1 / (a[i] + b[i])
    }
})

# list of fixed constants
constants <- list(G = 2,
                  N = 16,
                  n = matrix(c(13, 12, 12, 11, 9, 10, 9, 9, 8, 11, 8, 10, 13,
                               10, 12, 9, 10, 9, 10, 5, 9, 9, 13, 7, 5, 10, 7, 6,
                               10, 10, 10, 7), nrow = 2))

# list specifying model data
data <- list(r = matrix(c(13, 12, 12, 11, 9, 10, 9, 9, 8, 10, 8, 9, 12, 9,
                          11, 8, 9, 8, 9, 4, 8, 7, 11, 4, 4, 5, 5, 3, 7, 3,
                          7, 0), nrow = 2))

# list specifying initial values
inits <- list(a = c(1, 1),
              b = c(1, 1),
              p = matrix(0.5, nrow = 2, ncol = 16),
              mu = c(.5, .5),
              theta = c(.5, .5))

# build the R model object
Rmodel <- nimbleModel(litters_code,
                      constants = constants,
                      data = data,
                      inits = inits)
###########################################
##### MCMC configuration and building #####
###########################################

# generate the default MCMC configuration;
# only wish to monitor the derived quantity "mu"
mcmcConf <- configureMCMC(Rmodel, monitors = "mu")

# check the samplers assigned by default MCMC configuration
mcmcConf$printSamplers()

# double-check our monitors, and thinning interval
mcmcConf$printMonitors()

# build the executable R MCMC function
mcmc <- buildMCMC(mcmcConf)
# let's try another MCMC, as well,
# this time using the crossLevel sampler for top-level nodes

# generate an empty MCMC configuration
# we need a new copy of the model to avoid compilation errors
Rmodel2 <- Rmodel$newModel()
mcmcConf_CL <- configureMCMC(Rmodel2, nodes = NULL, monitors = "mu")

# add two crossLevel samplers
mcmcConf_CL$addSampler(target = c("a[1]", "b[1]"), type = "crossLevel")
mcmcConf_CL$addSampler(target = c("a[2]", "b[2]"), type = "crossLevel")

# let's check the samplers
mcmcConf_CL$printSamplers()

# build this second executable R MCMC function
mcmc_CL <- buildMCMC(mcmcConf_CL)
###################################
##### compile to C++, and run #####
###################################

# compile the two copies of the model
Cmodel <- compileNimble(Rmodel)
Cmodel2 <- compileNimble(Rmodel2)

# compile both MCMC algorithms, in the same
# project as the R model object
# NOTE: at this time, we recommend compiling ALL
# executable MCMC functions together
Cmcmc <- compileNimble(mcmc, project = Rmodel)
Cmcmc_CL <- compileNimble(mcmc_CL, project = Rmodel2)

# run the default MCMC function,
# and examine the mean of mu[1]
Cmcmc$run(1000)
cSamplesMatrix <- as.matrix(Cmcmc$mvSamples)  # alternative: as.list
mean(cSamplesMatrix[, "mu[1]"])

# run the crossLevel MCMC function,
# and examine the mean of mu[1]
Cmcmc_CL$run(1000)
cSamplesMatrix_CL <- as.matrix(Cmcmc_CL$mvSamples)
mean(cSamplesMatrix_CL[, "mu[1]"])
###################################
#### run multiple MCMC chains #####
###################################

# run 3 chains of the crossLevel MCMC
samplesList <- runMCMC(Cmcmc_CL, niter = 1000, nchains = 3)

lapply(samplesList, dim)
7.12 Comparing different MCMCs with MCMCsuite and compareMCMCs
Please see the compareMCMCs
package for the features previously provided by MCMCsuite
and compareMCMCs
in NIMBLE (until version 0.8.0). The compareMCMCs
package provides tools to automatically run MCMC in nimble (including multiple sampler configurations), WinBUGS, OpenBUGS, JAGS, Stan, or any other engine for which you provide a simple common interface. The package makes it easy to manage comparison metrics and generate html pages with comparison figures.
7.13 Running MCMC chains in parallel
It is possible to run multiple chains in parallel using standard R parallelization packages such as parallel
, foreach
, and future
. However, you must create separate copies of all model and MCMC objects using nimbleModel
, buildMCMC
, compileNimble
, etc. This is because NIMBLE uses Reference Classes and R6 classes, so copying such objects simply creates a new variable name that refers to the original object.
Thus, in your parallel loop or lapply-style statement, you should run nimbleModel and all subsequent calls to create and compile the model and MCMC algorithm within the parallelized block of code, once for each MCMC chain being run in parallel.
For a worked example, please see the parallelization example on our webpage.
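As a minimal sketch of this pattern (the toy model and all object names here are illustrative, not from the manual; each worker builds and compiles its own copies of the model and MCMC):

```r
library(parallel)

run_chain <- function(seed) {
    library(nimble)
    ## Everything is created inside the worker: model, compiled model, MCMC.
    code <- nimbleCode({
        mu ~ dnorm(0, sd = 10)
        for(i in 1:10) y[i] ~ dnorm(mu, sd = 1)
    })
    Rmodel <- nimbleModel(code, data = list(y = rep(c(0.5, 1.5), 5)),
                          inits = list(mu = 0))
    Cmodel <- compileNimble(Rmodel)
    Cmcmc <- compileNimble(buildMCMC(Rmodel), project = Rmodel)
    runMCMC(Cmcmc, niter = 1000, setSeed = seed)
}

cl <- makeCluster(3)
samplesList <- parLapply(cl, 1:3, run_chain)
stopCluster(cl)
```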