Version 1.1.0 of NIMBLE released

We’ve released the newest version of NIMBLE on CRAN and on our website. NIMBLE is a system for building and sharing analysis methods for statistical models, especially for hierarchical models and computationally-intensive methods (such as MCMC, Laplace approximation, and SMC).

This release provides new functionality as well as various bug fixes and improved error trapping, including:

  • Improving our automatic differentiation (AD) system so it can be used in a wider range of models, including models with stochastic indexing, discrete latent states, and CAR distributions. AD support for these models means that HMC sampling and Laplace approximation can now be used with them.
  • Allowing distributions and functions (whether user-defined or built-in) that lack AD support (such as dinterval, dconstraint, and truncated distributions) to be used and compiled in AD-enabled models. The added flexibility increases the range of models in which one can use AD methods (HMC or Laplace) on some parts of a model and other samplers or methods on other parts.
  • Adding nimIntegrate to the NIMBLE language, providing one-dimensional numerical integration via adaptive quadrature, equivalent to R’s integrate. This can, for example, be used in a user-defined function or distribution for use in model code, such as to implement certain point process or survival models that involve a one-dimensional integral (a short sketch follows this list).
  • Adding a “prior samples” MCMC sampler, which uses an existing set of numerical samples to define the prior distribution of model node(s).
  • Better support of the dCRP distribution in non-standard model structures.
  • Adding error trapping to prevent accidental use of C++ keywords as model variable names.
  • Removing the RW_multinomial MCMC sampler, which was found to generate incorrect posterior results (in cases when a latent state followed a multinomial distribution).
  • Fixing a bug in conjugacy checking in a case of subsets of multivariate nodes.
  • Fixing is.na and is.nan to operate in the expected vectorized fashion.
  • Improving documentation of AD, nimbleHMC, and nimbleSMC in the manual.
  • Updating Eigen (the C++ linear algebra library used by nimble) to version 3.4.0.
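
As a short sketch of nimIntegrate, here is one way to compute a cumulative hazard inside a user-defined nimbleFunction, as might be needed for a survival model. The integrand takes a vector of evaluation points and a vector of parameters; the exact argument names and the layout of the returned vector shown here are assumptions based on the analogy with R’s integrate, so please check help(nimIntegrate) and the User Manual for the definitive interface.

## Integrand: a Gompertz-type hazard a * exp(b * t), evaluated at a vector of times.
hazard <- nimbleFunction(
  run = function(x = double(1), theta = double(1)) {
    return(theta[1] * exp(theta[2] * x))
    returnType(double(1))
  }
)

## Cumulative hazard over (0, t), usable inside a user-defined survival density.
cumHaz <- nimbleFunction(
  run = function(t = double(0), a = double(0), b = double(0)) {
    res <- nimIntegrate(hazard, lower = 0, upper = t, param = c(a, b))
    return(res[1])   ## assuming the first element holds the integral estimate
    returnType(double(0))
  }
)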

Please see the release notes on our website for more details.

nimbleHMC version 0.2.0 released, providing improved HMC performance

nimbleHMC provides Hamiltonian Monte Carlo samplers for use with NIMBLE, in particular NUTS samplers. NIMBLE’s HMC samplers can be flexibly assigned to a subset of model parameters, allowing users to consider various sampling configurations.

We’ve released version 0.2.0 of nimbleHMC, which includes a new default NUTS sampler inspired by Stan’s implementation of NUTS. It also provides an updated version of our previous NUTS sampler (which is based on the original Hoffman and Gelman paper, and is now called the ‘NUTS_classic’ sampler in NIMBLE) that fixes performance issues in version 0.1.1.
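
For example, one can assign the NUTS sampler to a subset of parameters while leaving NIMBLE’s default samplers on the rest. The sketch below uses nimbleHMC’s addHMC helper on a small, hypothetical regression model; the data objects N, x, and y are placeholders, and you should consult help(addHMC) for the full set of arguments.

library(nimble)
library(nimbleHMC)

code <- nimbleCode({
  for(i in 1:N)
    y[i] ~ dnorm(beta[1] + beta[2] * x[i], sd = sigma)
  beta[1] ~ dnorm(0, sd = 10)
  beta[2] ~ dnorm(0, sd = 10)
  sigma ~ dunif(0, 10)
})

model <- nimbleModel(code, constants = list(N = N, x = x), data = list(y = y),
                     inits = list(beta = c(0, 0), sigma = 1),
                     buildDerivs = TRUE)              ## derivatives needed for HMC
conf <- configureMCMC(model)
conf$removeSamplers('beta')                           ## drop the default samplers on beta
addHMC(conf, target = 'beta', type = 'NUTS')          ## assign the new default NUTS sampler
mcmc   <- buildMCMC(conf)
cmodel <- compileNimble(model)
cmcmc  <- compileNimble(mcmc, project = model)
samples <- runMCMC(cmcmc, niter = 2000, nburnin = 1000)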

Version 1.0.1 of NIMBLE released, fixing a bug in version 1.0.0 affecting certain models

We’ve released the newest version of NIMBLE on CRAN and on our website. NIMBLE is a system for building and sharing analysis methods for statistical models, especially for hierarchical models and computationally-intensive methods (such as MCMC and SMC).
Version 1.0.1 follows shortly after 1.0.0 and addresses a change in behavior and a bug, both introduced in version 1.0.0, that could cause data values to be set incorrectly in certain models.
Both cases occur only when a variable (e.g., “x”) contains both stochastic nodes (e.g. “x[2] ~ <some distribution>”) and *either* deterministic nodes (e.g. “x[3] <- <some calculation>”) or right-hand-side-only nodes (e.g. “x[4]” appears only on the right-hand-side, like an explanatory value).
The issue involves a change of behavior (relative to previous NIMBLE versions) when data values are set for some nodes and initial values for other nodes within the same variable (one satisfying the condition above). In version 1.0.0, data values for right-hand-side-only nodes were replaced by initial values (inits) if both were provided. Version 1.0.1 reverts to the previous behavior: data values are not replaced by initial values in that situation.
The bug involves models where (for a variable satisfying the condition above) not every scalar element of the variable is used as a node and some of the nodes in the variable are data. In that situation, data values may be set incorrectly. This could typically occur in models with autoregressive structure directly on some data nodes, such as capture-recapture models in which many individual capture histories are stored in the same variable (indexed by individual and time) and some individuals are not present for the entire time series, leaving unused scalar elements in the variable.
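
As a schematic (and purely hypothetical) illustration of such a variable, the model below has a stochastic node x[2], a deterministic node x[3], a right-hand-side-only node x[4], and an unused element x[1], with both data and inits supplied for parts of x:

code <- nimbleCode({
  x[2] ~ dnorm(0, 1)       ## stochastic node
  x[3] <- 2 * x[2]         ## deterministic node
  y ~ dnorm(x[4], 1)       ## x[4] appears only on the right-hand side
})
model <- nimbleModel(code,
                     data  = list(y = 1.0, x = c(NA, NA, NA, 0.5)),  ## data for x[4]
                     inits = list(x = c(NA, 0, NA, 1)))              ## inits for x[2] and x[4]

Under version 1.0.0, the init for x[4] would have replaced the data value; version 1.0.1 keeps the data value, as in earlier versions.
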
Please see the release notes on our website for more details.

Version 1.0.0 of NIMBLE released, providing automatic differentiation, Laplace approximation, and HMC sampling

We’ve released the newest version of NIMBLE on CRAN and on our website. NIMBLE is a system for building and sharing analysis methods for statistical models, especially for hierarchical models and computationally-intensive methods (such as MCMC and SMC).

Version 1.0.0 provides substantial new functionality. This includes:

  • A Laplace approximation algorithm that allows one to find the MLE for model parameters based on approximating the marginal likelihood in models with continuous random effects/latent process values (see the sketch after this list).
  • A Hamiltonian Monte Carlo (HMC) MCMC sampler implementing the NUTS algorithm (available in the newly-released nimbleHMC package).
  • Support in NIMBLE’s algorithm programming system to obtain derivatives of functions and arbitrary calculations within models.
  • A parameter transformation system allowing algorithms to work in unconstrained parameter spaces when model parameters have constrained domains.
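
As a brief illustration of the Laplace approximation, the following sketch builds and compiles a Laplace approximation for a hypothetical model with top-level parameters 'beta' and 'sigma' and continuous random effects 'u', then maximizes the approximate marginal likelihood. The code, constants, data, and inits objects are placeholders, and the exact arguments and methods are documented in help(buildLaplace).

model <- nimbleModel(code, constants = constants, data = data,
                     inits = inits, buildDerivs = TRUE)   ## derivatives needed for Laplace

laplace <- buildLaplace(model,
                        paramNodes = c('beta', 'sigma'),
                        randomEffectsNodes = 'u')
cmodel   <- compileNimble(model)
claplace <- compileNimble(laplace, project = model)

mle <- claplace$findMLE()                 ## maximize the Laplace-approximated marginal likelihood
results <- summaryLaplace(claplace, mle)  ## MLEs, standard errors, and random-effects estimates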

These are documented via the R help system and a new section at the end of our User Manual. We’re excited for users to try out the new features and let us know of their experiences. In particular, given these major additions to the NIMBLE system, we anticipate the possibility of minor glitches. The best place to reach out for support is still the nimble-users list.

In addition to the new functionality above, other enhancements and bug fixes include:

  • Fixing a bug (previously reported in a nimble-users message) giving incorrect results in NIMBLE’s cross-validation function (`runCrossValidate`) for all but the ‘predictive’ loss function for NIMBLE versions 0.10.0 – 0.13.2.
  • Fixing a bug in conjugacy checking causing incorrect identification of conjugate relationships in models with unusual uses of subsets, supersets, and slices of multivariate normal nodes.
  • Improving control of the `addSampler` method for MCMC.
  • Improving the WAIC system in a few small ways.
  • Enhancing error trapping and warning messages.

Please see the NEWS file in the package source for more details.

Version 0.13.1 of NIMBLE released

We’ve released the newest version of NIMBLE on CRAN and on our website. This version is purely a bug fix release that fixes a bug introduced in our new handling of predictive nodes in version 0.13.0 (released in November). If you installed version 0.13.0, please upgrade to 0.13.1.

Bug in newly-released version 0.13.0 affecting MCMC for models with predictive nodes

We recently released version 0.13.0, which has some improvements in how we handle predictive nodes in NIMBLE’s MCMC engine.

Unfortunately, we realized (thanks to a user post from a couple days ago) that there is a bug in this new approach to predictive nodes.

If you haven’t upgraded to version 0.13.0, simply wait to upgrade until we release a bug fix in 0.13.1 in the next couple weeks.

If you have upgraded to version 0.13.0 and if you have run an MCMC on a model that both (1) has predictive nodes and (2) has multivariate nodes, then the bug might affect your results. Please set:

  nimbleOptions(MCMCusePredictiveDependenciesInCalculations = TRUE)

and then reconfigure/rebuild and rerun your MCMC. The option above will ensure that the MCMC behaves as it would in previous versions of NIMBLE.
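
Concretely, with an already-built model (hypothetical object names below), the workflow looks like:

nimbleOptions(MCMCusePredictiveDependenciesInCalculations = TRUE)

conf  <- configureMCMC(model)    ## reconfigure with the option set
mcmc  <- buildMCMC(conf)         ## rebuild
cmcmc <- compileNimble(mcmc, project = model)
samples <- runMCMC(cmcmc, niter = 10000, nburnin = 1000)   ## rerun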

Version 0.13.0 of NIMBLE released

We’ve released the newest version of NIMBLE on CRAN and on our website. NIMBLE is a system for building and sharing analysis methods for statistical models, especially for hierarchical models and computationally-intensive methods (such as MCMC and SMC).
Version 0.13.0 provides new functionality (in particular improved handling of predictive nodes in MCMC) and minor bug fixes, including:

  • Thoroughly revamping the handling of posterior predictive nodes in the MCMC system: by default, MCMC samplers now exclude predictive dependencies from their internal calculations, which should improve MCMC mixing for models with predictive nodes. Posterior predictive nodes are now sampled, conditional on all other model nodes, at the end of each MCMC iteration.
  • Adding functionality to the MCMC configuration system, including a new replaceSamplers method and updates to the arguments for the addSampler method (see the sketch after this list).
  • Adding an option to the WAIC system to allow additional burnin (in addition to standard MCMC burnin) before calculating online WAIC, thereby allowing inspection of initial samples without forcing them to be used for WAIC.
  • Warning users of unused constants during model building.
  • Fixing bugs that prevented use of variables starting with ‘logProb’ or named ‘i’ in model code.
  • Fixing a bug to prevent infinite recursion in particular cases in conjugacy checking.
  • Fixing a bug in simulating from dcar_normal nodes when multiple nodes are passed to simulate.
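
For example, given an MCMC configuration object and a hypothetical node name, the new replaceSamplers method can be used as follows (see help(configureMCMC) for the full set of arguments):

conf <- configureMCMC(model)
## Replace whatever samplers are currently assigned to 'sigma' with a slice sampler.
conf$replaceSamplers(target = 'sigma', type = 'slice')
conf$printSamplers()
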
Please see the release notes on our website for more details.

NIMBLE virtual short course, January 4-6, 2023

We’ll be holding a virtual training workshop on NIMBLE, January 4-6, 2023 from 8 am to 1 pm US Pacific (California) time each day. NIMBLE is a system for building and sharing analysis methods for statistical models, especially for hierarchical models and computationally-intensive methods (such as MCMC and SMC).

Recently we added support for automatic differentiation (AD) to NIMBLE in a beta release, and the workshop will cover NIMBLE’s AD capabilities in detail.

The workshop will cover the following material:

  • the basic concepts and workflows for using NIMBLE and converting BUGS or JAGS models to work in NIMBLE.
  • overview of different MCMC sampling strategies and how to use them in NIMBLE, including Hamiltonian Monte Carlo (HMC).
  • writing new distributions and functions for more flexible modeling and more efficient computation.
  • tips and tricks for improving computational efficiency.
  • using advanced model components, including Bayesian non-parametric distributions (based on Dirichlet process priors), conditional auto-regressive (CAR) models for spatially correlated random fields, Laplace approximation, and reversible jump samplers for variable selection.
  • an introduction to programming new algorithms in NIMBLE.
  • use of automatic differentiation (AD) in algorithms.
  • calling R and compiled C++ code from compiled NIMBLE models or functions.

If you are interested in attending, please pre-register. Registration fees will be $125 (regular) or $50 (student). We are also offering a process (see the pre-registration form) for students to request a fee waiver.

The workshop will assume attendees have a basic understanding of hierarchical/Bayesian models and MCMC, the BUGS (or JAGS) model language, and some familiarity with R.

Beta version of NIMBLE with automatic differentiation, including HMC sampling and Laplace approximation

We’re excited to announce that NIMBLE now supports automatic differentiation (AD), also known as algorithmic differentiation, in a beta version available on our website. In this beta version, NIMBLE now provides:

  • Hamiltonian Monte Carlo (HMC) sampling for an entire parameter vector or arbitrary subsets of the parameter vector (i.e., combined with other samplers for the remaining parameters). 
  • Laplace approximation for approximate integration over latent states in a model, allowing maximum likelihood estimation and MCMC based on the marginal likelihood (via the RW_llFunction samplers).
  • The ability for users and algorithm developers to write nimbleFunctions that calculate derivatives of functions; many, but not all, of the mathematical operations supported in the NIMBLE language can be differentiated (see the sketch after this list).
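
As a small sketch of the derivative capability, a nimbleFunction can request the value, Jacobian, and Hessian of another nimbleFunction that has been built with derivative support. The details of the interface (e.g., the buildDerivs argument, the enableDerivs option, and the fields of the object returned by nimDerivs) are described in the draft AD manual, so treat the following as illustrative:

library(nimble)
nimbleOptions(enableDerivs = TRUE)   ## enable AD support in the beta, if not already on

## A simple function, with derivative support built for its run code.
f <- nimbleFunction(
  run = function(x = double(1)) {
    return(sum(x^2))
    returnType(double(0))
  },
  buildDerivs = 'run'
)

## Request value (order 0), Jacobian (order 1), and Hessian (order 2) of f.
derivsOfF <- nimbleFunction(
  run = function(x = double(1)) {
    d <- nimDerivs(f(x), order = c(0, 1, 2))
    return(d$jacobian)
    returnType(double(2))
  }
)

compiled <- compileNimble(f, derivsOfF)
compiled$derivsOfF(c(1, 2, 3))   ## Jacobian of sum(x^2) is 2*x, i.e., (2, 4, 6)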

We’re making this beta release available to allow our users to test and evaluate the AD functionality and the new algorithms, but it is not recommended for production use at this stage. So please give it a try, and let us know of any problems or suggestions you have, either via the nimble-users list, bug reports to our GitHub repository, or email to nimble.stats@gmail.com.

You can download the beta version and view an extensive draft manual for the AD functionality.

We plan to release this functionality in the next NIMBLE release on CRAN in the coming months. 

Version 0.12.2 of NIMBLE released, including an important bug fix for some models using Bayesian nonparametrics with the dCRP distribution

We’ve released the newest version of NIMBLE on CRAN and on our website. NIMBLE is a system for building and sharing analysis methods for statistical models, especially for hierarchical models and computationally-intensive methods (such as MCMC and SMC).

Version 0.12.2 is a bug fix release. In particular, this release fixes a bug in our Bayesian nonparametric (BNP) distribution functionality that gives incorrect MCMC results for some models, specifically when using the dCRP distribution when the parameters of the mixture components (i.e., the clusters) have hyperparameters (i.e., base measure parameters) that are unknown and sampled during the MCMC. Here is a basic example of a model structure affected by the bug:

k[1:n] ~ dCRP(alpha, n)
for(i in 1:n) {
  y[i] ~ dnorm(mu[k[i]], 1)
  mu[i] ~ dnorm(mu0, 1) ## mixture component parameters with hyperparameter
}
mu0 ~ dnorm(0, 1) ## unknown cluster hyperparameter

(There is no problem without the hyperparameter layer – i.e., if mu0 is a fixed value – which is the situation in many models.)

We strongly encourage users using models with this type of structure to rerun their analyses, and we apologize for this issue.

Other changes in this release include:

  • Fixing an issue with reversible jump variable selection under a similar situation to the BNP issue discussed above (in particular where there are unknown hyperparameters of the regression coefficients being considered, which would likely be an unusual use case).
  • Fixing a bug preventing setup of conjugate samplers for dwishart or dinvwishart nodes when using dynamic indexing.
  • Fixing a bug preventing use of truncation bounds specified via `data` or `constants`.
  • Fixing a bug preventing MCMC sampling with the LKJ prior for 2×2 matrices.
  • Fixing a bug in `runCrossValidate` affecting extraction of multivariate nodes.
  • Fixing a bug producing incorrect subset assignment into logical vectors in nimbleFunction code.
  • Fixing a bug preventing use of `nimbleExternalCall` with a constant expression.
  • Fixing a bug preventing use of recursion in nimbleFunctions without setup code.

Please see the release notes on our website for more details.