Ben Gamari, Laura Dietz
Bayes-stack is a framework for inference on probabilistic graphical models. The framework supports hierarchical latent variable models, including Latent Dirichlet allocation as well as more complex topic model derivatives. We focus on inference using blocked collapsed Gibbs sampling, but the framework is also suitable for other iterative update methods.
Bayes-stack is designed for parallel execution on multi-core machines. While many researchers see collapsed Gibbs sampling as a hindrance to parallelism, we embrace its robustness against mildly out-of-date state. In bayes-stack, a model is represented as blocks of jointly updated random variables. Each inference worker thread repeatedly picks a block, fetches the current model state, and computes a new setting for the block's variables. It then pushes an update function to a thread responsible for maintaining the global state. This thread accumulates state updates, committing them only periodically to manage memory bandwidth and cache pressure.
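The worker/updater pattern described above can be sketched as follows. This is a minimal illustration, not bayes-stack's actual API: the state is a bare `Int` standing in for the model, the per-block update is hypothetical, and the workers run sequentially here so the example is deterministic, whereas in the real system each worker runs on its own thread.

```haskell
import Control.Concurrent.STM
import Control.Monad (forM_)

type ModelState = Int                  -- stands in for the real model state
type Update     = ModelState -> ModelState

-- Worker: snapshot the current state (which may be mildly out of date),
-- compute a new setting for its block, and push an update function
-- onto the updater's queue.
worker :: TVar ModelState -> TQueue Update -> Int -> IO ()
worker state queue block = do
  _snapshot <- readTVarIO state
  atomically $ writeTQueue queue (+ block)  -- hypothetical per-block update

-- Drain every pending update from the queue.
drainQueue :: TQueue a -> STM [a]
drainQueue q = do
  m <- tryReadTQueue q
  case m of
    Nothing -> pure []
    Just x  -> (x :) <$> drainQueue q

-- Updater: commit all accumulated updates in one batch, limiting
-- write traffic on the shared state.
commitUpdates :: TVar ModelState -> TQueue Update -> IO ()
commitUpdates state queue = atomically $ do
  us <- drainQueue queue
  modifyTVar' state (\s -> foldl (flip ($)) s us)

main :: IO ()
main = do
  state <- newTVarIO 0
  queue <- newTQueueIO
  forM_ [1 .. 4] (worker state queue)
  commitUpdates state queue
  readTVarIO state >>= print             -- prints 10
```

The single updater thread is the design choice of interest: workers never write the shared state directly, so committing can be batched and scheduled independently of sampling.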
Unlike other approaches, in which sets of variables evolve independently for several iterations, bayes-stack synchronizes the state after only a few variables have been processed. This improves convergence while incurring minimal performance cost.
Bayes-stack comes with network-topic-models, a package that demonstrates use of the framework by providing several topic model implementations, including Latent Dirichlet Allocation (LDA), the shared taste model for social network analysis, and the citation influence model for citation graphs.
Haskell’s ability to capture abstraction without compromising performance has enabled us to preserve the purity of the model definition while safely utilizing concurrency. Tools such as GHC’s event log and ThreadScope have been extremely helpful in evaluating the performance characteristics of the sampler.
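For reference, this style of event-log profiling is enabled with standard GHC flags; roughly as below, where the module name is purely illustrative:

```shell
# Build with threading and eventlog support (module name is hypothetical)
ghc -O2 -threaded -eventlog -rtsopts NetworkTopicModels.hs

# Run on 4 cores; -ls writes NetworkTopicModels.eventlog
./NetworkTopicModels +RTS -N4 -ls

# Inspect per-core scheduling and GC behaviour
threadscope NetworkTopicModels.eventlog
```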
Currently our focus is on improving the scalability of inference. While our inference approach should allow a reasonable trade-off between data sharing and performance, much work remains to realize this potential.
We thank Simon Marlow both for discussions concerning parallel performance tuning with GHC and for his continuing work pushing forward the state of high-performance concurrency in Haskell. Furthermore, we are excited about the work on ThreadScope by Duncan Coutts, Peter Wortmann, and others.