Multilevel Modelling with Variational Inference¶
There are two reasons for writing this notebook -
- To have a port of Multilevel modelling from PyMC3 to PyMC4.
- To test the Variational Inference API added this summer.
Radon contamination (Gelman and Hill 2006)¶
Radon is a radioactive gas that enters homes through contact points with the ground. It is a carcinogen that is the primary cause of lung cancer in non-smokers. Radon levels vary greatly from household to household. The EPA did a study of radon levels in 80,000 houses. There are two important predictors:
Measurement in basement or first floor (radon higher in basements)
Measurement of Uranium level available at county level
We will focus on modeling radon levels in Minnesota. The hierarchy in this example is households within county.
The model building is inspired by the TFP port of Multilevel modelling, and the visualizations are borrowed from PyMC3's Multilevel modelling notebook.
import arviz as az
import logging
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc4 as pm
import tensorflow as tf
import xarray as xr
from tensorflow_probability import bijectors as tfb
logging.getLogger("tensorflow").setLevel(logging.ERROR)
%config InlineBackend.figure_format = 'retina'
RANDOM_SEED = 8927
np.random.seed(RANDOM_SEED)
az.style.use('arviz-darkgrid')
Let's fetch the data and start analysing -
data = pd.read_csv(pm.utils.get_data('radon.csv'))

# Log uranium concentration, one value per county
u = np.log(data.Uppm).unique()
mn_counties = data.county.unique()
# Floor of measurement: 0 = basement, 1 = first floor
floor = data.floor.values.astype(np.int32)
counties = len(mn_counties)
# Map each county name to an integer index
county_lookup = dict(zip(mn_counties, range(counties)))
county_idx = data['county_code'].values.astype(np.int32)
data.head()
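Before modelling, it can help to eyeball the outcome variable first. The quick histogram below is a small addition (not part of the original analysis) to get a feel for the spread of the log radon measurements -
# Quick look at the outcome variable we are about to model
data.log_radon.hist(bins=25)
plt.xlabel("Log radon level")
plt.ylabel("Number of households");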
Conventional approaches¶
Before comparing ADVI approximations on hierarchical models, let's model radon exposure by conventional approaches -
Complete pooling:¶
Treat all counties the same, and estimate a single radon level. $$ y_i = \alpha + \beta x_i + \epsilon_i $$ where $y_i$ is the logarithm of radon level in house $i$, $x_i$ is the floor of measurement (either basement or first floor) and $\epsilon_i$ are the errors representing measurement error, temporal within-house variation, or variation among houses. The model directly translates to PyMC4 as -
@pm.model
def pooled_model():
    a = yield pm.Normal('a', loc=0.0, scale=10.0, batch_stack=2)
    loc = a[0] + a[1] * floor
    scale = yield pm.Exponential("sigma", rate=1.0)
    y = yield pm.Normal('y', loc=loc, scale=scale, observed=data.log_radon.values)
Before running the model let’s do some prior predictive checks. These help in incorporating scientific knowledge into our model.
prior_checks = pm.sample_prior_predictive(pooled_model())
prior_checks
To make our lives easier during plotting and diagnosing with ArviZ, we define a function remove_scope that renames all variables in an InferenceData to their actual distribution names.
def remove_scope(idata):
    # Strip the model-name scope prefix, e.g. "pooled_model/a" -> "a"
    for group in idata._groups:
        for var in getattr(idata, group).variables:
            if "/" in var:
                idata.rename(name_dict={var: var.split("/")[-1]}, inplace=True)
    # Give the observation dimension a meaningful name
    idata.rename(name_dict={"y_dim_0": "obs_id"}, inplace=True)
remove_scope(prior_checks)
prior_checks
_, ax = plt.subplots()
prior_checks.assign_coords(coords={"a_dim_0": ["Basement", "First Floor"]}, inplace=True)
prior_checks.prior_predictive.plot.scatter(x="a_dim_0", y="a", color="k", alpha=0.2, ax=ax)
ax.set(xlabel="Level", ylabel="Radon level (Log Scale)");
As there is no coords and dims integration in PyMC4's ModelTemplate, we need a bit of extra manipulation to handle them. Here we use assign_coords on the dimensions of variable a so that they read Basement and First Floor.
Before seeing the data, these priors seem to allow for quite a wide range of the mean log radon level. Let's fire up the Variational Inference machinery and fit the model -
pooled_advi = pm.fit(pooled_model(), num_steps=25_000)
def plot_elbo(loss):
    # The loss returned by pm.fit is the negative ELBO
    plt.plot(loss)
    plt.yscale("log")
    plt.xlabel("Number of iterations")
    plt.ylabel("Negative ELBO")
plot_elbo(pooled_advi.losses)
Looks good; the ELBO seems to have converged. As a sanity check, we will plot the ELBO after fitting each new model to check its convergence.
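For a more numerical check, a quick sketch like the one below (not part of the original notebook) compares the average loss over the two most recent windows of steps; nearly equal values suggest the optimisation has plateaued -
losses = np.asarray(pooled_advi.losses)
# Mean loss over the last two windows of 1,000 steps each
print(losses[-2000:-1000].mean(), losses[-1000:].mean())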
Now we'll draw samples from the posterior distribution and pass them to sample_posterior_predictive to estimate the uncertainty in Basement and First Floor radon levels.
pooled_advi_samples = pooled_advi.approximation.sample(2_000)
pooled_advi_samples
posterior_predictive = pm.sample_posterior_predictive(pooled_model(), pooled_advi_samples)
remove_scope(posterior_predictive)
posterior_predictive
We now want to calculate the highest density interval given by the posterior predictive on radon levels. However, we are not interested in the HDI of each observation but in the HDI of each level (either Basement or First Floor). We first group the posterior_predictive samples using coords and then pass the specific dimensions ("chain", "draw", "obs_id") to az.hdi.
# Wrap floor in a DataArray so xarray can group observations by it
floor = xr.DataArray(floor, dims=("obs_id"))
hdi_helper = lambda ds: az.hdi(ds, input_core_dims=[["chain", "draw", "obs_id"]])
hdi_ppc = posterior_predictive.posterior_predictive["y"].groupby(floor).apply(hdi_helper)["y"]
hdi_ppc
In addition, ArviZ has included the hdi_prob as an attribute of the hdi coordinate; click on its file icon to see it!
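The same value should also be retrievable programmatically through the coordinate's attrs dictionary (a quick check based on the behaviour just described) -
# hdi_prob should appear among the attributes of the hdi coordinate
hdi_ppc.hdi.attrs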
We will now add one extra coordinate to the observed_data group: the Level labels (not indices). This will allow xarray to automatically generate the correct xlabel and xticklabels, so we don't have to worry about labeling too much. In this particular case we will only do one plot, which makes adding a coordinate a bit of an overkill. In many cases, however, we will have several plots, and this approach will automate labeling for all of them. Finally, we will sort by the Level coordinate to make sure Basement is the first value and goes at the left of the plot.
posterior_predictive.rename(name_dict={"a_dim_0": "Level"}, inplace=True)
posterior_predictive.assign_coords({"Level": ["Basement", "First Floor"]}, inplace=True)
level_labels = posterior_predictive.posterior.Level[floor]
posterior_predictive.observed_data = posterior_predictive.observed_data.assign_coords(Level=level_labels).sortby("Level")
Plot the point estimates of the slope and intercept for the complete pooling model.
xvals = xr.DataArray([0, 1], dims="Level", coords={"Level": ["Basement", "First Floor"]})
posterior_predictive.posterior["a"] = posterior_predictive.posterior.a[:, :, 0] + posterior_predictive.posterior.a[:, :, 1] * xvals
pooled_means = posterior_predictive.posterior.mean(dim=("chain", "draw"))
_, ax = plt.subplots()
posterior_predictive.observed_data.plot.scatter(x="Level", y="y", label="Observations", alpha=0.4, ax=ax)
az.plot_hdi(
    [0, 1], hdi_data=hdi_ppc, fill_kwargs={"alpha": 0.2, "label": "Exp. distrib. of Radon levels"}, ax=ax
)
az.plot_hdi(
    [0, 1], posterior_predictive.posterior.a, fill_kwargs={"alpha": 0.5, "label": "Exp. mean HPD"}, ax=ax
)
ax.plot([0, 1], pooled_means.a, label="Exp. mean")
ax.set_ylabel("Log radon level")
ax.legend(ncol=2, fontsize=9, frameon=True);
The 94% interval of the expected value is very narrow, and even narrower for basement measurements, meaning that the model is slightly more confident about these observations. The sampling distribution of individual radon levels is much wider. We can infer that floor level does account for some of the variation in radon levels. We can see, however, that the model underestimates the dispersion in radon levels across households: lots of them lie outside the light orange prediction envelope. The residual errors are also large, indicating high bias. So this model is a good start, but we can't stop there.
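To make the dispersion claim concrete, we can count how many observations fall outside the central 94% posterior predictive interval. The snippet below is a rough, quantile-based sketch (an approximation of the plotted HDI, not part of the original notebook) -
# Share of observed log radon values outside the central 94% predictive band
y_pred = posterior_predictive.posterior_predictive["y"]
lo_band = y_pred.quantile(0.03, dim=("chain", "draw"))
hi_band = y_pred.quantile(0.97, dim=("chain", "draw"))
y_obs = data.log_radon.values
outside = ((y_obs < lo_band) | (y_obs > hi_band)).mean().item()
print(f"{outside:.1%} of observations fall outside the 94% band")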
No pooling:¶
Here we do not pool the estimates of the intercepts, but completely pool the slope estimates, assuming the variance is the same within each county. $$ y_i = \alpha_{j[i]} + \beta x_i + \epsilon_i $$ where $j = 1, \ldots, 85$ indexes the counties.
@pm.model
def unpooled_model():
    # One intercept per county, a common slope and a common noise scale
    a_county = yield pm.Normal('a_county', loc=0.0, scale=10.0, batch_stack=counties)
    beta = yield pm.Normal('beta', loc=0.0, scale=10.0)
    # tf.gather picks each household's county intercept
    loc = tf.gather(a_county, county_idx) + beta * floor
    scale = yield pm.Exponential("sigma", rate=1.0)
    y = yield pm.Normal('y', loc=loc, scale=scale, observed=data.log_radon.values)
unpooled_advi = pm.fit(unpooled_model(), num_steps=25_000)
plot_elbo(unpooled_advi.losses)
unpooled_advi_samples = unpooled_advi.approximation.sample(2_000)
remove_scope(unpooled_advi_samples)
unpooled_advi_samples
Let's plot each county's expected value with its 94% credible interval.
unpooled_advi_samples.assign_coords(coords={"a_county_dim_0": mn_counties}, inplace=True)
az.plot_forest(
    unpooled_advi_samples, var_names="a_county", figsize=(6, 16), combined=True, textsize=8
);