18 Truncation: How does Stan deal with truncation?

Suppose we observed \(y = (1, \dots, 9)\).¹

These observations are drawn from a normally distributed population with unknown mean \(\mu\) and variance \(\sigma^2\), subject to the constraint that \(y < 10\), \[ \begin{aligned}[t] y_i &\sim \mathsf{Normal}(\mu, \sigma^2) I(-\infty, 10) . \end{aligned} \]

With the truncation taken into account, the log likelihood is \[ \log L(y; \mu, \sigma) = \sum_{i = 1}^n \left( \log \phi(y_i; \mu, \sigma^2) - \log \Phi(10; \mu, \sigma^2) \right), \] where \(\phi\) is the normal distribution PDF, and \(\Phi\) is the normal distribution CDF.
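To make the adjustment concrete, here is a minimal Python sketch (using NumPy and SciPy, which are not part of the original example) that evaluates this truncated-normal log likelihood; the function name and the example values of \(\mu\) and \(\sigma\) are illustrative.

```python
import numpy as np
from scipy import stats

def truncated_normal_loglik(y, mu, sigma, upper=10.0):
    """Log likelihood of a normal truncated above at `upper`:
    each observation contributes log phi(y_i) - log Phi(upper)."""
    y = np.asarray(y, dtype=float)
    log_pdf = stats.norm.logpdf(y, loc=mu, scale=sigma)
    log_cdf_upper = stats.norm.logcdf(upper, loc=mu, scale=sigma)
    return float(np.sum(log_pdf - log_cdf_upper))

y = np.arange(1, 10)  # the observed y = (1, ..., 9)
print(truncated_normal_loglik(y, mu=5.0, sigma=3.0))
```

Because \(\log \Phi(10; \mu, \sigma^2) < 0\), the truncated log likelihood is always larger than the untruncated one at the same parameter values.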

The posterior of this model is not well identified by the data, so the mean, \(\mu\), and scale, \(\sigma\), are given informative priors based on the data, \[ \begin{aligned}[t] \mu &\sim \mathsf{Normal}(\bar{y}, s_y) ,\\ \sigma &\sim \mathsf{HalfCauchy}(0, s_y) . \end{aligned} \] where \(\bar{y}\) is the mean of \(y\), and \(s_y\) is the standard deviation of \(y\). Alternatively, we could have standardized the data prior to estimation.
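The data-based hyperparameters are easy to compute directly; a small Python sketch (the variable names are chosen to match the Stan data block below):

```python
import numpy as np

y = np.arange(1, 10)           # observed data y = (1, ..., 9)
mu_mean = y.mean()             # \bar{y}, the prior mean of mu
mu_scale = y.std(ddof=1)       # s_y, the prior scale of mu
sigma_scale = y.std(ddof=1)    # s_y, the prior scale of sigma
print(mu_mean, mu_scale)       # 5.0 and roughly 2.74
```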

18.1 Stan Model

See Stan Development Team (2016), Chapter 11, “Truncated or Censored Data,” for more on how Stan handles truncation and censoring. In Stan, the T operator in a sampling statement,

y ~ distribution(...) T[lower, upper];

is used to adjust the log-posterior contribution for truncation; either bound may be omitted for one-sided truncation, as in T[, U] below.

data {
  int N;
  vector[N] y;
  real U;
  real mu_mean;
  real mu_scale;
  real sigma_scale;
}
parameters {
  real mu;
  real<lower = 0.> sigma;
}
model {
  mu ~ normal(mu_mean, mu_scale);
  sigma ~ cauchy(0., sigma_scale);
  for (i in 1:N) {
    y[i] ~ normal(mu, sigma) T[, U];
  }
}

18.2 Estimation

We can compare these results to those of a model in which the truncation is not taken into account: \[ \begin{aligned}[t] y_i &\sim \mathsf{Normal}(\mu, \sigma^2), \\ \mu &\sim \mathsf{Normal}(\bar{y}, s_y) ,\\ \sigma &\sim \mathsf{HalfCauchy}(0, s_y) . \end{aligned} \]

data {
  int N;
  vector[N] y;
  real mu_mean;
  real mu_scale;
  real sigma_scale;
}
parameters {
  real mu;
  real<lower = 0.> sigma;
}
model {
  mu ~ normal(mu_mean, mu_scale);
  sigma ~ cauchy(0., sigma_scale);
  y ~ normal(mu, sigma);
}

We can compare the densities for \(\mu\) and \(\sigma\) in each of these models.
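As a rough check on the direction of the difference, we can compare maximum likelihood estimates (rather than full posteriors) with and without the truncation adjustment. This Python sketch is not part of the original example; the helper `neg_loglik` and the use of SciPy's general-purpose optimizer are illustrative choices.

```python
import numpy as np
from scipy import stats, optimize

y = np.arange(1, 10, dtype=float)  # observed y = (1, ..., 9)
U = 10.0                           # known upper truncation point

# Ignoring truncation, the MLE is just the sample moments.
mu_plain, sigma_plain = y.mean(), y.std(ddof=0)

# Accounting for truncation: maximize sum log phi(y_i) - n log Phi(U).
def neg_loglik(theta):
    mu, log_sigma = theta          # optimize log(sigma) to keep sigma > 0
    sigma = np.exp(log_sigma)
    return -(stats.norm.logpdf(y, mu, sigma).sum()
             - len(y) * stats.norm.logcdf(U, mu, sigma))

res = optimize.minimize(neg_loglik, x0=[mu_plain, np.log(sigma_plain)])
mu_trunc, sigma_trunc = res.x[0], np.exp(res.x[1])
print(mu_plain, mu_trunc)  # the truncation-aware estimate of mu is larger
```

The truncation-aware estimates of both \(\mu\) and \(\sigma\) are pulled upward, for the same reason the posterior densities differ between the two Stan models.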

Figure (#fig:truncate_plot_density_mu): Posterior density of \(\mu\) when estimated with and without truncation.

Figure (#fig:truncate_plot_density_sigma): Posterior density of \(\sigma\) when estimated with and without truncation.

18.3 Questions

  1. How are the densities of \(\mu\) and \(\sigma\) different under the two models? Why are they different?
  2. Re-estimate the model with improper uniform priors for \(\mu\) and \(\sigma\). How do the posterior distributions change?
  3. Suppose that the truncation points are unknown. Write a Stan model and estimate. See Stan Development Team (2016), Section 11.2 “Unknown Truncation Points” for how to write such a model. How important is the prior you place on the truncation points?

References

Stan Development Team. 2016. Stan Modeling Language Users Guide and Reference Manual, Version 2.14.0. https://github.com/stan-dev/stan/releases/download/v2.14.0/stan-reference-2.14.0.pdf.


  1. This example is derived from Simon Jackman, “Truncation: How does WinBUGS deal with truncation?” BUGS Examples, 2007-07-24, URL.