beanz: Bayesian Analysis of Heterogeneous Treatment Effect

Chenguang Wang and Ravi Varadhan

2018-11-05

## Loading required package: beanz
## Loading required package: Rcpp

Introduction

In patient-centered outcomes research, it is vital to assess the heterogeneity of treatment effects (HTE) when making health care decisions for an individual patient or a group of patients. Nevertheless, it remains challenging to evaluate HTE based on information collected from clinical studies that are often designed and conducted to evaluate the efficacy of a treatment for the overall population. The Bayesian framework offers a principled and flexible approach to estimate and compare treatment effects across subgroups of patients defined by their characteristics.

R package beanz provides functions to facilitate the conduct of Bayesian analysis of HTE and a web-based graphical user interface for users to conduct such Bayesian analysis in an interactive and user-friendly manner.

Data accepted by beanz

There are two types of data structures that beanz recognizes:

- Patient level raw data: individual patient records with response, treatment, censoring, and covariate columns;
- Summary treatment effect data: one row per subgroup, containing the estimated treatment effect and its variance.

The beanz package provides the dataset solvd.sub from the SOLVD trial as an example Patient level raw data dataset.
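For the second format, a Summary treatment effect data frame can be constructed directly. A minimal sketch (the values below are illustrative, not from SOLVD; the column names mirror the bzGetSubgrpRaw output shown later):

```r
## Sketch of Summary treatment effect data: one row per subgroup with the
## estimated treatment effect and its variance. Values are illustrative.
dat.sum <- data.frame(Subgroup = 1:4,
                      lvef     = c(0, 0, 1, 1),
                      sodium   = c(0, 1, 0, 1),
                      Estimate = c(-0.38, -0.79, 0.07, 0.15),
                      Variance = c(0.012, 0.039, 0.046, 0.104),
                      N        = c(562, 237, 223, 104));
```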

Estimate subgroup effect

If Patient level raw data is provided, the package offers the function bzGetSubgrpRaw to estimate the treatment effect within each subgroup. The return value of bzGetSubgrpRaw is a data frame in the Summary treatment effect data format.

An example based on solvd.sub is as follows:

var.cov    <- c("lvef", "sodium", "any.vasodilator.use");
var.resp   <- "y";
var.trt    <- "trt";
var.censor <- "censor";
resptype   <- "survival";

subgrp.effect <- bzGetSubgrpRaw(solvd.sub,
                                  var.resp   = var.resp,
                                  var.trt    = var.trt,
                                  var.cov    = var.cov,
                                  var.censor = var.censor,
                                  resptype   = resptype);
print(subgrp.effect);
##   Subgroup lvef sodium any.vasodilator.use    Estimate   Variance   N
## 1        1    0      0                   0 -0.37783038 0.01212786 562
## 2        2    0      0                   1 -0.34655336 0.01004499 695
## 3        3    0      1                   0 -0.79235451 0.03939983 237
## 4        4    0      1                   1 -0.39334304 0.02969421 250
## 5        5    1      0                   0  0.06776454 0.04629163 223
## 6        6    1      0                   1 -0.23655764 0.02400353 341
## 7        7    1      1                   0  0.15435495 0.10365396 104
## 8        8    1      1                   1  0.05947290 0.07761840 123
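For a survival response, each row above is a treatment effect estimated within one covariate-defined subgroup, i.e. on the log hazard ratio scale. Conceptually, the first row could be reproduced with the survival package along these lines (a sketch only; the exact censoring convention used inside bzGetSubgrpRaw is an assumption):

```r
library(survival)
## Conceptual sketch: a Cox model of treatment within a single subgroup.
## Whether 'censor' is a censoring indicator (so 1 - censor marks the event)
## is an assumption about the solvd.sub coding.
one.sub <- subset(solvd.sub,
                  lvef == 0 & sodium == 0 & any.vasodilator.use == 0);
fit <- coxph(Surv(y, 1 - censor) ~ trt, data = one.sub);
c(Estimate = unname(coef(fit)), Variance = unname(vcov(fit)[1, 1]));
```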

Bayesian HTE models

The function bzCallStan calls rstan::sampling to draw posterior samples under different Bayesian models. Several models are available in the current version of beanz; the following examples show how the No subgroup effect model (nse), the Simple regression model (sr), and the Basic shrinkage model (bs) are called:

var.estvar <- c("Estimate", "Variance");

rst.nse <- bzCallStan("nse", dat.sub=subgrp.effect,
                     var.estvar = var.estvar, var.cov = var.cov,
                     par.pri = c(B=1000),
                     chains=4, iter=4000,
                     warmup=2000, seed=1000, cores=1);
## 
## SAMPLING FOR MODEL 'nse' NOW (CHAIN 1).
## Chain 1: Gradient evaluation took 4.6e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.46 seconds.
## Chain 1: Iteration:    1 / 4000 [  0%]  (Warmup)
## Chain 1: Iteration:  400 / 4000 [ 10%]  (Warmup)
## Chain 1: Iteration:  800 / 4000 [ 20%]  (Warmup)
## Chain 1: Iteration: 1200 / 4000 [ 30%]  (Warmup)
## Chain 1: Iteration: 1600 / 4000 [ 40%]  (Warmup)
## Chain 1: Iteration: 2000 / 4000 [ 50%]  (Warmup)
## Chain 1: Iteration: 2001 / 4000 [ 50%]  (Sampling)
## Chain 1: Iteration: 2400 / 4000 [ 60%]  (Sampling)
## Chain 1: Iteration: 2800 / 4000 [ 70%]  (Sampling)
## Chain 1: Iteration: 3200 / 4000 [ 80%]  (Sampling)
## Chain 1: Iteration: 3600 / 4000 [ 90%]  (Sampling)
## Chain 1: Iteration: 4000 / 4000 [100%]  (Sampling)
## Chain 1:  Elapsed Time: 0.206829 seconds (Warm-up)
## Chain 1:                0.107388 seconds (Sampling)
## Chain 1:                0.314217 seconds (Total)
## Warning: Relative effective sample sizes ('r_eff' argument) not specified.
## For models fit with MCMC, the reported PSIS effective sample sizes and 
## MCSE estimates will be over-optimistic.
rst.sr  <- bzCallStan("sr", dat.sub=subgrp.effect,
                     var.estvar = var.estvar, var.cov = var.cov,
                     par.pri = c(B=1000, C=1000),
                     chains=4, iter=4000,
                     warmup=2000,  seed=1000, cores=1);
## 
## SAMPLING FOR MODEL 'sr' NOW (CHAIN 1).
## Chain 1: Gradient evaluation took 3.4e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.34 seconds.
## Chain 1: Iteration:    1 / 4000 [  0%]  (Warmup)
## Chain 1: Iteration:  400 / 4000 [ 10%]  (Warmup)
## Chain 1: Iteration:  800 / 4000 [ 20%]  (Warmup)
## Chain 1: Iteration: 1200 / 4000 [ 30%]  (Warmup)
## Chain 1: Iteration: 1600 / 4000 [ 40%]  (Warmup)
## Chain 1: Iteration: 2000 / 4000 [ 50%]  (Warmup)
## Chain 1: Iteration: 2001 / 4000 [ 50%]  (Sampling)
## Chain 1: Iteration: 2400 / 4000 [ 60%]  (Sampling)
## Chain 1: Iteration: 2800 / 4000 [ 70%]  (Sampling)
## Chain 1: Iteration: 3200 / 4000 [ 80%]  (Sampling)
## Chain 1: Iteration: 3600 / 4000 [ 90%]  (Sampling)
## Chain 1: Iteration: 4000 / 4000 [100%]  (Sampling)
## Chain 1:  Elapsed Time: 0.259736 seconds (Warm-up)
## Chain 1:                0.263625 seconds (Sampling)
## Chain 1:                0.523361 seconds (Total)
## Warning: Relative effective sample sizes ('r_eff' argument) not specified.
## For models fit with MCMC, the reported PSIS effective sample sizes and 
## MCSE estimates will be over-optimistic.
## Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
rst.bs  <- bzCallStan("bs", dat.sub=subgrp.effect,
                     var.estvar = var.estvar, var.cov = var.cov,
                     par.pri = c(B=1000, D=1),
                     chains=4, iter=4000, warmup=2000,  seed=1000, cores=1);
## 
## SAMPLING FOR MODEL 'bs' NOW (CHAIN 1).
## Chain 1: Gradient evaluation took 2.2e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.22 seconds.
## Chain 1: Iteration:    1 / 4000 [  0%]  (Warmup)
## Chain 1: Iteration:  400 / 4000 [ 10%]  (Warmup)
## Chain 1: Iteration:  800 / 4000 [ 20%]  (Warmup)
## Chain 1: Iteration: 1200 / 4000 [ 30%]  (Warmup)
## Chain 1: Iteration: 1600 / 4000 [ 40%]  (Warmup)
## Chain 1: Iteration: 2000 / 4000 [ 50%]  (Warmup)
## Chain 1: Iteration: 2001 / 4000 [ 50%]  (Sampling)
## Chain 1: Iteration: 2400 / 4000 [ 60%]  (Sampling)
## Chain 1: Iteration: 2800 / 4000 [ 70%]  (Sampling)
## Chain 1: Iteration: 3200 / 4000 [ 80%]  (Sampling)
## Chain 1: Iteration: 3600 / 4000 [ 90%]  (Sampling)
## Chain 1: Iteration: 4000 / 4000 [100%]  (Sampling)
## Chain 1:  Elapsed Time: 0.394887 seconds (Warm-up)
## Chain 1:                0.330107 seconds (Sampling)
## Chain 1:                0.724994 seconds (Total)
## Warning: There were 5 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Relative effective sample sizes ('r_eff' argument) not specified.
## For models fit with MCMC, the reported PSIS effective sample sizes and 
## MCSE estimates will be over-optimistic.
## Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
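The divergent-transition warning from the shrinkage model can often be resolved by raising the sampler's target acceptance rate. Assuming bzCallStan forwards additional arguments to rstan::sampling (an assumption; see ?bzCallStan), one might rerun the model as:

```r
## Sketch: stricter step-size adaptation to reduce divergent transitions.
## Passing 'control' through to rstan::sampling is an assumption.
rst.bs <- bzCallStan("bs", dat.sub = subgrp.effect,
                     var.estvar = var.estvar, var.cov = var.cov,
                     par.pri = c(B = 1000, D = 1),
                     chains = 4, iter = 4000, warmup = 2000, seed = 1000,
                     control = list(adapt_delta = 0.95));
```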

Results presentation

Posterior subgroup treatment effect summary

Posterior subgroup treatment effects can be summarized and presented with the functions bzSummary, bzPlot, and bzForest. These functions allow including a subgroup from another model (e.g., the No subgroup effect model) as a reference in the results.

Simple regression model

sel.grps <- c(1,4,5);
tbl.sub <- bzSummary(rst.sr, ref.stan.rst=rst.nse, ref.sel.grps=1);
print(tbl.sub);
##                Subgroup   Mean    SD   Q025    Q25 Median    Q75   Q975
## 1            Subgroup 1 -0.401 0.095 -0.586 -0.464 -0.402 -0.336 -0.215
## 2            Subgroup 2 -0.381 0.088 -0.556  -0.44  -0.38 -0.322 -0.208
## 3            Subgroup 3 -0.485  0.13 -0.739 -0.575 -0.485 -0.396 -0.238
## 4            Subgroup 4 -0.465 0.124 -0.709  -0.55 -0.466 -0.382 -0.222
## 5            Subgroup 5 -0.062 0.135 -0.327 -0.151 -0.063   0.03  0.199
## 6            Subgroup 6 -0.042 0.119 -0.277 -0.124 -0.042  0.036  0.193
## 7            Subgroup 7 -0.147 0.161 -0.467 -0.256 -0.145 -0.037  0.166
## 8            Subgroup 8 -0.127 0.148 -0.408 -0.229 -0.128 -0.029  0.163
## 9 No subgroup effect(1) -0.322 0.057 -0.432 -0.361 -0.322 -0.284  -0.21
##   ProbLT0
## 1       1
## 2       1
## 3       1
## 4       1
## 5   0.678
## 6   0.638
## 7   0.821
## 8   0.808
## 9       1
bzPlot(rst.sr, sel.grps = sel.grps, ref.stan.rst=rst.nse, ref.sel.grps=1);

bzForest(rst.sr, sel.grps = sel.grps, ref.stan.rst=rst.nse, ref.sel.grps=1);

Basic shrinkage model

tbl.sub <- bzSummary(rst.bs, ref.stan.rst=rst.nse, ref.sel.grps=1);
print(tbl.sub);
##                Subgroup   Mean    SD   Q025    Q25 Median    Q75   Q975
## 1            Subgroup 1 -0.352 0.096 -0.547 -0.413 -0.349 -0.287 -0.167
## 2            Subgroup 2 -0.333 0.086 -0.505  -0.39 -0.333 -0.276 -0.164
## 3            Subgroup 3 -0.518 0.185 -0.923 -0.641 -0.494 -0.375 -0.233
## 4            Subgroup 4 -0.345  0.13 -0.614 -0.423 -0.341 -0.266 -0.087
## 5            Subgroup 5 -0.141 0.183 -0.431 -0.281 -0.166 -0.022  0.262
## 6            Subgroup 6 -0.267 0.122 -0.492 -0.348 -0.275  -0.19 -0.006
## 7            Subgroup 7 -0.167 0.215 -0.505 -0.318 -0.207 -0.043  0.325
## 8            Subgroup 8  -0.18 0.191 -0.489 -0.315 -0.209 -0.067  0.259
## 9 No subgroup effect(1) -0.322 0.057 -0.432 -0.361 -0.322 -0.284  -0.21
##   ProbLT0
## 1       1
## 2       1
## 3       1
## 4   0.994
## 5    0.78
## 6   0.978
## 7   0.793
## 8   0.832
## 9       1
bzPlot(rst.bs, sel.grps = sel.grps, ref.stan.rst=rst.nse, ref.sel.grps=1);

bzForest(rst.bs, sel.grps = sel.grps, ref.stan.rst=rst.nse, ref.sel.grps=1);

Posterior subgroup treatment effect comparison

Posterior treatment effects can be compared between subgroups with the functions bzSummaryComp, bzPlotComp, and bzForestComp.

Simple regression model

tbl.sub <- bzSummaryComp(rst.sr, sel.grps=sel.grps);
print(tbl.sub);
##     Comparison   Mean    SD   Q025    Q25 Median   Q75  Q975 ProbLT0
## 1 Subgroup 4-1 -0.065 0.155 -0.376 -0.166 -0.065  0.04 0.235   0.661
## 2 Subgroup 5-1  0.338 0.164  0.017  0.229  0.339 0.449 0.652   0.021
## 3 Subgroup 5-4  0.404 0.182  0.043  0.281  0.407 0.528 0.754   0.015
bzPlotComp(rst.sr, sel.grps = sel.grps);

bzForestComp(rst.sr, sel.grps = sel.grps);

Basic shrinkage model

tbl.sub <- bzSummaryComp(rst.bs, sel.grps=sel.grps);
print(tbl.sub);
##     Comparison  Mean    SD   Q025   Q25 Median   Q75  Q975 ProbLT0
## 1 Subgroup 4-1 0.008 0.164 -0.317  -0.1  0.008 0.114 0.337    0.48
## 2 Subgroup 5-1 0.211 0.205 -0.143 0.065  0.191 0.345 0.651   0.146
## 3 Subgroup 5-4 0.205 0.225 -0.195  0.05   0.19 0.348  0.68   0.176
bzPlotComp(rst.bs, sel.grps = sel.grps);

bzForestComp(rst.bs, sel.grps = sel.grps);

Overall summary

beanz provides the function bzRptTbl to generate a posterior subgroup treatment effect summary table from the model selected by DIC (i.e., the model with the smallest DIC):

lst.rst     <- list(nse=rst.nse, sr=rst.sr, bs=rst.bs);
tbl.summary <- bzRptTbl(lst.rst, dat.sub = subgrp.effect, var.cov = var.cov);
## Warning: Accessing looic using '$' is deprecated and will be removed in
## a future release. Please extract the looic estimate from the 'estimates'
## component instead.
print(tbl.summary);
##                         Model Subgroup lvef sodium any.vasodilator.use
## Subgroup 1 No subgroup effect        1    0      0                   0
## Subgroup 2 No subgroup effect        2    0      0                   1
## Subgroup 3 No subgroup effect        3    0      1                   0
## Subgroup 4 No subgroup effect        4    0      1                   1
## Subgroup 5 No subgroup effect        5    1      0                   0
## Subgroup 6 No subgroup effect        6    1      0                   1
## Subgroup 7 No subgroup effect        7    1      1                   0
## Subgroup 8 No subgroup effect        8    1      1                   1
##              Mean    SD Prob < 0
## Subgroup 1 -0.322 0.057        1
## Subgroup 2 -0.322 0.057        1
## Subgroup 3 -0.322 0.057        1
## Subgroup 4 -0.322 0.057        1
## Subgroup 5 -0.322 0.057        1
## Subgroup 6 -0.322 0.057        1
## Subgroup 7 -0.322 0.057        1
## Subgroup 8 -0.322 0.057        1

Predictive distribution

The function bzPredSubgrp generates the predictive distribution of the subgroup treatment effects.

pred.dist <- bzPredSubgrp(rst.sr,
                          dat.sub    = subgrp.effect,
                          var.estvar = var.estvar);
head(pred.dist);
##            [,1]        [,2]        [,3]       [,4]       [,5]        [,6]
## [1,] -0.2443403 -0.37026131 -0.48782712 -0.4175711 -0.1818395  0.06148608
## [2,] -0.4006665 -0.34646948 -0.82407855 -0.5298116 -0.5172765 -0.12202406
## [3,] -0.3531079 -0.40878203 -0.98156824 -0.5717369  0.1569483  0.06265959
## [4,] -0.1748258 -0.09883424 -0.06711163 -0.5393822 -0.2630068  0.28170231
## [5,] -0.3573387 -0.48245604 -0.47417926 -0.6468966  0.4829688  0.09943262
## [6,] -0.1552856 -0.45475839 -0.25495726 -0.4290654 -0.1337221 -0.24410314
##             [,7]        [,8]
## [1,] -0.20048760 -0.52368185
## [2,]  0.05190809 -0.69426722
## [3,] -0.33705538  0.04613654
## [4,]  0.19244676 -0.52837679
## [5,]  0.10791837  0.07565519
## [6,] -0.02151492 -0.87574559
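Each column of pred.dist holds the predictive draws for one subgroup, so base-R summaries apply directly; for example, predictive medians and 95% intervals:

```r
## Summarize the predictive draws per subgroup (columns of pred.dist).
round(apply(pred.dist, 2, quantile, probs = c(0.025, 0.5, 0.975)), 3);
```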

Graphical User Interface

With the package shiny installed, beanz provides a web-based graphical user interface (GUI) for conducting the HTE analysis in a user-friendly, interactive manner. The GUI can be started by

bzShiny();

Toolbox

Package beanz provides the function bzGailSimon, which implements the Gail-Simon test for qualitative interactions:

gs.pval <- bzGailSimon(subgrp.effect$Estimate,
                       sqrt(subgrp.effect$Variance));
print(gs.pval);
## [1] 0.9191656

The result shows no significant qualitative interaction according to the Gail-Simon test.
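For reference, the Gail-Simon statistic itself is simple to sketch in base R (Gail and Simon, 1985); this illustrative version may differ in implementation details from bzGailSimon:

```r
## Illustrative Gail-Simon test for qualitative interaction.
## Q+ and Q- accumulate standardized evidence of positive and negative
## subgroup effects; the p-value mixes chi-square tail probabilities
## with binomial weights.
gail.simon <- function(est, se) {
  I     <- length(est);
  q.pos <- sum((est[est > 0] / se[est > 0])^2);
  q.neg <- sum((est[est < 0] / se[est < 0])^2);
  q     <- min(q.pos, q.neg);
  h     <- seq_len(I - 1);
  sum(pchisq(q, df = h, lower.tail = FALSE) * dbinom(h, I - 1, 0.5))
}

## rounded estimates from the subgroup table above give a p-value near 0.92
est <- c(-0.378, -0.347, -0.792, -0.393, 0.068, -0.237, 0.154, 0.059);
se  <- sqrt(c(0.0121, 0.0100, 0.0394, 0.0297, 0.0463, 0.0240, 0.1037, 0.0776));
gail.simon(est, se);
```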