Performs a one-stage pairwise or network meta-analysis while addressing aggregate binary or continuous missing participant outcome data via the pattern-mixture model.
Usage
run_model(
data,
measure,
model,
assumption,
heter_prior,
mean_misspar,
var_misspar,
D,
ref,
base_risk,
n_chains,
n_iter,
n_burnin,
n_thin,
inits = NULL,
adjust_wgt = NULL
)
Format
The columns of the data-frame in the argument data refer to the following elements for a continuous outcome:

t | An intervention identifier in each arm.
y | The observed mean value of the outcome in each arm.
sd | The observed standard deviation of the outcome in each arm.
m | The number of missing participant outcome data in each arm.
n | The number of randomised participants in each arm.

For a binary outcome, the columns of the data-frame in the argument data refer to the following elements:

t | An intervention identifier in each arm.
r | The observed number of events of the outcome in each arm.
m | The number of missing participant outcome data in each arm.
n | The number of randomised participants in each arm.
The number of rows in data equals the number of collected trials. Each element appears in data as many times as the maximum number of interventions compared in a trial of the dataset. In pairwise meta-analysis, the maximum number of arms is inherently two. The same holds for a network meta-analysis without multi-arm trials. In the case of network meta-analysis with multi-arm trials, the maximum number of arms exceeds two. See 'Examples' for an illustration of the structure of data for a network with a maximum of four arms. It is not a prerequisite of run_model that the multi-arm trials appear at the bottom of the dataset.
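As an illustration of the one-trial-per-row format described above (hypothetical trial data, not taken from the package), a binary-outcome dataset with two two-arm trials and one three-arm trial could be laid out as follows, with NA filling the unused arms of the two-arm trials:

```r
# Columns t, r, m, n are repeated once per arm (here, up to three arms).
data_sketch <- data.frame(
  t1 = c(1, 1, 1),    t2 = c(2, 3, 2),    t3 = c(NA, NA, 3),   # interventions
  r1 = c(12, 9, 20),  r2 = c(15, 14, 22), r3 = c(NA, NA, 25),  # events
  m1 = c(2, 1, 3),    m2 = c(3, 0, 2),    m3 = c(NA, NA, 4),   # missing
  n1 = c(50, 40, 80), n2 = c(50, 42, 78), n3 = c(NA, NA, 81)   # randomised
)
```

Each row is one trial; the third row is the three-arm trial, so every element appears three times (the maximum number of arms in this hypothetical dataset).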
Arguments
- data: A data-frame of the one-trial-per-row format with arm-level data. See 'Format' for the specification of the columns.
- measure: Character string indicating the effect measure. For a binary outcome, the following can be considered: "OR", "RR", or "RD" for the odds ratio, relative risk, and risk difference, respectively. For a continuous outcome, the following can be considered: "MD", "SMD", or "ROM" for the mean difference, standardised mean difference, and ratio of means, respectively.
- model: Character string indicating the analysis model with values "RE" or "FE" for the random-effects and fixed-effect model, respectively. The default argument is "RE".
- assumption: Character string indicating the structure of the informative missingness parameter. Set assumption equal to one of the following: "HIE-COMMON", "HIE-TRIAL", "HIE-ARM", "IDE-COMMON", "IDE-TRIAL", "IDE-ARM", "IND-CORR", or "IND-UNCORR". The default argument is "IDE-ARM". The abbreviations "IDE", "HIE", and "IND" stand for identical, hierarchical, and independent, respectively; "CORR" and "UNCORR" stand for correlated and uncorrelated, respectively.
- heter_prior: A list of three elements in the following order: 1) a character string indicating the distribution with (currently available) values "halfnormal", "uniform", "lognormal", or "logt"; 2) two numeric values that refer to the parameters of the selected distribution. For "lognormal" and "logt", these numbers refer to the mean and precision, respectively. For "halfnormal", these numbers refer to zero and the scale parameter (equal to 4 or 1, being the corresponding precision of the scale parameter 0.5 or 1). For "uniform", these numbers refer to the minimum and maximum of the distribution. See 'Details' in heterogeneity_param_prior.
- mean_misspar: A scalar or a numeric vector of two values for the mean of the normal distribution of the informative missingness parameter (see 'Details'). The default argument is 0 and corresponds to the missing-at-random assumption. See also 'Details' in missingness_param_prior.
- var_misspar: A positive non-zero number for the variance of the normal distribution of the informative missingness parameter. When measure is "OR", "MD", or "SMD", the default argument is 1. When measure is "ROM", the default argument is 0.04.
- D: A binary number for the direction of the outcome. Set D = 1 for a beneficial outcome and D = 0 for a harmful outcome.
- ref: An integer specifying the reference intervention. The number should match an intervention identifier under element t in data (see 'Format').
- base_risk: A scalar, a vector of length three with elements sorted in ascending order, or a matrix with two columns and number of rows equal to the number of relevant trials. In the case of a scalar or vector, the elements should lie in the interval (0, 1) (see 'Details'). If base_risk has not been defined, the function uses the median event risk for the reference intervention from the corresponding trials in data. This argument is only relevant for a binary outcome.
- n_chains: Positive integer specifying the number of chains for the MCMC sampling; an argument of the jags function of the R-package R2jags. The default argument is 2.
- n_iter: Positive integer specifying the number of iterations for the MCMC sampling; an argument of the jags function of the R-package R2jags. The default argument is 10000.
- n_burnin: Positive integer specifying the number of iterations to discard at the beginning of the MCMC sampling; an argument of the jags function of the R-package R2jags. The default argument is 1000.
- n_thin: Positive integer specifying the thinning rate for the MCMC sampling; an argument of the jags function of the R-package R2jags. The default argument is 1.
- inits: A list with the initial values for the parameters; an argument of the jags function of the R-package R2jags. The default argument is NULL, and JAGS generates the initial values.
- adjust_wgt: A positive numeric vector with length equal to the number of studies in the network, or a positive numeric matrix with two columns and number of rows equal to the number of studies in the network. The elements comprise study-specific weights. This argument is optional. See 'Details'.
Value
A list of R2jags output on the summaries of the posterior distribution, and the Gelman-Rubin convergence diagnostic (Gelman and Rubin, 1992) of the following monitored parameters for a fixed-effect pairwise meta-analysis:
- EM: The estimated summary effect measure (according to the argument measure).
- EM_LOR: The estimated summary odds ratio in the logarithmic scale when measure = "RR" or measure = "RD".
- dev_o: The deviance contribution of each trial-arm based on the observed outcome.
- hat_par: The fitted outcome at each trial-arm.
- phi: The informative missingness parameter.
For a fixed-effect network meta-analysis, the output additionally includes:
- SUCRA: The surface under the cumulative ranking curve for each intervention.
- SUCRA_LOR: The surface under the cumulative ranking curve for each intervention under the odds ratio effect measure when measure = "RR" or measure = "RD".
- effectiveness: The ranking probability of each intervention for every rank.
For a random-effects pairwise meta-analysis, the output additionally includes the following elements:
- EM_pred: The predicted summary effect measure (according to the argument measure).
- EM_pred_LOR: The predicted summary odds ratio in the logarithmic scale when measure = "RR" or measure = "RD".
- delta: The estimated trial-specific effect measure (according to the argument measure).
- tau: The between-trial standard deviation.
In network meta-analysis, EM and EM_pred refer to all possible pairwise comparisons of interventions in the network. Furthermore, tau is typically assumed to be common for all observed comparisons in the network. For a multi-arm trial, we estimate a total of T-1 delta for comparisons with the baseline intervention of the trial (found in the first column of the element t), with T being the number of interventions in the trial.
Furthermore, the output includes the following elements:
- leverage_o: The leverage for the observed outcome at each trial-arm.
- sign_dev_o: The sign of the difference between observed and fitted outcome at each trial-arm.
- model_assessment: A data-frame on the measures of model assessment: deviance information criterion, number of effective parameters, and total residual deviance.
- indic: The sign of the basic parameters in relation to the reference intervention as specified in the argument ref.
- jagsfit: An object of S3 class jags with the posterior results on all monitored parameters to be used in the mcmc_diagnostics function.
The run_model function also returns the arguments data, measure, model, assumption, heter_prior, mean_misspar, var_misspar, D, ref, base_risk, n_chains, n_iter, n_burnin, and n_thin, as specified by the user, to be inherited by other functions of the package.
Details
The model runs in JAGS
and the progress of the simulation
appears on the R console. The output of run_model
is used as an S3
object by other functions of the package to be processed further and
provide an end-user-ready output.
The data_preparation
function is called to prepare the data
for the Bayesian analysis. data_preparation
creates the
pseudo-data-frames m_new and I, which have the same
dimensions as the element N. m_new takes the zero
takes the zero
value for the observed trial-arms with unreported missing participant
outcome data (i.e., m
equals NA
for the corresponding
trial-arms), the same value with m
for the observed trial-arms with
reported missing participant outcome data, and NA
for the unobserved
trial-arms. I
is a dummy pseudo-data-frame and takes the value one
for the observed trial-arms with reported missing participant outcome data,
the zero value for the observed trial-arms with unreported missing
participant outcome data (i.e., m_new
equals zero for the
corresponding trial-arms), and NA
for the unobserved trial-arms.
Thus, I
indicates whether missing participant outcome data have been
collected for the observed trial-arms. If the user has not defined the
element m in data
, m_new
and I
take the zero
value for all observed trial-arms to indicate that no missing participant
outcome data have been collected for the analysed outcome. See 'Details' in
data_preparation
.
Furthermore, data_preparation sorts the interventions across
the arms of each trial in ascending order and rearranges the
remaining elements in data correspondingly (see 'Format').
data_preparation considers the first column in t as
being the control arm for every trial. Thus, this sorting ensures that
interventions with a lower identifier are consistently treated as the
control arm in each trial. This is particularly relevant in
non-star-shaped networks.
The model is updated until convergence using the
autojags
function of the R-package
R2jags with 2 updates and
number of iterations and thinning equal to n_iter
and n_thin
,
respectively.
To perform a Bayesian pairwise or network meta-analysis, the
prepare_model
function is called which contains the WinBUGS
code as written by Dias et al. (2013a) for binomial and normal likelihood to
analyse aggregate binary and continuous outcome data, respectively.
prepare_model
uses the consistency model (as described in
Lu and Ades (2006)) to estimate all possible comparisons in the network.
It also accounts for the multi-arm trials by assigning conditional
univariate normal distributions on the underlying trial-specific effect
size of comparisons with the baseline arm of the multi-arm trial
(Dias et al., 2013a).
The code of Dias et al. (2013a) has been extended to incorporate the
pattern-mixture model to adjust the underlying outcome in each arm of
every trial for missing participant outcome data (Spineli et al., 2021;
Spineli, 2019a; Turner et al., 2015). The assumptions about the
missingness parameter are specified using the arguments mean_misspar
and var_misspar
. Specifically, run_model
considers the
informative missingness odds ratio in the logarithmic scale for binary
outcome data (Spineli, 2019a; Turner et al., 2015; White et al., 2008), the
informative missingness difference of means when measure
is
"MD"
or "SMD"
, and the informative missingness ratio of means
in the logarithmic scale when measure
is "ROM"
(Spineli et al., 2021; Mavridis et al., 2015).
When assumption
is trial-specific (i.e., "IDE-TRIAL"
or
"HIE-TRIAL"
), or independent (i.e., "IND-CORR"
or
"IND-UNCORR"
), only one numeric value can be assigned to
mean_misspar
because the same missingness scenario is applied to all
trials and trial-arms of the dataset, respectively. When assumption
is "IDE-ARM" or "HIE-ARM", a maximum of two
different or identical numeric values can be assigned as a
vector to mean_misspar: the first value refers to the experimental
arm, and the second value refers to the control arm of a trial.
In the case of a network, the first value is considered for all
non-reference interventions and the second value is considered for the
reference intervention of the network (i.e., the intervention with
identifier equal to ref
). This is necessary to ensure transitivity
in the assumptions for the missingness parameter across the network
(Spineli, 2019b).
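A minimal sketch of the two admissible shapes of mean_misspar described above (the numeric values are illustrative assumptions, not recommendations):

```r
# One common value when the structure is trial-specific or independent
# (e.g., "IDE-TRIAL", "HIE-TRIAL", "IND-CORR", "IND-UNCORR"):
mean_misspar <- 0             # missing-at-random scenario (the default)

# A two-element vector when the structure is arm-specific
# ("IDE-ARM" or "HIE-ARM"): first value for the experimental
# (non-reference) arm, second value for the control (reference) arm.
mean_misspar <- c(-0.5, 0)    # hypothetical scenario: worse outcomes
                              # among missing participants in the
                              # experimental arm only
```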
When there is at least one trial-arm with unreported missing participant
outcome data (i.e., m
equals NA
for the corresponding
trial-arms) or when missing participant outcome data have not been
collected for the analysed outcome (i.e., m
is missing in
data
), run_model
assigns the assumption "IND-UNCORR"
to assumption
.
Currently, there are no empirically-based prior distributions for the
informative missingness parameters. The user may refer to Spineli (2019),
Turner et al. (2015), Mavridis et al. (2015), and White et al. (2008) to
determine mean_misspar
and select a proper value for
var_misspar
.
The scalar base_risk
refers to a fixed baseline risk for the
selected reference intervention (as specified with ref
).
When base_risk
is a three-element vector, it refers to a random
baseline risk and the elements should be sorted in ascending order as they
refer to the lower bound, mean value, and upper bound of the 95%
confidence interval for the baseline risk for the selected reference
intervention. The baseline_model
function is called to
calculate the mean and variance of the approximately normal distribution of
the logit of an event for ref
using these three elements
(Dias et al., 2018). When base_risk
is a matrix, it refers to the
predicted baseline risk with first column being the number of events, and
second column being the sample size of the corresponding trials on the
selected reference intervention. Then the baseline_model
function is called that contains the WinBUGS code as written by Dias et al.
(2013b) for the hierarchical baseline model. The posterior mean and
precision of the predictive distribution of the logit of an event
for the selected reference intervention are plugged in the WinBUGS code for
the relative effects model (via the prepare_model
function).
The matrix base_risk should not comprise the trials in data
that include the reference intervention (ref), unless justified (Dias et al., 2018).
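The three admissible forms of base_risk described above can be sketched as follows (all numeric values are hypothetical):

```r
# 1) Scalar: a fixed baseline risk for the reference intervention.
base_risk <- 0.25

# 2) Three-element vector in ascending order: lower bound, mean value,
#    and upper bound of the 95% confidence interval for the baseline
#    risk (random baseline risk).
base_risk <- c(0.18, 0.25, 0.33)

# 3) Matrix with one row per trial on the reference intervention:
#    first column the number of events, second column the sample size
#    (predicted baseline risk via the hierarchical baseline model).
base_risk <- matrix(c(10, 14,     # events in two hypothetical trials
                      60, 72),    # corresponding sample sizes
                    ncol = 2)
```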
To obtain unique absolute risks for each intervention, the network
meta-analysis model has been extended to incorporate the transitive risks
framework, namely, an intervention has the same absolute risk regardless of
the comparator intervention(s) in a trial (Spineli et al., 2017).
The absolute risks are a function of the odds ratio (the base-case
effect measure for a binary outcome) and the selected baseline risk for the
reference intervention (ref
) (Appendix in Dias et al., 2013a).
We advocate using the odds ratio as an effect measure for its desirable
mathematical properties. Then, the relative risk and risk difference can be
obtained as a function of the absolute risks of the corresponding
interventions in the comparison of interest. Hence, regardless of the
selected measure
for a binary outcome, run_model
performs
pairwise or network meta-analysis based on the odds ratio.
When adjust_wgt
is defined, run_model
gives less weight to
studies with smaller values, and more weight to studies with larger values.
Specifically, the model weights the (contribution of the) studies by
inflating the between-study variance of the underlying treatment effects of
the studies (Proctor et al., 2022). This approach is only relevant for the
random-effects model (model = "RE"). When adjust_wgt is
is
specified as a matrix, the columns pertain to the bounds of the uniform
distribution. Then, for each study, prepare_model
samples the
weights from the corresponding uniform distribution. This is similar to the
enrichment-through-weighting approach implemented by Proctor et al. (2022).
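A sketch of the two admissible shapes of adjust_wgt for a hypothetical network of three studies (the values are illustrative; the matrix interpretation assumes the first and second columns hold the lower and upper bounds of each study's uniform distribution, respectively):

```r
# Fixed study-specific weights: down-weight the second study.
adjust_wgt <- c(1, 0.5, 1)

# Bounds of study-specific uniform distributions from which the
# weights are sampled (one row per study).
adjust_wgt <- matrix(c(0.3, 0.4, 1,    # lower bounds
                       0.7, 0.8, 1),   # upper bounds
                     ncol = 2)
```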
References
Cooper NJ, Sutton AJ, Morris D, Ades AE, Welton NJ. Addressing between-study heterogeneity and inconsistency in mixed treatment comparisons: Application to stroke prevention treatments in individuals with non-rheumatic atrial fibrillation. Stat Med 2009;28(14):1861–81. doi: 10.1002/sim.3594
Dias S, Ades AE, Welton NJ, Jansen JP, Sutton AJ. Network Meta-Analysis for Decision Making. Chichester (UK): Wiley; 2018.
Dias S, Sutton AJ, Ades AE, Welton NJ. Evidence synthesis for decision making 2: a generalized linear modeling framework for pairwise and network meta-analysis of randomized controlled trials. Med Decis Making 2013a;33(5):607–17. doi: 10.1177/0272989X12458724
Dias S, Welton NJ, Sutton AJ, Ades AE. Evidence synthesis for decision making 5: the baseline natural history model. Med Decis Making 2013b;33(5):657–70. doi: 10.1177/0272989X13485155
Gelman A, Rubin DB. Inference from iterative simulation using multiple sequences. Stat Sci 1992;7(4):457–72. doi: 10.1214/ss/1177011136
Lu G, Ades AE. Assessing evidence inconsistency in mixed treatment comparisons. J Am Stat Assoc 2006;101:447–59. doi: 10.1198/016214505000001302
Mavridis D, White IR, Higgins JP, Cipriani A, Salanti G. Allowing for uncertainty due to missing continuous outcome data in pairwise and network meta-analysis. Stat Med 2015;34(5):721–41. doi: 10.1002/sim.6365
Proctor T, Zimmermann S, Seide S, Kieser M. A comparison of methods for enriching network meta-analyses in the absence of individual patient data. Res Synth Methods 2022;13(6):745–759. doi: 10.1002/jrsm.1568.
Spineli LM, Kalyvas C, Papadimitropoulou K. Continuous(ly) missing outcome data in network meta-analysis: a one-stage pattern-mixture model approach. Stat Methods Med Res 2021;30(4):958–75. doi: 10.1177/0962280220983544
Spineli LM. An empirical comparison of Bayesian modelling strategies for missing binary outcome data in network meta-analysis. BMC Med Res Methodol 2019a;19(1):86. doi: 10.1186/s12874-019-0731-y
Spineli LM. Modeling missing binary outcome data while preserving transitivity assumption yielded more credible network meta-analysis results. J Clin Epidemiol 2019b;105:19–26. doi: 10.1016/j.jclinepi.2018.09.002
Spineli LM, Brignardello-Petersen R, Heen AF, Achille F, Brandt L, Guyatt GH, et al. Obtaining absolute effect estimates to facilitate shared decision making in the context of multiple-treatment comparisons. Abstracts of the Global Evidence Summit, Cape Town, South Africa. Cochrane Database of Systematic Reviews 2017;9(Suppl 1):18911.
Turner NL, Dias S, Ades AE, Welton NJ. A Bayesian framework to account for uncertainty due to missing binary outcome data in pairwise meta-analysis. Stat Med 2015;34(12):2062–80. doi: 10.1002/sim.6475
White IR, Higgins JP, Wood AM. Allowing for uncertainty due to missing data in meta-analysis–part 1: two-stage methods. Stat Med 2008;27(5):711–27. doi: 10.1002/sim.3008
Examples
data("nma.baker2009")
# Show the first six trials of the dataset
head(nma.baker2009)
#> study t1 t2 t3 t4 r1 r2 r3 r4 m1 m2 m3 m4 n1 n2 n3 n4
#> 1 Llewellyn-Jones, 1996 1 4 NA NA 3 0 NA NA 1 0 NA NA 8 8 NA NA
#> 2 Paggiaro, 1998 1 4 NA NA 51 45 NA NA 27 19 NA NA 139 142 NA NA
#> 3 Mahler, 1999 1 7 NA NA 47 28 NA NA 23 9 NA NA 143 135 NA NA
#> 4 Casaburi, 2000 1 8 NA NA 41 45 NA NA 18 12 NA NA 191 279 NA NA
#> 5 van Noord, 2000 1 7 NA NA 18 11 NA NA 8 7 NA NA 50 47 NA NA
#> 6 Rennard, 2001 1 7 NA NA 41 38 NA NA 29 22 NA NA 135 132 NA NA
# \donttest{
# Perform a random-effects network meta-analysis
# Note: Ideally, set 'n_iter' to 10000 and 'n_burnin' to 1000
run_model(data = nma.baker2009,
measure = "OR",
model = "RE",
assumption = "IDE-ARM",
heter_prior = list("halfnormal", 0, 1),
mean_misspar = c(0, 0),
var_misspar = 1,
D = 0,
ref = 1,
n_chains = 3,
n_iter = 1000,
n_burnin = 100,
n_thin = 1)
#> JAGS generates initial values for the parameters.
#> Running the model ...
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 100
#> Unobserved stochastic nodes: 148
#> Total graph size: 2620
#>
#> Initializing model
#>
#> ... Updating the model until convergence
#> $EM
#> mean sd 2.5% 25% 50% 75%
#> EM[2,1] -0.95204059 0.4794402 -1.86268263 -1.298319343 -0.94838067 -0.58364102
#> EM[3,1] -0.70931522 0.4432734 -1.61397667 -0.995171505 -0.71582090 -0.39240989
#> EM[4,1] -0.24712422 0.2720997 -0.72439335 -0.449340749 -0.24240817 -0.08132679
#> EM[5,1] -0.38521170 0.2961804 -0.90787612 -0.595388052 -0.41237292 -0.18420376
#> EM[6,1] -0.09852908 0.2536017 -0.61687827 -0.265312886 -0.08908474 0.08187280
#> EM[7,1] -0.46762454 0.1712873 -0.79919485 -0.591485198 -0.47203258 -0.33620053
#> EM[8,1] -0.48879494 0.1727219 -0.79301058 -0.607023257 -0.50651275 -0.39001735
#> EM[3,2] 0.24272536 0.5075921 -0.76069501 -0.073299780 0.22538107 0.58518474
#> EM[4,2] 0.70491636 0.5117273 -0.19581232 0.319529107 0.67467810 1.06008959
#> EM[5,2] 0.56682888 0.5120178 -0.36173899 0.183556700 0.57683863 0.91478543
#> EM[6,2] 0.85351151 0.4618588 0.01417983 0.521429187 0.82092062 1.16855816
#> EM[7,2] 0.48441605 0.4397853 -0.33229082 0.178172674 0.45614983 0.77231967
#> EM[8,2] 0.46324564 0.4432027 -0.38805643 0.157745517 0.44534200 0.76515308
#> EM[4,3] 0.46219100 0.4815025 -0.50046763 0.153207670 0.44845338 0.76049662
#> EM[5,3] 0.32410352 0.4522569 -0.51886563 -0.008591434 0.31278609 0.65441408
#> EM[6,3] 0.61078614 0.4452511 -0.28019938 0.334734935 0.59822205 0.90658836
#> EM[7,3] 0.24169069 0.4380255 -0.63661076 -0.040984207 0.23521300 0.54213897
#> EM[8,3] 0.22052028 0.4394324 -0.65433248 -0.049085123 0.22066397 0.50315897
#> EM[5,4] -0.13808748 0.3843198 -0.91724696 -0.370387616 -0.15189979 0.11166141
#> EM[6,4] 0.14859514 0.3392953 -0.54656791 -0.066875464 0.17350140 0.36978350
#> EM[7,4] -0.22050031 0.2828593 -0.81890156 -0.397501581 -0.19707001 -0.04951913
#> EM[8,4] -0.24167072 0.2873253 -0.86703212 -0.428354671 -0.19871853 -0.01191049
#> EM[6,5] 0.28668262 0.3670489 -0.42879303 0.026126908 0.30468874 0.55558935
#> EM[7,5] -0.08241283 0.3009944 -0.78868703 -0.271965550 -0.04861505 0.13445575
#> EM[8,5] -0.10358324 0.3223878 -0.79634167 -0.330827550 -0.08102303 0.15740611
#> EM[7,6] -0.36909546 0.2661312 -0.89493168 -0.539397644 -0.37952884 -0.18341981
#> EM[8,6] -0.39026586 0.2569948 -0.92434139 -0.562250938 -0.36837912 -0.22314585
#> EM[8,7] -0.02117041 0.1805161 -0.38701265 -0.147704616 -0.01047825 0.11263858
#> 97.5% Rhat n.eff
#> EM[2,1] -0.05773006 1.043450 51
#> EM[3,1] 0.12506095 1.034442 67
#> EM[4,1] 0.33966243 1.099859 30
#> EM[5,1] 0.22649483 1.008371 430
#> EM[6,1] 0.35867727 1.109187 23
#> EM[7,1] -0.16334827 1.041462 62
#> EM[8,1] -0.09526632 1.192267 17
#> EM[3,2] 1.22485452 1.003592 650
#> EM[4,2] 1.74651663 1.004772 470
#> EM[5,2] 1.56185541 1.028603 75
#> EM[6,2] 1.78626870 1.030219 70
#> EM[7,2] 1.37877319 1.024095 92
#> EM[8,2] 1.33085361 1.042044 56
#> EM[4,3] 1.45776138 1.011869 3000
#> EM[5,3] 1.15069585 1.024310 110
#> EM[6,3] 1.48886384 1.013543 160
#> EM[7,3] 1.10223912 1.014876 180
#> EM[8,3] 1.10058586 1.018937 120
#> EM[5,4] 0.57348391 1.043534 97
#> EM[6,4] 0.79608022 1.034285 110
#> EM[7,4] 0.34518394 1.032838 96
#> EM[8,4] 0.19691008 1.042715 63
#> EM[6,5] 0.95615881 1.025735 87
#> EM[7,5] 0.44594686 1.005913 400
#> EM[8,5] 0.40664317 1.019023 120
#> EM[7,6] 0.13739640 1.056560 42
#> EM[8,6] 0.09487756 1.014979 430
#> EM[8,7] 0.29551718 1.089213 27
#>
#> $dev_o
#> mean sd 2.5% 25% 50% 75%
#> dev.o[1,1] 2.3071803 2.1933933 0.0073567540 0.61179928 1.6530164 3.3683965
#> dev.o[2,1] 0.9079757 1.3118594 0.0007120113 0.08563851 0.4070833 1.2354509
#> dev.o[3,1] 1.0839551 1.4407935 0.0012125099 0.11703586 0.5582722 1.4755065
#> dev.o[4,1] 0.7090118 1.0290274 0.0004398211 0.07683608 0.3140766 0.9339732
#> dev.o[5,1] 0.6831468 1.0064211 0.0007563215 0.06818154 0.3071991 0.8623092
#> dev.o[6,1] 0.9762501 1.3266782 0.0011329538 0.10685918 0.4471693 1.3494932
#> dev.o[7,1] 0.7514604 1.1056381 0.0007320714 0.07975755 0.3340751 0.9963726
#> dev.o[8,1] 0.6935655 0.9593776 0.0006833240 0.06629988 0.3122286 0.9322861
#> dev.o[9,1] 0.8251130 1.1003219 0.0009207312 0.08982647 0.3959668 1.1330745
#> dev.o[10,1] 0.6048649 0.8292364 0.0005580268 0.06672803 0.2917977 0.8091903
#> dev.o[11,1] 0.9290830 1.2705690 0.0011233435 0.09435495 0.4357913 1.2198596
#> dev.o[12,1] 1.2110815 1.4465765 0.0020073997 0.16405302 0.6885742 1.7802372
#> dev.o[13,1] 1.2057704 1.5635631 0.0015367227 0.13534012 0.6102129 1.6610500
#> dev.o[14,1] 0.9112328 1.2453573 0.0010522737 0.09839312 0.4244318 1.2376709
#> dev.o[15,1] 1.0229530 1.4125146 0.0014979551 0.10846138 0.4860898 1.3733180
#> dev.o[16,1] 1.2182700 1.6519888 0.0012534494 0.13107383 0.5611781 1.7171586
#> dev.o[17,1] 2.1296344 2.2864530 0.0040926144 0.39400492 1.4164747 3.1398023
#> dev.o[18,1] 1.1975034 1.5444925 0.0015516081 0.14238018 0.6177234 1.6181070
#> dev.o[19,1] 1.6882705 1.8258474 0.0048631820 0.33344766 1.1022939 2.4720228
#> dev.o[20,1] 0.7096011 1.0402504 0.0005855375 0.06993335 0.3045210 0.9214136
#> dev.o[21,1] 1.0818207 1.3966292 0.0014758240 0.13128089 0.5445654 1.5012565
#> dev.o[1,2] 3.3350277 1.9702121 0.6974765052 1.88606525 2.9328227 4.3556750
#> dev.o[2,2] 0.8669448 1.2145900 0.0007352499 0.09500829 0.4139336 1.1427352
#> dev.o[3,2] 1.0694359 1.3839576 0.0014985238 0.13732538 0.5724687 1.4510729
#> dev.o[4,2] 0.7593066 1.0848895 0.0009571201 0.08167565 0.3438982 0.9779041
#> dev.o[5,2] 0.6124559 0.8746811 0.0006546363 0.05779869 0.2685631 0.7947191
#> dev.o[6,2] 0.9994327 1.2725817 0.0009721294 0.11395641 0.5195427 1.3757372
#> dev.o[7,2] 0.8726066 1.2528443 0.0006569659 0.08410909 0.3680264 1.1370170
#> dev.o[8,2] 0.7151418 0.9968180 0.0006593573 0.07127684 0.3322177 0.9661331
#> dev.o[9,2] 0.6929576 1.0048032 0.0007904313 0.06662233 0.3019471 0.9056621
#> dev.o[10,2] 1.5600720 1.9264592 0.0016510991 0.20708719 0.8760467 2.2575471
#> dev.o[11,2] 0.9639600 1.3624433 0.0009393399 0.10249989 0.4462106 1.2743870
#> dev.o[12,2] 0.7930674 1.1122372 0.0006754685 0.07374094 0.3590571 1.0403367
#> dev.o[13,2] 0.9742233 1.4296577 0.0008964583 0.09182103 0.4176806 1.3000702
#> dev.o[14,2] 0.8399442 1.1311753 0.0005944725 0.07659741 0.3931893 1.1569887
#> dev.o[15,2] 1.0629351 1.4070454 0.0011235786 0.11839209 0.5158458 1.4589415
#> dev.o[16,2] 1.3031174 1.7680395 0.0009028689 0.12434073 0.5940880 1.7693648
#> dev.o[17,2] 2.3367117 2.0363286 0.0201587775 0.77957057 1.8405794 3.3588399
#> dev.o[18,2] 1.1189644 1.3427531 0.0011272206 0.15870105 0.6384796 1.6279943
#> dev.o[19,2] 0.4962324 0.7294454 0.0002928365 0.05172238 0.2288764 0.6394235
#> dev.o[20,2] 0.6974186 0.9656040 0.0007682187 0.06892373 0.3178203 0.9226550
#> dev.o[21,2] 0.9847043 1.2442284 0.0012025326 0.13155982 0.5207149 1.3733596
#> dev.o[9,3] 0.8989572 1.1714149 0.0009518012 0.10147169 0.4488655 1.2450611
#> dev.o[10,3] 0.6998207 0.9578957 0.0010575342 0.07535629 0.3146118 0.9161630
#> dev.o[12,3] 1.1206649 1.5034391 0.0013201022 0.12231166 0.5502682 1.5552434
#> dev.o[13,3] 1.0346507 1.4832597 0.0007788104 0.10674241 0.4751515 1.3425657
#> dev.o[19,3] 1.2199684 1.0987213 0.0135774101 0.39131910 0.9122099 1.7432208
#> dev.o[10,4] 1.1324147 1.4738738 0.0011403772 0.13093443 0.5778444 1.5690025
#> dev.o[12,4] 0.8233828 1.0772451 0.0007722888 0.09626880 0.4061329 1.1072325
#> dev.o[13,4] 1.1359306 1.5404937 0.0013679131 0.12143631 0.5543152 1.5181500
#> 97.5% Rhat n.eff
#> dev.o[1,1] 8.122587 1.000811 3000
#> dev.o[2,1] 4.432680 1.004100 560
#> dev.o[3,1] 5.155303 1.000553 3000
#> dev.o[4,1] 3.771648 1.005417 800
#> dev.o[5,1] 3.463636 1.000557 3000
#> dev.o[6,1] 4.786749 1.002755 2600
#> dev.o[7,1] 3.730670 1.005075 510
#> dev.o[8,1] 3.376928 1.002274 3000
#> dev.o[9,1] 3.943331 1.002153 1200
#> dev.o[10,1] 3.098952 1.001611 1900
#> dev.o[11,1] 4.714587 1.001027 3000
#> dev.o[12,1] 5.202680 1.002541 2400
#> dev.o[13,1] 5.578894 1.001946 1400
#> dev.o[14,1] 4.618663 1.001234 3000
#> dev.o[15,1] 5.064572 1.006883 320
#> dev.o[16,1] 5.851498 1.003055 1800
#> dev.o[17,1] 8.100762 1.000851 3000
#> dev.o[18,1] 5.395922 1.002757 1600
#> dev.o[19,1] 6.554515 1.006741 1100
#> dev.o[20,1] 3.762901 1.004555 1100
#> dev.o[21,1] 4.987512 1.001095 3000
#> dev.o[1,2] 8.124931 1.002399 1100
#> dev.o[2,2] 4.417844 1.002878 840
#> dev.o[3,2] 4.947724 1.006695 760
#> dev.o[4,2] 3.700189 1.002128 1200
#> dev.o[5,2] 3.210201 1.000837 3000
#> dev.o[6,2] 4.578936 1.000502 3000
#> dev.o[7,2] 4.361856 1.001787 1600
#> dev.o[8,2] 3.567278 1.006177 470
#> dev.o[9,2] 3.568487 1.001812 2100
#> dev.o[10,2] 6.628965 1.002274 1100
#> dev.o[11,2] 4.730719 1.001577 2300
#> dev.o[12,2] 3.827105 1.000991 3000
#> dev.o[13,2] 4.950386 1.000548 3000
#> dev.o[14,2] 4.164878 1.001549 2900
#> dev.o[15,2] 4.971398 1.004991 450
#> dev.o[16,2] 6.458039 1.001779 1600
#> dev.o[17,2] 7.549323 1.007690 3000
#> dev.o[18,2] 4.809164 1.006646 330
#> dev.o[19,2] 2.534383 1.000872 3000
#> dev.o[20,2] 3.596330 1.001976 1800
#> dev.o[21,2] 4.303797 1.001789 1600
#> dev.o[9,3] 4.248155 1.009025 240
#> dev.o[10,3] 3.498728 1.000717 3000
#> dev.o[12,3] 5.325256 1.001507 3000
#> dev.o[13,3] 5.407988 1.000907 3000
#> dev.o[19,3] 4.020421 1.002852 850
#> dev.o[10,4] 5.178270 1.001057 3000
#> dev.o[12,4] 3.828119 1.004632 540
#> dev.o[13,4] 5.514376 1.004396 520
#>
#> $hat_par
#> mean sd 2.5% 25% 50%
#> hat.par[1,1] 1.569227 0.7582175 0.3971556 0.9956319 1.483746
#> hat.par[2,1] 49.214918 4.6318594 40.2080361 46.0531174 49.302493
#> hat.par[3,1] 44.039860 4.5586434 35.3642212 40.8063050 43.822675
#> hat.par[4,1] 42.101649 4.6481508 33.1653216 38.9147669 41.974998
#> hat.par[5,1] 17.199010 2.4630157 12.4262454 15.5385570 17.132913
#> hat.par[6,1] 43.849731 4.1355486 35.7999508 41.0727761 43.779041
#> hat.par[7,1] 157.037615 7.3250085 142.8364789 152.0707807 157.095861
#> hat.par[8,1] 68.381123 5.3871372 57.8902341 64.6538148 68.397409
#> hat.par[9,1] 89.392634 5.1708455 79.3739713 85.8898893 89.378519
#> hat.par[10,1] 78.515601 3.7362193 71.0355424 75.9692464 78.607775
#> hat.par[11,1] 73.119283 5.7615321 61.9642225 69.1832292 73.027219
#> hat.par[12,1] 77.285710 4.1653201 68.7137733 74.6188665 77.334391
#> hat.par[13,1] 49.637304 4.7126545 40.7425908 46.4213905 49.553988
#> hat.par[14,1] 33.992407 4.6995119 25.2554269 30.6643146 33.756036
#> hat.par[15,1] 36.856981 4.7511394 27.6559059 33.6237018 36.805866
#> hat.par[16,1] 303.904584 13.0076948 279.1312729 294.6176151 303.792515
#> hat.par[17,1] 10.545778 2.4856932 6.2487877 8.7561146 10.355889
#> hat.par[18,1] 21.512362 3.2911746 15.5491805 19.3059103 21.360055
#> hat.par[19,1] 4.018424 1.3872414 1.7609993 3.0044707 3.863965
#> hat.par[20,1] 25.282681 3.5277359 18.3874257 22.9284851 25.172600
#> hat.par[21,1] 32.088245 4.2742511 24.3769390 29.1416319 31.848090
#> hat.par[1,2] 1.458258 0.7500639 0.3412464 0.8895718 1.339863
#> hat.par[2,2] 46.708109 4.7213612 37.6606960 43.4296639 46.713771
#> hat.par[3,2] 31.219260 3.9238734 23.6856780 28.5906597 31.191710
#> hat.par[4,2] 43.893274 5.0925631 34.4131878 40.4477382 43.633016
#> hat.par[5,2] 11.962787 2.0793399 8.1464936 10.5142656 11.832455
#> hat.par[6,2] 35.118704 3.8713810 27.9215314 32.4682541 35.033015
#> hat.par[7,2] 196.703982 9.9885830 177.5774113 190.2844347 196.625797
#> hat.par[8,2] 51.510687 5.0792071 42.0432968 48.0154545 51.434286
#> hat.par[9,2] 81.140892 5.6200235 69.9829460 77.3890093 81.099288
#> hat.par[10,2] 72.990830 3.9255573 65.1411359 70.3315425 73.114700
#> hat.par[11,2] 119.322130 8.2892628 103.4750676 113.6919638 119.368621
#> hat.par[12,2] 80.451585 4.7977663 70.9569054 77.2800177 80.445409
#> hat.par[13,2] 26.069279 4.4859301 17.6640011 23.0436333 25.883888
#> hat.par[14,2] 32.037486 4.5390885 23.3007303 28.9762599 31.835780
#> hat.par[15,2] 32.201763 4.3883520 24.3014246 29.1557523 31.970160
#> hat.par[16,2] 247.087113 12.7194475 222.2364651 238.5762026 247.028630
#> hat.par[17,2] 7.324783 1.9350027 3.9512247 5.9453248 7.179774
#> hat.par[18,2] 13.387299 2.4985008 8.6749858 11.6042003 13.347981
#> hat.par[19,2] 2.658991 1.0198108 1.1267914 1.9029012 2.536545
#> hat.par[20,2] 18.736293 2.9783134 13.2867808 16.6519191 18.610579
#> hat.par[21,2] 21.741242 3.4753794 14.8737162 19.4429205 21.670450
#> hat.par[9,3] 79.961518 5.8628009 68.1530193 76.0602411 80.104634
#> hat.par[10,3] 68.801155 4.2344402 60.5006478 65.9265403 68.870473
#> hat.par[12,3] 67.555619 4.7850638 58.1151361 64.2760986 67.633091
#> hat.par[13,3] 35.014956 5.2396509 25.2560244 31.3867656 34.807907
#> hat.par[19,3] 2.388314 0.8757448 1.0424908 1.7490433 2.255402
#> hat.par[10,4] 66.635280 4.2499539 57.9441519 63.8049961 66.644777
#> hat.par[12,4] 62.545012 4.3061344 54.4654690 59.6584376 62.421287
#> hat.par[13,4] 41.286114 4.7590275 32.1827152 38.0941396 41.220040
#> 75% 97.5% Rhat n.eff
#> hat.par[1,1] 2.037852 3.260411 1.001666 1700
#> hat.par[2,1] 52.299546 58.251566 1.007607 320
#> hat.par[3,1] 47.151779 53.287930 1.007667 380
#> hat.par[4,1] 45.121449 51.590723 1.018158 120
#> hat.par[5,1] 18.808658 22.132856 1.006068 360
#> hat.par[6,1] 46.623253 52.184982 1.004224 650
#> hat.par[7,1] 162.083751 171.152275 1.004737 700
#> hat.par[8,1] 71.996041 78.995873 1.003284 720
#> hat.par[9,1] 92.870610 99.582368 1.007559 350
#> hat.par[10,1] 81.115362 85.347145 1.002054 1300
#> hat.par[11,1] 76.899477 84.807986 1.024962 89
#> hat.par[12,1] 80.192045 85.239690 1.012986 170
#> hat.par[13,1] 52.762544 59.225049 1.001262 2600
#> hat.par[14,1] 37.057302 43.606043 1.013855 150
#> hat.par[15,1] 39.923050 46.416051 1.007540 420
#> hat.par[16,1] 312.959058 329.536426 1.002106 1200
#> hat.par[17,1] 12.144564 15.830995 1.001324 3000
#> hat.par[18,1] 23.657319 28.326699 1.001910 1700
#> hat.par[19,1] 4.839353 7.190632 1.001521 2000
#> hat.par[20,1] 27.537116 32.467120 1.000828 3000
#> hat.par[21,1] 34.833862 40.673813 1.008080 320
#> hat.par[1,2] 1.906566 3.185494 1.002433 1000
#> hat.par[2,2] 49.883258 56.238698 1.009086 250
#> hat.par[3,2] 33.750701 39.204148 1.005707 480
#> hat.par[4,2] 47.317451 54.142228 1.008459 250
#> hat.par[5,2] 13.351716 16.361752 1.007945 290
#> hat.par[6,2] 37.603870 43.135371 1.002477 1000
#> hat.par[7,2] 203.230363 216.543342 1.003424 690
#> hat.par[8,2] 54.959653 61.593978 1.001791 1600
#> hat.par[9,2] 84.823256 92.277772 1.020455 100
#> hat.par[10,2] 75.733759 80.398072 1.003124 760
#> hat.par[11,2] 124.974060 135.381909 1.006300 350
#> hat.par[12,2] 83.769829 89.588474 1.000707 3000
#> hat.par[13,2] 28.959511 35.180349 1.009907 240
#> hat.par[14,2] 35.080253 41.327520 1.007215 300
#> hat.par[15,2] 34.993202 41.365077 1.007002 370
#> hat.par[16,2] 255.352644 273.442851 1.004707 480
#> hat.par[17,2] 8.551436 11.537933 1.003324 1300
#> hat.par[18,2] 15.081678 18.422396 1.008739 240
#> hat.par[19,2] 3.241607 5.027312 1.001501 2000
#> hat.par[20,2] 20.680167 24.969858 1.001159 3000
#> hat.par[21,2] 24.023516 28.670054 1.017667 130
#> hat.par[9,3] 83.958378 91.181472 1.023633 92
#> hat.par[10,3] 71.758228 76.971699 1.000913 3000
#> hat.par[12,3] 70.834894 77.108794 1.004922 450
#> hat.par[13,3] 38.469537 45.393709 1.000607 3000
#> hat.par[19,3] 2.905699 4.383472 1.003206 740
#> hat.par[10,4] 69.506133 74.847723 1.000677 3000
#> hat.par[12,4] 65.384052 71.122321 1.002973 900
#> hat.par[13,4] 44.361295 50.989344 1.007458 290
#>
#> $leverage_o
#> [1] 0.8653020 0.7927569 0.7724190 0.6706851 0.6202568 0.6581885 0.7367482
#> [8] 0.6901257 0.6575889 0.5948410 0.8405791 0.6024611 0.8068950 0.7787674
#> [15] 0.7380944 0.8865771 0.8928257 0.7876757 0.7686947 0.6242977 0.7594946
#> [22] 0.2388784 0.7657430 0.6173093 0.7261342 0.5001127 0.6569333 0.8582310
#> [29] 0.7086570 0.6772783 0.7168270 0.8933283 0.7277189 0.9739954 0.6920177
#> [36] 0.7749683 0.9461156 0.3508033 0.5425888 0.3097707 0.5978827 0.5769333
#> [43] 0.7148173 0.6760204 0.7371787 1.0346424 0.1501427 0.6637422 0.6218316
#> [50] 0.7555956
#>
#> $sign_dev_o
#> [1] 1 1 1 -1 1 -1 -1 -1 1 1 1 -1 1 1 -1 -1 1 1 1 -1 1 -1 -1 -1 1
#> [26] -1 1 1 1 1 1 -1 -1 -1 -1 1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1
#>
#> $phi
#> mean sd 2.5% 25% 50% 75%
#> phi[1] -0.31188488 0.5672879 -1.5153805 -0.6709594 -0.257198433 0.1051096
#> phi[2] 0.03233370 0.9514805 -1.8878946 -0.6146299 0.058410555 0.7094599
#> phi[3] -0.01284126 0.9848809 -2.0103916 -0.6693043 0.007126058 0.6613259
#> phi[4] -1.04465367 0.8505398 -2.5518031 -1.6313232 -1.110311514 -0.4989336
#> phi[5] -0.30224230 0.9487218 -1.9647441 -0.9524710 -0.376211797 0.2876648
#> phi[6] 0.76525637 0.7890515 -0.9417339 0.3027762 0.802628797 1.2633528
#> phi[7] -0.36232757 0.6824686 -1.6529574 -0.8152265 -0.376463284 0.0853005
#> phi[8] -0.06595861 1.0347412 -1.9951224 -0.7950951 -0.083820701 0.6579151
#> 97.5% Rhat n.eff
#> phi[1] 0.6674523 1.144704 19
#> phi[2] 1.7773956 1.005550 400
#> phi[3] 1.8234925 1.001722 1600
#> phi[4] 0.7297468 1.028166 190
#> phi[5] 1.7462488 1.020474 110
#> phi[6] 2.2851028 1.020293 110
#> phi[7] 1.0110384 1.031469 81
#> phi[8] 1.9763803 1.044953 68
#>
#> $model_assessment
#> DIC pD dev n_data
#> 1 88.71967 34.75147 53.96819 50
#>
#> $data
#> study t1 t2 t3 t4 r1 r2 r3 r4 m1 m2 m3 m4 n1 n2 n3 n4
#> 1 Llewellyn-Jones, 1996 1 4 NA NA 3 0 NA NA 1 0 NA NA 8 8 NA NA
#> 2 Paggiaro, 1998 1 4 NA NA 51 45 NA NA 27 19 NA NA 139 142 NA NA
#> 3 Mahler, 1999 1 7 NA NA 47 28 NA NA 23 9 NA NA 143 135 NA NA
#> 4 Casaburi, 2000 1 8 NA NA 41 45 NA NA 18 12 NA NA 191 279 NA NA
#> 5 van Noord, 2000 1 7 NA NA 18 11 NA NA 8 7 NA NA 50 47 NA NA
#> 6 Rennard, 2001 1 7 NA NA 41 38 NA NA 29 22 NA NA 135 132 NA NA
#> 7 Casaburi, 2002 1 8 NA NA 156 198 NA NA 77 66 NA NA 371 550 NA NA
#> 8 Chapman, 2002 1 7 NA NA 68 52 NA NA 28 20 NA NA 207 201 NA NA
#> 9 Donohue, 2002 1 7 8 NA 92 82 77 NA 37 20 10 NA 201 213 209 NA
#> 10 Mahler, 2002 1 4 7 5 79 77 63 68 69 68 45 52 181 168 160 165
#> 11 Rossi, 2002 1 6 NA NA 75 117 NA NA 59 92 NA NA 220 425 NA NA
#> 12 Hanania, 2003 1 4 7 5 73 79 65 71 59 49 57 53 185 183 177 178
#> 13 Szafranski, 2003 1 2 6 3 53 26 38 35 90 62 64 59 205 198 201 208
#> 14 Briggs, 2005 8 7 NA NA 30 36 NA NA 29 41 NA NA 328 325 NA NA
#> 15 Campbell, 2005 1 6 NA NA 34 35 NA NA 39 30 NA NA 217 215 NA NA
#> 16 Niewoehner, 2005 1 8 NA NA 296 255 NA NA 111 75 NA NA 915 914 NA NA
#> 17 van Noord, 2005 8 6 NA NA 4 14 NA NA 1 1 NA NA 70 69 NA NA
#> 18 Barnes, 2006 1 5 NA NA 24 11 NA NA 4 8 NA NA 73 67 NA NA
#> 19 O Donnell, 2006 1 7 5 NA 6 1 2 NA 5 1 3 NA 64 59 62 NA
#> 20 Baumgartner, 2007 1 7 NA NA 24 20 NA NA 32 26 NA NA 143 144 NA NA
#> 21 Freeman, 2007 1 8 NA NA 35 19 NA NA 33 18 NA NA 195 200 NA NA
#>
#> $measure
#> [1] "OR"
#>
#> $model
#> [1] "RE"
#>
#> $assumption
#> [1] "IDE-ARM"
#>
#> $mean_misspar
#> [1] 1e-04 1e-04
#>
#> $var_misspar
#> [1] 1
#>
#> $D
#> [1] 0
#>
#> $ref
#> [1] 1
#>
#> $indic
#> [,1] [,2] [,3] [,4]
#> [1,] 1 1 NA NA
#> [2,] 1 1 NA NA
#> [3,] 1 1 NA NA
#> [4,] 1 1 NA NA
#> [5,] 1 1 NA NA
#> [6,] 1 1 NA NA
#> [7,] 1 1 NA NA
#> [8,] 1 1 NA NA
#> [9,] 1 1 1 NA
#> [10,] 1 1 1 1
#> [11,] 1 1 NA NA
#> [12,] 1 1 1 1
#> [13,] 1 1 1 1
#> [14,] 1 1 NA NA
#> [15,] 1 1 NA NA
#> [16,] 1 1 NA NA
#> [17,] 1 1 NA NA
#> [18,] 1 1 NA NA
#> [19,] 1 1 1 NA
#> [20,] 1 1 NA NA
#> [21,] 1 1 NA NA
#>
#> $jagsfit
#> Inference for Bugs model at "5", fit using jags,
#> 3 chains, each with 1000 iterations (first 0 discarded)
#> n.sims = 3000 iterations saved
#> mu.vect sd.vect 2.5% 25% 50% 75% 97.5%
#> EM[2,1] -0.952 0.479 -1.863 -1.298 -0.948 -0.584 -0.058
#> EM[3,1] -0.709 0.443 -1.614 -0.995 -0.716 -0.392 0.125
#> EM[4,1] -0.247 0.272 -0.724 -0.449 -0.242 -0.081 0.340
#> EM[5,1] -0.385 0.296 -0.908 -0.595 -0.412 -0.184 0.226
#> EM[6,1] -0.099 0.254 -0.617 -0.265 -0.089 0.082 0.359
#> EM[7,1] -0.468 0.171 -0.799 -0.591 -0.472 -0.336 -0.163
#> EM[8,1] -0.489 0.173 -0.793 -0.607 -0.507 -0.390 -0.095
#> EM[3,2] 0.243 0.508 -0.761 -0.073 0.225 0.585 1.225
#> EM[4,2] 0.705 0.512 -0.196 0.320 0.675 1.060 1.747
#> EM[5,2] 0.567 0.512 -0.362 0.184 0.577 0.915 1.562
#> EM[6,2] 0.854 0.462 0.014 0.521 0.821 1.169 1.786
#> EM[7,2] 0.484 0.440 -0.332 0.178 0.456 0.772 1.379
#> EM[8,2] 0.463 0.443 -0.388 0.158 0.445 0.765 1.331
#> EM[4,3] 0.462 0.482 -0.500 0.153 0.448 0.760 1.458
#> EM[5,3] 0.324 0.452 -0.519 -0.009 0.313 0.654 1.151
#> EM[6,3] 0.611 0.445 -0.280 0.335 0.598 0.907 1.489
#> EM[7,3] 0.242 0.438 -0.637 -0.041 0.235 0.542 1.102
#> EM[8,3] 0.221 0.439 -0.654 -0.049 0.221 0.503 1.101
#> EM[5,4] -0.138 0.384 -0.917 -0.370 -0.152 0.112 0.573
#> EM[6,4] 0.149 0.339 -0.547 -0.067 0.174 0.370 0.796
#> EM[7,4] -0.221 0.283 -0.819 -0.398 -0.197 -0.050 0.345
#> EM[8,4] -0.242 0.287 -0.867 -0.428 -0.199 -0.012 0.197
#> EM[6,5] 0.287 0.367 -0.429 0.026 0.305 0.556 0.956
#> EM[7,5] -0.082 0.301 -0.789 -0.272 -0.049 0.134 0.446
#> EM[8,5] -0.104 0.322 -0.796 -0.331 -0.081 0.157 0.407
#> EM[7,6] -0.369 0.266 -0.895 -0.539 -0.380 -0.183 0.137
#> EM[8,6] -0.390 0.257 -0.924 -0.562 -0.368 -0.223 0.095
#> EM[8,7] -0.021 0.181 -0.387 -0.148 -0.010 0.113 0.296
#> EM.pred[2,1] -0.949 0.510 -1.948 -1.310 -0.938 -0.571 -0.018
#> EM.pred[3,1] -0.710 0.478 -1.661 -1.011 -0.714 -0.375 0.203
#> EM.pred[4,1] -0.247 0.325 -0.886 -0.455 -0.237 -0.065 0.447
#> EM.pred[5,1] -0.383 0.341 -1.052 -0.613 -0.397 -0.145 0.269
#> EM.pred[6,1] -0.097 0.310 -0.733 -0.295 -0.088 0.120 0.489
#> EM.pred[7,1] -0.469 0.251 -0.998 -0.624 -0.455 -0.310 0.009
#> EM.pred[8,1] -0.494 0.248 -1.013 -0.642 -0.487 -0.337 -0.052
#> EM.pred[3,2] 0.242 0.541 -0.821 -0.100 0.226 0.600 1.295
#> EM.pred[4,2] 0.705 0.551 -0.269 0.291 0.667 1.084 1.831
#> EM.pred[5,2] 0.571 0.537 -0.385 0.170 0.577 0.938 1.584
#> EM.pred[6,2] 0.853 0.501 -0.092 0.506 0.819 1.179 1.867
#> EM.pred[7,2] 0.485 0.469 -0.418 0.167 0.455 0.788 1.444
#> EM.pred[8,2] 0.469 0.477 -0.459 0.151 0.452 0.794 1.424
#> EM.pred[4,3] 0.465 0.508 -0.539 0.128 0.465 0.785 1.511
#> EM.pred[5,3] 0.324 0.490 -0.606 -0.031 0.324 0.678 1.239
#> EM.pred[6,3] 0.613 0.473 -0.334 0.318 0.597 0.920 1.546
#> EM.pred[7,3] 0.244 0.469 -0.703 -0.057 0.248 0.558 1.178
#> EM.pred[8,3] 0.225 0.473 -0.734 -0.064 0.228 0.528 1.152
#> EM.pred[5,4] -0.146 0.423 -1.034 -0.390 -0.166 0.131 0.650
#> EM.pred[6,4] 0.150 0.383 -0.624 -0.080 0.170 0.384 0.908
#> EM.pred[7,4] -0.223 0.340 -0.966 -0.425 -0.195 -0.011 0.437
#> EM.pred[8,4] -0.245 0.337 -0.984 -0.444 -0.200 0.000 0.312
#> EM.pred[6,5] 0.284 0.409 -0.506 0.007 0.302 0.568 1.073
#> EM.pred[7,5] -0.087 0.351 -0.859 -0.309 -0.053 0.150 0.547
#> EM.pred[8,5] -0.101 0.369 -0.873 -0.352 -0.071 0.184 0.509
#> EM.pred[7,6] -0.365 0.317 -1.002 -0.553 -0.366 -0.148 0.216
#> EM.pred[8,6] -0.390 0.310 -1.046 -0.578 -0.364 -0.196 0.201
#> EM.pred[8,7] -0.017 0.253 -0.552 -0.170 -0.008 0.157 0.447
#> SUCRA[1] 0.102 0.111 0.000 0.000 0.143 0.143 0.286
#> SUCRA[2] 0.879 0.204 0.286 0.857 1.000 1.000 1.000
#> SUCRA[3] 0.741 0.268 0.000 0.571 0.857 1.000 1.000
#> SUCRA[4] 0.359 0.233 0.000 0.143 0.286 0.429 0.857
#> SUCRA[5] 0.508 0.288 0.000 0.286 0.571 0.714 1.000
#> SUCRA[6] 0.206 0.190 0.000 0.000 0.143 0.286 0.714
#> SUCRA[7] 0.597 0.182 0.286 0.429 0.571 0.714 0.857
#> SUCRA[8] 0.607 0.193 0.286 0.429 0.571 0.714 1.000
#> abs_risk[1] 0.392 0.000 0.392 0.392 0.392 0.392 0.392
#> abs_risk[2] 0.210 0.077 0.091 0.149 0.200 0.264 0.378
#> abs_risk[3] 0.249 0.080 0.114 0.192 0.239 0.303 0.422
#> abs_risk[4] 0.337 0.061 0.238 0.291 0.336 0.372 0.475
#> abs_risk[5] 0.308 0.063 0.206 0.262 0.299 0.349 0.447
#> abs_risk[6] 0.370 0.058 0.258 0.331 0.371 0.411 0.480
#> abs_risk[7] 0.289 0.035 0.225 0.263 0.287 0.315 0.354
#> abs_risk[8] 0.284 0.036 0.226 0.260 0.280 0.304 0.369
#> delta[1,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[2,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[3,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[4,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[5,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[6,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[7,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[8,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[9,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[10,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[11,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[12,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[13,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[14,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[15,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[16,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[17,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[18,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[19,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[20,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[21,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> delta[1,2] -0.284 0.315 -0.901 -0.483 -0.266 -0.100 0.375
#> delta[2,2] -0.297 0.259 -0.779 -0.484 -0.291 -0.132 0.252
#> delta[3,2] -0.542 0.212 -0.974 -0.684 -0.532 -0.374 -0.191
#> delta[4,2] -0.461 0.188 -0.817 -0.596 -0.466 -0.336 -0.088
#> delta[5,2] -0.487 0.225 -0.950 -0.632 -0.476 -0.333 -0.076
#> delta[6,2] -0.402 0.201 -0.795 -0.547 -0.392 -0.264 -0.017
#> delta[7,2] -0.454 0.174 -0.761 -0.576 -0.469 -0.349 -0.074
#> delta[8,2] -0.434 0.183 -0.778 -0.565 -0.430 -0.314 -0.074
#> delta[9,2] -0.477 0.193 -0.864 -0.612 -0.472 -0.338 -0.119
#> delta[10,2] -0.181 0.321 -0.724 -0.401 -0.202 0.003 0.543
#> delta[11,2] -0.156 0.256 -0.685 -0.319 -0.150 0.015 0.295
#> delta[12,2] -0.212 0.280 -0.711 -0.403 -0.213 -0.047 0.408
#> delta[13,2] -0.995 0.456 -1.834 -1.333 -1.005 -0.643 -0.163
#> delta[14,2] -0.075 0.216 -0.522 -0.224 -0.062 0.084 0.304
#> delta[15,2] -0.035 0.253 -0.546 -0.209 -0.033 0.146 0.430
#> delta[16,2] -0.339 0.140 -0.608 -0.434 -0.345 -0.245 -0.058
#> delta[17,2] -0.465 0.297 -1.083 -0.662 -0.428 -0.254 0.052
#> delta[18,2] -0.436 0.316 -1.050 -0.647 -0.450 -0.223 0.194
#> delta[19,2] -0.427 0.334 -1.071 -0.650 -0.432 -0.213 0.221
#> delta[20,2] -0.440 0.218 -0.885 -0.578 -0.424 -0.299 -0.024
#> delta[21,2] -0.543 0.226 -0.995 -0.683 -0.551 -0.395 -0.097
#> delta[9,3] -0.524 0.204 -0.899 -0.665 -0.534 -0.397 -0.101
#> delta[10,3] -0.386 0.311 -0.945 -0.611 -0.401 -0.163 0.218
#> delta[12,3] -0.291 0.309 -0.843 -0.508 -0.320 -0.096 0.367
#> delta[13,3] -0.753 0.415 -1.589 -1.036 -0.754 -0.443 0.022
#> delta[19,3] -0.526 0.257 -1.089 -0.677 -0.504 -0.346 -0.095
#> delta[10,4] -0.505 0.225 -0.955 -0.654 -0.497 -0.339 -0.107
#> delta[12,4] -0.389 0.217 -0.804 -0.544 -0.378 -0.248 0.048
#> delta[13,4] -0.184 0.288 -0.766 -0.375 -0.180 0.019 0.310
#> dev.o[1,1] 2.307 2.193 0.007 0.612 1.653 3.368 8.123
#> dev.o[2,1] 0.908 1.312 0.001 0.086 0.407 1.235 4.433
#> dev.o[3,1] 1.084 1.441 0.001 0.117 0.558 1.476 5.155
#> dev.o[4,1] 0.709 1.029 0.000 0.077 0.314 0.934 3.772
#> dev.o[5,1] 0.683 1.006 0.001 0.068 0.307 0.862 3.464
#> dev.o[6,1] 0.976 1.327 0.001 0.107 0.447 1.349 4.787
#> dev.o[7,1] 0.751 1.106 0.001 0.080 0.334 0.996 3.731
#> dev.o[8,1] 0.694 0.959 0.001 0.066 0.312 0.932 3.377
#> dev.o[9,1] 0.825 1.100 0.001 0.090 0.396 1.133 3.943
#> dev.o[10,1] 0.605 0.829 0.001 0.067 0.292 0.809 3.099
#> dev.o[11,1] 0.929 1.271 0.001 0.094 0.436 1.220 4.715
#> dev.o[12,1] 1.211 1.447 0.002 0.164 0.689 1.780 5.203
#> dev.o[13,1] 1.206 1.564 0.002 0.135 0.610 1.661 5.579
#> dev.o[14,1] 0.911 1.245 0.001 0.098 0.424 1.238 4.619
#> dev.o[15,1] 1.023 1.413 0.001 0.108 0.486 1.373 5.065
#> dev.o[16,1] 1.218 1.652 0.001 0.131 0.561 1.717 5.851
#> dev.o[17,1] 2.130 2.286 0.004 0.394 1.416 3.140 8.101
#> dev.o[18,1] 1.198 1.544 0.002 0.142 0.618 1.618 5.396
#> dev.o[19,1] 1.688 1.826 0.005 0.333 1.102 2.472 6.555
#> dev.o[20,1] 0.710 1.040 0.001 0.070 0.305 0.921 3.763
#> dev.o[21,1] 1.082 1.397 0.001 0.131 0.545 1.501 4.988
#> dev.o[1,2] 3.335 1.970 0.697 1.886 2.933 4.356 8.125
#> dev.o[2,2] 0.867 1.215 0.001 0.095 0.414 1.143 4.418
#> dev.o[3,2] 1.069 1.384 0.001 0.137 0.572 1.451 4.948
#> dev.o[4,2] 0.759 1.085 0.001 0.082 0.344 0.978 3.700
#> dev.o[5,2] 0.612 0.875 0.001 0.058 0.269 0.795 3.210
#> dev.o[6,2] 0.999 1.273 0.001 0.114 0.520 1.376 4.579
#> dev.o[7,2] 0.873 1.253 0.001 0.084 0.368 1.137 4.362
#> dev.o[8,2] 0.715 0.997 0.001 0.071 0.332 0.966 3.567
#> dev.o[9,2] 0.693 1.005 0.001 0.067 0.302 0.906 3.568
#> dev.o[10,2] 1.560 1.926 0.002 0.207 0.876 2.258 6.629
#> dev.o[11,2] 0.964 1.362 0.001 0.102 0.446 1.274 4.731
#> dev.o[12,2] 0.793 1.112 0.001 0.074 0.359 1.040 3.827
#> dev.o[13,2] 0.974 1.430 0.001 0.092 0.418 1.300 4.950
#> dev.o[14,2] 0.840 1.131 0.001 0.077 0.393 1.157 4.165
#> dev.o[15,2] 1.063 1.407 0.001 0.118 0.516 1.459 4.971
#> dev.o[16,2] 1.303 1.768 0.001 0.124 0.594 1.769 6.458
#> dev.o[17,2] 2.337 2.036 0.020 0.780 1.841 3.359 7.549
#> dev.o[18,2] 1.119 1.343 0.001 0.159 0.638 1.628 4.809
#> dev.o[19,2] 0.496 0.729 0.000 0.052 0.229 0.639 2.534
#> dev.o[20,2] 0.697 0.966 0.001 0.069 0.318 0.923 3.596
#> dev.o[21,2] 0.985 1.244 0.001 0.132 0.521 1.373 4.304
#> dev.o[9,3] 0.899 1.171 0.001 0.101 0.449 1.245 4.248
#> dev.o[10,3] 0.700 0.958 0.001 0.075 0.315 0.916 3.499
#> dev.o[12,3] 1.121 1.503 0.001 0.122 0.550 1.555 5.325
#> dev.o[13,3] 1.035 1.483 0.001 0.107 0.475 1.343 5.408
#> dev.o[19,3] 1.220 1.099 0.014 0.391 0.912 1.743 4.020
#> dev.o[10,4] 1.132 1.474 0.001 0.131 0.578 1.569 5.178
#> dev.o[12,4] 0.823 1.077 0.001 0.096 0.406 1.107 3.828
#> dev.o[13,4] 1.136 1.540 0.001 0.121 0.554 1.518 5.514
#> effectiveness[1,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> effectiveness[2,1] 0.601 0.490 0.000 0.000 1.000 1.000 1.000
#> effectiveness[3,1] 0.267 0.442 0.000 0.000 0.000 1.000 1.000
#> effectiveness[4,1] 0.006 0.079 0.000 0.000 0.000 0.000 0.000
#> effectiveness[5,1] 0.072 0.258 0.000 0.000 0.000 0.000 1.000
#> effectiveness[6,1] 0.001 0.036 0.000 0.000 0.000 0.000 0.000
#> effectiveness[7,1] 0.014 0.119 0.000 0.000 0.000 0.000 0.000
#> effectiveness[8,1] 0.038 0.192 0.000 0.000 0.000 0.000 1.000
#> effectiveness[1,2] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> effectiveness[2,2] 0.210 0.407 0.000 0.000 0.000 0.000 1.000
#> effectiveness[3,2] 0.343 0.475 0.000 0.000 0.000 1.000 1.000
#> effectiveness[4,2] 0.041 0.198 0.000 0.000 0.000 0.000 1.000
#> effectiveness[5,2] 0.111 0.315 0.000 0.000 0.000 0.000 1.000
#> effectiveness[6,2] 0.008 0.087 0.000 0.000 0.000 0.000 0.000
#> effectiveness[7,2] 0.136 0.343 0.000 0.000 0.000 0.000 1.000
#> effectiveness[8,2] 0.151 0.358 0.000 0.000 0.000 0.000 1.000
#> effectiveness[1,3] 0.001 0.032 0.000 0.000 0.000 0.000 0.000
#> effectiveness[2,3] 0.071 0.257 0.000 0.000 0.000 0.000 1.000
#> effectiveness[3,3] 0.098 0.298 0.000 0.000 0.000 0.000 1.000
#> effectiveness[4,3] 0.080 0.272 0.000 0.000 0.000 0.000 1.000
#> effectiveness[5,3] 0.197 0.398 0.000 0.000 0.000 0.000 1.000
#> effectiveness[6,3] 0.019 0.138 0.000 0.000 0.000 0.000 0.000
#> effectiveness[7,3] 0.272 0.445 0.000 0.000 0.000 1.000 1.000
#> effectiveness[8,3] 0.261 0.439 0.000 0.000 0.000 1.000 1.000
#> effectiveness[1,4] 0.003 0.055 0.000 0.000 0.000 0.000 0.000
#> effectiveness[2,4] 0.042 0.201 0.000 0.000 0.000 0.000 1.000
#> effectiveness[3,4] 0.103 0.304 0.000 0.000 0.000 0.000 1.000
#> effectiveness[4,4] 0.122 0.328 0.000 0.000 0.000 0.000 1.000
#> effectiveness[5,4] 0.132 0.339 0.000 0.000 0.000 0.000 1.000
#> effectiveness[6,4] 0.051 0.219 0.000 0.000 0.000 0.000 1.000
#> effectiveness[7,4] 0.302 0.459 0.000 0.000 0.000 1.000 1.000
#> effectiveness[8,4] 0.245 0.430 0.000 0.000 0.000 0.000 1.000
#> effectiveness[1,5] 0.017 0.129 0.000 0.000 0.000 0.000 0.000
#> effectiveness[2,5] 0.033 0.179 0.000 0.000 0.000 0.000 1.000
#> effectiveness[3,5] 0.075 0.263 0.000 0.000 0.000 0.000 1.000
#> effectiveness[4,5] 0.246 0.431 0.000 0.000 0.000 0.000 1.000
#> effectiveness[5,5] 0.170 0.376 0.000 0.000 0.000 0.000 1.000
#> effectiveness[6,5] 0.110 0.313 0.000 0.000 0.000 0.000 1.000
#> effectiveness[7,5] 0.159 0.366 0.000 0.000 0.000 0.000 1.000
#> effectiveness[8,5] 0.190 0.392 0.000 0.000 0.000 0.000 1.000
#> effectiveness[1,6] 0.119 0.323 0.000 0.000 0.000 0.000 1.000
#> effectiveness[2,6] 0.023 0.150 0.000 0.000 0.000 0.000 0.000
#> effectiveness[3,6] 0.054 0.227 0.000 0.000 0.000 0.000 1.000
#> effectiveness[4,6] 0.222 0.416 0.000 0.000 0.000 0.000 1.000
#> effectiveness[5,6] 0.140 0.347 0.000 0.000 0.000 0.000 1.000
#> effectiveness[6,6] 0.232 0.422 0.000 0.000 0.000 0.000 1.000
#> effectiveness[7,6] 0.100 0.300 0.000 0.000 0.000 0.000 1.000
#> effectiveness[8,6] 0.110 0.313 0.000 0.000 0.000 0.000 1.000
#> effectiveness[1,7] 0.410 0.492 0.000 0.000 0.000 1.000 1.000
#> effectiveness[2,7] 0.013 0.113 0.000 0.000 0.000 0.000 0.000
#> effectiveness[3,7] 0.026 0.159 0.000 0.000 0.000 0.000 1.000
#> effectiveness[4,7] 0.155 0.362 0.000 0.000 0.000 0.000 1.000
#> effectiveness[5,7] 0.085 0.279 0.000 0.000 0.000 0.000 1.000
#> effectiveness[6,7] 0.291 0.454 0.000 0.000 0.000 1.000 1.000
#> effectiveness[7,7] 0.014 0.119 0.000 0.000 0.000 0.000 0.000
#> effectiveness[8,7] 0.006 0.075 0.000 0.000 0.000 0.000 0.000
#> effectiveness[1,8] 0.451 0.498 0.000 0.000 0.000 1.000 1.000
#> effectiveness[2,8] 0.006 0.079 0.000 0.000 0.000 0.000 0.000
#> effectiveness[3,8] 0.034 0.180 0.000 0.000 0.000 0.000 1.000
#> effectiveness[4,8] 0.128 0.334 0.000 0.000 0.000 0.000 1.000
#> effectiveness[5,8] 0.093 0.290 0.000 0.000 0.000 0.000 1.000
#> effectiveness[6,8] 0.287 0.452 0.000 0.000 0.000 1.000 1.000
#> effectiveness[7,8] 0.002 0.041 0.000 0.000 0.000 0.000 0.000
#> effectiveness[8,8] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
#> hat.par[1,1] 1.569 0.758 0.397 0.996 1.484 2.038 3.260
#> hat.par[2,1] 49.215 4.632 40.208 46.053 49.302 52.300 58.252
#> hat.par[3,1] 44.040 4.559 35.364 40.806 43.823 47.152 53.288
#> hat.par[4,1] 42.102 4.648 33.165 38.915 41.975 45.121 51.591
#> hat.par[5,1] 17.199 2.463 12.426 15.539 17.133 18.809 22.133
#> hat.par[6,1] 43.850 4.136 35.800 41.073 43.779 46.623 52.185
#> hat.par[7,1] 157.038 7.325 142.836 152.071 157.096 162.084 171.152
#> hat.par[8,1] 68.381 5.387 57.890 64.654 68.397 71.996 78.996
#> hat.par[9,1] 89.393 5.171 79.374 85.890 89.379 92.871 99.582
#> hat.par[10,1] 78.516 3.736 71.036 75.969 78.608 81.115 85.347
#> hat.par[11,1] 73.119 5.762 61.964 69.183 73.027 76.899 84.808
#> hat.par[12,1] 77.286 4.165 68.714 74.619 77.334 80.192 85.240
#> hat.par[13,1] 49.637 4.713 40.743 46.421 49.554 52.763 59.225
#> hat.par[14,1] 33.992 4.700 25.255 30.664 33.756 37.057 43.606
#> hat.par[15,1] 36.857 4.751 27.656 33.624 36.806 39.923 46.416
#> hat.par[16,1] 303.905 13.008 279.131 294.618 303.793 312.959 329.536
#> hat.par[17,1] 10.546 2.486 6.249 8.756 10.356 12.145 15.831
#> hat.par[18,1] 21.512 3.291 15.549 19.306 21.360 23.657 28.327
#> hat.par[19,1] 4.018 1.387 1.761 3.004 3.864 4.839 7.191
#> hat.par[20,1] 25.283 3.528 18.387 22.928 25.173 27.537 32.467
#> hat.par[21,1] 32.088 4.274 24.377 29.142 31.848 34.834 40.674
#> hat.par[1,2] 1.458 0.750 0.341 0.890 1.340 1.907 3.185
#> hat.par[2,2] 46.708 4.721 37.661 43.430 46.714 49.883 56.239
#> hat.par[3,2] 31.219 3.924 23.686 28.591 31.192 33.751 39.204
#> hat.par[4,2] 43.893 5.093 34.413 40.448 43.633 47.317 54.142
#> hat.par[5,2] 11.963 2.079 8.146 10.514 11.832 13.352 16.362
#> hat.par[6,2] 35.119 3.871 27.922 32.468 35.033 37.604 43.135
#> hat.par[7,2] 196.704 9.989 177.577 190.284 196.626 203.230 216.543
#> hat.par[8,2] 51.511 5.079 42.043 48.015 51.434 54.960 61.594
#> hat.par[9,2] 81.141 5.620 69.983 77.389 81.099 84.823 92.278
#> hat.par[10,2] 72.991 3.926 65.141 70.332 73.115 75.734 80.398
#> hat.par[11,2] 119.322 8.289 103.475 113.692 119.369 124.974 135.382
#> hat.par[12,2] 80.452 4.798 70.957 77.280 80.445 83.770 89.588
#> hat.par[13,2] 26.069 4.486 17.664 23.044 25.884 28.960 35.180
#> hat.par[14,2] 32.037 4.539 23.301 28.976 31.836 35.080 41.328
#> hat.par[15,2] 32.202 4.388 24.301 29.156 31.970 34.993 41.365
#> hat.par[16,2] 247.087 12.719 222.236 238.576 247.029 255.353 273.443
#> hat.par[17,2] 7.325 1.935 3.951 5.945 7.180 8.551 11.538
#> hat.par[18,2] 13.387 2.499 8.675 11.604 13.348 15.082 18.422
#> hat.par[19,2] 2.659 1.020 1.127 1.903 2.537 3.242 5.027
#> hat.par[20,2] 18.736 2.978 13.287 16.652 18.611 20.680 24.970
#> hat.par[21,2] 21.741 3.475 14.874 19.443 21.670 24.024 28.670
#> hat.par[9,3] 79.962 5.863 68.153 76.060 80.105 83.958 91.181
#> hat.par[10,3] 68.801 4.234 60.501 65.927 68.870 71.758 76.972
#> hat.par[12,3] 67.556 4.785 58.115 64.276 67.633 70.835 77.109
#> hat.par[13,3] 35.015 5.240 25.256 31.387 34.808 38.470 45.394
#> hat.par[19,3] 2.388 0.876 1.042 1.749 2.255 2.906 4.383
#> hat.par[10,4] 66.635 4.250 57.944 63.805 66.645 69.506 74.848
#> hat.par[12,4] 62.545 4.306 54.465 59.658 62.421 65.384 71.122
#> hat.par[13,4] 41.286 4.759 32.183 38.094 41.220 44.361 50.989
#> phi[1] -0.312 0.567 -1.515 -0.671 -0.257 0.105 0.667
#> phi[2] 0.032 0.951 -1.888 -0.615 0.058 0.709 1.777
#> phi[3] -0.013 0.985 -2.010 -0.669 0.007 0.661 1.823
#> phi[4] -1.045 0.851 -2.552 -1.631 -1.110 -0.499 0.730
#> phi[5] -0.302 0.949 -1.965 -0.952 -0.376 0.288 1.746
#> phi[6] 0.765 0.789 -0.942 0.303 0.803 1.263 2.285
#> phi[7] -0.362 0.682 -1.653 -0.815 -0.376 0.085 1.011
#> phi[8] -0.066 1.035 -1.995 -0.795 -0.084 0.658 1.976
#> tau 0.152 0.092 0.018 0.085 0.142 0.200 0.391
#> totresdev.o 53.968 8.819 37.505 47.899 53.626 59.845 72.322
#> deviance 581.602 13.604 557.081 572.284 581.035 590.221 609.331
#> Rhat n.eff
#> EM[2,1] 1.043 51
#> EM[3,1] 1.034 67
#> EM[4,1] 1.100 30
#> EM[5,1] 1.008 430
#> EM[6,1] 1.109 23
#> EM[7,1] 1.041 62
#> EM[8,1] 1.192 17
#> EM[3,2] 1.004 650
#> EM[4,2] 1.005 470
#> EM[5,2] 1.029 75
#> EM[6,2] 1.030 70
#> EM[7,2] 1.024 92
#> EM[8,2] 1.042 56
#> EM[4,3] 1.012 3000
#> EM[5,3] 1.024 110
#> EM[6,3] 1.014 160
#> EM[7,3] 1.015 180
#> EM[8,3] 1.019 120
#> EM[5,4] 1.044 97
#> EM[6,4] 1.034 110
#> EM[7,4] 1.033 96
#> EM[8,4] 1.043 63
#> EM[6,5] 1.026 87
#> EM[7,5] 1.006 400
#> EM[8,5] 1.019 120
#> EM[7,6] 1.057 42
#> EM[8,6] 1.015 430
#> EM[8,7] 1.089 27
#> EM.pred[2,1] 1.035 63
#> EM.pred[3,1] 1.030 77
#> EM.pred[4,1] 1.077 39
#> EM.pred[5,1] 1.003 780
#> EM.pred[6,1] 1.066 37
#> EM.pred[7,1] 1.024 170
#> EM.pred[8,1] 1.065 35
#> EM.pred[3,2] 1.004 610
#> EM.pred[4,2] 1.005 550
#> EM.pred[5,2] 1.027 80
#> EM.pred[6,2] 1.027 79
#> EM.pred[7,2] 1.019 120
#> EM.pred[8,2] 1.036 67
#> EM.pred[4,3] 1.012 3000
#> EM.pred[5,3] 1.021 140
#> EM.pred[6,3] 1.010 220
#> EM.pred[7,3] 1.012 250
#> EM.pred[8,3] 1.015 170
#> EM.pred[5,4] 1.039 110
#> EM.pred[6,4] 1.029 150
#> EM.pred[7,4] 1.026 180
#> EM.pred[8,4] 1.032 96
#> EM.pred[6,5] 1.021 110
#> EM.pred[7,5] 1.008 510
#> EM.pred[8,5] 1.014 150
#> EM.pred[7,6] 1.040 67
#> EM.pred[8,6] 1.018 310
#> EM.pred[8,7] 1.045 52
#> SUCRA[1] 1.042 56
#> SUCRA[2] 1.033 140
#> SUCRA[3] 1.006 510
#> SUCRA[4] 1.033 130
#> SUCRA[5] 1.020 110
#> SUCRA[6] 1.048 47
#> SUCRA[7] 1.039 57
#> SUCRA[8] 1.083 35
#> abs_risk[1] 1.000 1
#> abs_risk[2] 1.045 51
#> abs_risk[3] 1.037 63
#> abs_risk[4] 1.091 31
#> abs_risk[5] 1.009 410
#> abs_risk[6] 1.111 23
#> abs_risk[7] 1.043 60
#> abs_risk[8] 1.180 17
#> delta[1,1] 1.000 1
#> delta[2,1] 1.000 1
#> delta[3,1] 1.000 1
#> delta[4,1] 1.000 1
#> delta[5,1] 1.000 1
#> delta[6,1] 1.000 1
#> delta[7,1] 1.000 1
#> delta[8,1] 1.000 1
#> delta[9,1] 1.000 1
#> delta[10,1] 1.000 1
#> delta[11,1] 1.000 1
#> delta[12,1] 1.000 1
#> delta[13,1] 1.000 1
#> delta[14,1] 1.000 1
#> delta[15,1] 1.000 1
#> delta[16,1] 1.000 1
#> delta[17,1] 1.000 1
#> delta[18,1] 1.000 1
#> delta[19,1] 1.000 1
#> delta[20,1] 1.000 1
#> delta[21,1] 1.000 1
#> delta[1,2] 1.095 29
#> delta[2,2] 1.081 37
#> delta[3,2] 1.062 48
#> delta[4,2] 1.113 25
#> delta[5,2] 1.037 69
#> delta[6,2] 1.031 92
#> delta[7,2] 1.128 25
#> delta[8,2] 1.026 100
#> delta[9,2] 1.028 97
#> delta[10,2] 1.100 32
#> delta[11,2] 1.149 18
#> delta[12,2] 1.088 39
#> delta[13,2] 1.049 46
#> delta[14,2] 1.109 23
#> delta[15,2] 1.069 36
#> delta[16,2] 1.066 46
#> delta[17,2] 1.013 950
#> delta[18,2] 1.013 240
#> delta[19,2] 1.011 250
#> delta[20,2] 1.021 180
#> delta[21,2] 1.142 20
#> delta[9,3] 1.135 22
#> delta[10,3] 1.014 320
#> delta[12,3] 1.005 500
#> delta[13,3] 1.041 56
#> delta[19,3] 1.032 100
#> delta[10,4] 1.054 55
#> delta[12,4] 1.036 76
#> delta[13,4] 1.123 21
#> dev.o[1,1] 1.001 3000
#> dev.o[2,1] 1.004 560
#> dev.o[3,1] 1.001 3000
#> dev.o[4,1] 1.005 800
#> dev.o[5,1] 1.001 3000
#> dev.o[6,1] 1.003 2600
#> dev.o[7,1] 1.005 510
#> dev.o[8,1] 1.002 3000
#> dev.o[9,1] 1.002 1200
#> dev.o[10,1] 1.002 1900
#> dev.o[11,1] 1.001 3000
#> dev.o[12,1] 1.003 2400
#> dev.o[13,1] 1.002 1400
#> dev.o[14,1] 1.001 3000
#> dev.o[15,1] 1.007 320
#> dev.o[16,1] 1.003 1800
#> dev.o[17,1] 1.001 3000
#> dev.o[18,1] 1.003 1600
#> dev.o[19,1] 1.007 1100
#> dev.o[20,1] 1.005 1100
#> dev.o[21,1] 1.001 3000
#> dev.o[1,2] 1.002 1100
#> dev.o[2,2] 1.003 840
#> dev.o[3,2] 1.007 760
#> dev.o[4,2] 1.002 1200
#> dev.o[5,2] 1.001 3000
#> dev.o[6,2] 1.001 3000
#> dev.o[7,2] 1.002 1600
#> dev.o[8,2] 1.006 470
#> dev.o[9,2] 1.002 2100
#> dev.o[10,2] 1.002 1100
#> dev.o[11,2] 1.002 2300
#> dev.o[12,2] 1.001 3000
#> dev.o[13,2] 1.001 3000
#> dev.o[14,2] 1.002 2900
#> dev.o[15,2] 1.005 450
#> dev.o[16,2] 1.002 1600
#> dev.o[17,2] 1.008 3000
#> dev.o[18,2] 1.007 330
#> dev.o[19,2] 1.001 3000
#> dev.o[20,2] 1.002 1800
#> dev.o[21,2] 1.002 1600
#> dev.o[9,3] 1.009 240
#> dev.o[10,3] 1.001 3000
#> dev.o[12,3] 1.002 3000
#> dev.o[13,3] 1.001 3000
#> dev.o[19,3] 1.003 850
#> dev.o[10,4] 1.001 3000
#> dev.o[12,4] 1.005 540
#> dev.o[13,4] 1.004 520
#> effectiveness[1,1] 1.000 1
#> effectiveness[2,1] 1.002 1200
#> effectiveness[3,1] 1.002 1200
#> effectiveness[4,1] 1.066 1100
#> effectiveness[5,1] 1.073 97
#> effectiveness[6,1] 1.105 3000
#> effectiveness[7,1] 1.119 250
#> effectiveness[8,1] 1.036 360
#> effectiveness[1,2] 1.000 1
#> effectiveness[2,2] 1.012 240
#> effectiveness[3,2] 1.009 240
#> effectiveness[4,2] 1.078 150
#> effectiveness[5,2] 1.056 90
#> effectiveness[6,2] 1.017 3000
#> effectiveness[7,2] 1.059 72
#> effectiveness[8,2] 1.018 220
#> effectiveness[1,3] 1.292 1000
#> effectiveness[2,3] 1.021 350
#> effectiveness[3,3] 1.005 1100
#> effectiveness[4,3] 1.053 120
#> effectiveness[5,3] 1.001 2100
#> effectiveness[6,3] 1.022 1200
#> effectiveness[7,3] 1.010 240
#> effectiveness[8,3] 1.007 370
#> effectiveness[1,4] 1.018 3000
#> effectiveness[2,4] 1.021 550
#> effectiveness[3,4] 1.024 220
#> effectiveness[4,4] 1.013 330
#> effectiveness[5,4] 1.010 410
#> effectiveness[6,4] 1.020 500
#> effectiveness[7,4] 1.010 220
#> effectiveness[8,4] 1.010 250
#> effectiveness[1,5] 1.119 210
#> effectiveness[2,5] 1.053 280
#> effectiveness[3,5] 1.048 150
#> effectiveness[4,5] 1.074 40
#> effectiveness[5,5] 1.003 1100
#> effectiveness[6,5] 1.020 250
#> effectiveness[7,5] 1.047 80
#> effectiveness[8,5] 1.001 3000
#> effectiveness[1,6] 1.025 190
#> effectiveness[2,6] 1.044 470
#> effectiveness[3,6] 1.004 2100
#> effectiveness[4,6] 1.001 3000
#> effectiveness[5,6] 1.016 260
#> effectiveness[6,6] 1.022 130
#> effectiveness[7,6] 1.020 270
#> effectiveness[8,6] 1.231 22
#> effectiveness[1,7] 1.019 110
#> effectiveness[2,7] 1.068 510
#> effectiveness[3,7] 1.001 3000
#> effectiveness[4,7] 1.022 170
#> effectiveness[5,7] 1.048 130
#> effectiveness[6,7] 1.004 570
#> effectiveness[7,7] 1.024 1400
#> effectiveness[8,7] 1.235 260
#> effectiveness[1,8] 1.043 52
#> effectiveness[2,8] 1.033 2300
#> effectiveness[3,8] 1.011 1300
#> effectiveness[4,8] 1.115 40
#> effectiveness[5,8] 1.024 240
#> effectiveness[6,8] 1.054 50
#> effectiveness[7,8] 1.293 600
#> effectiveness[8,8] 1.000 1
#> hat.par[1,1] 1.002 1700
#> hat.par[2,1] 1.008 320
#> hat.par[3,1] 1.008 380
#> hat.par[4,1] 1.018 120
#> hat.par[5,1] 1.006 360
#> hat.par[6,1] 1.004 650
#> hat.par[7,1] 1.005 700
#> hat.par[8,1] 1.003 720
#> hat.par[9,1] 1.008 350
#> hat.par[10,1] 1.002 1300
#> hat.par[11,1] 1.025 89
#> hat.par[12,1] 1.013 170
#> hat.par[13,1] 1.001 2600
#> hat.par[14,1] 1.014 150
#> hat.par[15,1] 1.008 420
#> hat.par[16,1] 1.002 1200
#> hat.par[17,1] 1.001 3000
#> hat.par[18,1] 1.002 1700
#> hat.par[19,1] 1.002 2000
#> hat.par[20,1] 1.001 3000
#> hat.par[21,1] 1.008 320
#> hat.par[1,2] 1.002 1000
#> hat.par[2,2] 1.009 250
#> hat.par[3,2] 1.006 480
#> hat.par[4,2] 1.008 250
#> hat.par[5,2] 1.008 290
#> hat.par[6,2] 1.002 1000
#> hat.par[7,2] 1.003 690
#> hat.par[8,2] 1.002 1600
#> hat.par[9,2] 1.020 100
#> hat.par[10,2] 1.003 760
#> hat.par[11,2] 1.006 350
#> hat.par[12,2] 1.001 3000
#> hat.par[13,2] 1.010 240
#> hat.par[14,2] 1.007 300
#> hat.par[15,2] 1.007 370
#> hat.par[16,2] 1.005 480
#> hat.par[17,2] 1.003 1300
#> hat.par[18,2] 1.009 240
#> hat.par[19,2] 1.002 2000
#> hat.par[20,2] 1.001 3000
#> hat.par[21,2] 1.018 130
#> hat.par[9,3] 1.024 92
#> hat.par[10,3] 1.001 3000
#> hat.par[12,3] 1.005 450
#> hat.par[13,3] 1.001 3000
#> hat.par[19,3] 1.003 740
#> hat.par[10,4] 1.001 3000
#> hat.par[12,4] 1.003 900
#> hat.par[13,4] 1.007 290
#> phi[1] 1.145 19
#> phi[2] 1.006 400
#> phi[3] 1.002 1600
#> phi[4] 1.028 190
#> phi[5] 1.020 110
#> phi[6] 1.020 110
#> phi[7] 1.031 81
#> phi[8] 1.045 68
#> tau 1.189 17
#> totresdev.o 1.007 440
#> deviance 1.001 2400
#>
#> For each parameter, n.eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
#>
#> DIC info (using the rule, pD = var(deviance)/2)
#> pD = 92.5 and DIC = 674.1
#> DIC is an estimate of expected predictive error (lower deviance is better).
#>
#> $n_chains
#> [1] 3
#>
#> $n_iter
#> [1] 1000
#>
#> $n_burnin
#> [1] 100
#>
#> $n_thin
#> [1] 1
#>
#> $EM_pred
#> mean sd 2.5% 25% 50%
#> EM.pred[2,1] -0.94867209 0.5098007 -1.94786467 -1.309605963 -0.937567740
#> EM.pred[3,1] -0.71009573 0.4778580 -1.66130483 -1.011319694 -0.713513513
#> EM.pred[4,1] -0.24713195 0.3250902 -0.88608156 -0.455206486 -0.237027983
#> EM.pred[5,1] -0.38268004 0.3412972 -1.05169127 -0.613217447 -0.396878636
#> EM.pred[6,1] -0.09743734 0.3100972 -0.73257081 -0.295117858 -0.088393101
#> EM.pred[7,1] -0.46934891 0.2505422 -0.99770313 -0.624093613 -0.455382140
#> EM.pred[8,1] -0.49368540 0.2484578 -1.01307468 -0.641861496 -0.487174576
#> EM.pred[3,2] 0.24170397 0.5405280 -0.82080591 -0.100180460 0.225984525
#> EM.pred[4,2] 0.70500682 0.5507852 -0.26881540 0.291023840 0.666525202
#> EM.pred[5,2] 0.57144203 0.5374811 -0.38541809 0.170152936 0.576762464
#> EM.pred[6,2] 0.85334770 0.5006808 -0.09186393 0.506248419 0.818993588
#> EM.pred[7,2] 0.48535949 0.4690012 -0.41831872 0.166556016 0.455494457
#> EM.pred[8,2] 0.46913408 0.4769225 -0.45903601 0.151477192 0.452303850
#> EM.pred[4,3] 0.46483110 0.5083757 -0.53934055 0.128320229 0.464673613
#> EM.pred[5,3] 0.32430171 0.4904345 -0.60649394 -0.031207556 0.324199852
#> EM.pred[6,3] 0.61302821 0.4730916 -0.33440557 0.318184612 0.596903633
#> EM.pred[7,3] 0.24385345 0.4694916 -0.70300735 -0.056641003 0.247621200
#> EM.pred[8,3] 0.22453776 0.4733614 -0.73374697 -0.064248661 0.227550133
#> EM.pred[5,4] -0.14581422 0.4233004 -1.03397532 -0.390016067 -0.166473970
#> EM.pred[6,4] 0.15031398 0.3831393 -0.62409279 -0.079579403 0.170332759
#> EM.pred[7,4] -0.22263105 0.3398755 -0.96551689 -0.425269339 -0.195410470
#> EM.pred[8,4] -0.24522887 0.3371478 -0.98368270 -0.443986455 -0.199766552
#> EM.pred[6,5] 0.28441748 0.4091535 -0.50603244 0.006539393 0.302306705
#> EM.pred[7,5] -0.08718590 0.3508476 -0.85860639 -0.309127595 -0.053317884
#> EM.pred[8,5] -0.10145750 0.3689252 -0.87275988 -0.352172412 -0.071037656
#> EM.pred[7,6] -0.36497136 0.3167135 -1.00224195 -0.553399861 -0.365544394
#> EM.pred[8,6] -0.39024727 0.3096278 -1.04633537 -0.577616164 -0.364235166
#> EM.pred[8,7] -0.01658677 0.2532535 -0.55234734 -0.170032393 -0.007770312
#> 75% 97.5% Rhat n.eff
#> EM.pred[2,1] -0.5709774622 -0.017903753 1.035012 63
#> EM.pred[3,1] -0.3752361598 0.202888150 1.029555 77
#> EM.pred[4,1] -0.0650677444 0.446900338 1.077129 39
#> EM.pred[5,1] -0.1447394563 0.269287627 1.003200 780
#> EM.pred[6,1] 0.1204675864 0.489177416 1.066385 37
#> EM.pred[7,1] -0.3095269586 0.008973348 1.024444 170
#> EM.pred[8,1] -0.3374683609 -0.052089411 1.065253 35
#> EM.pred[3,2] 0.6002442518 1.295207649 1.003870 610
#> EM.pred[4,2] 1.0837339401 1.831265700 1.004716 550
#> EM.pred[5,2] 0.9383670127 1.584205457 1.027365 80
#> EM.pred[6,2] 1.1791224218 1.867277106 1.026726 79
#> EM.pred[7,2] 0.7878090532 1.444023358 1.018940 120
#> EM.pred[8,2] 0.7937142329 1.423529437 1.035890 67
#> EM.pred[4,3] 0.7854805913 1.511201550 1.011887 3000
#> EM.pred[5,3] 0.6776002588 1.238866189 1.021019 140
#> EM.pred[6,3] 0.9202473528 1.545662972 1.009931 220
#> EM.pred[7,3] 0.5582726677 1.177548346 1.012020 250
#> EM.pred[8,3] 0.5275488178 1.151582051 1.014872 170
#> EM.pred[5,4] 0.1306086701 0.650359235 1.039136 110
#> EM.pred[6,4] 0.3842137796 0.907733192 1.028867 150
#> EM.pred[7,4] -0.0113931622 0.437435433 1.026344 180
#> EM.pred[8,4] 0.0003268401 0.312063248 1.031933 96
#> EM.pred[6,5] 0.5681076850 1.072877567 1.020582 110
#> EM.pred[7,5] 0.1504219860 0.546921583 1.008362 510
#> EM.pred[8,5] 0.1835608279 0.508759595 1.013743 150
#> EM.pred[7,6] -0.1483968492 0.216356308 1.039638 67
#> EM.pred[8,6] -0.1956733407 0.201219195 1.017834 310
#> EM.pred[8,7] 0.1568400272 0.446895914 1.045443 52
#>
#> $tau
#> mean sd 2.5% 25% 50% 75%
#> 0.15222726 0.09230336 0.01795665 0.08516738 0.14187969 0.20027582
#> 97.5% Rhat n.eff
#> 0.39072013 1.18941364 17.00000000
#>
#> $delta
#> mean sd 2.5% 25% 50% 75%
#> delta[1,2] -0.28442898 0.3147612 -0.9006578 -0.4827189 -0.26572758 -0.10024047
#> delta[2,2] -0.29719831 0.2590276 -0.7792951 -0.4838425 -0.29130410 -0.13198102
#> delta[3,2] -0.54231866 0.2124651 -0.9742154 -0.6837604 -0.53234171 -0.37371609
#> delta[4,2] -0.46127117 0.1881528 -0.8174278 -0.5957306 -0.46633721 -0.33600621
#> delta[5,2] -0.48688760 0.2251093 -0.9496183 -0.6324472 -0.47637361 -0.33300461
#> delta[6,2] -0.40151260 0.2011238 -0.7947560 -0.5473189 -0.39166792 -0.26394642
#> delta[7,2] -0.45433234 0.1741296 -0.7606862 -0.5759122 -0.46886478 -0.34926381
#> delta[8,2] -0.43366463 0.1831372 -0.7778141 -0.5650650 -0.43013308 -0.31385183
#> delta[9,2] -0.47695396 0.1926814 -0.8643720 -0.6124117 -0.47216679 -0.33827852
#> delta[10,2] -0.18108308 0.3214342 -0.7240878 -0.4008857 -0.20186498 0.00325928
#> delta[11,2] -0.15597387 0.2557364 -0.6848432 -0.3185243 -0.14985584 0.01521530
#> delta[12,2] -0.21190952 0.2804416 -0.7112716 -0.4026130 -0.21260895 -0.04707008
#> delta[13,2] -0.99459429 0.4557138 -1.8336371 -1.3331123 -1.00505386 -0.64333966
#> delta[14,2] -0.07519043 0.2155349 -0.5220294 -0.2237173 -0.06218929 0.08425864
#> delta[15,2] -0.03541861 0.2530545 -0.5462876 -0.2093525 -0.03303859 0.14598360
#> delta[16,2] -0.33863032 0.1400900 -0.6077196 -0.4344279 -0.34478042 -0.24547445
#> delta[17,2] -0.46523190 0.2973549 -1.0834744 -0.6616990 -0.42806211 -0.25395483
#> delta[18,2] -0.43636450 0.3158704 -1.0498128 -0.6470153 -0.45009371 -0.22309480
#> delta[19,2] -0.42714718 0.3338410 -1.0705394 -0.6498909 -0.43166884 -0.21257240
#> delta[20,2] -0.43987476 0.2180946 -0.8851688 -0.5782707 -0.42436608 -0.29909366
#> delta[21,2] -0.54322793 0.2258040 -0.9946831 -0.6829649 -0.55104960 -0.39502898
#> delta[9,3] -0.52365850 0.2037486 -0.8991296 -0.6647160 -0.53421680 -0.39651514
#> delta[10,3] -0.38649300 0.3105555 -0.9451473 -0.6112106 -0.40092596 -0.16272337
#> delta[12,3] -0.29136843 0.3092900 -0.8434295 -0.5077735 -0.31971829 -0.09571349
#> delta[13,3] -0.75259451 0.4148189 -1.5890760 -1.0356738 -0.75368573 -0.44336172
#> delta[19,3] -0.52602748 0.2571075 -1.0893974 -0.6766997 -0.50406316 -0.34597961
#> delta[10,4] -0.50519188 0.2248995 -0.9549104 -0.6538936 -0.49656934 -0.33899812
#> delta[12,4] -0.38873592 0.2169376 -0.8042751 -0.5437097 -0.37847984 -0.24823089
#> delta[13,4] -0.18433795 0.2878630 -0.7663378 -0.3747777 -0.17963213 0.01882406
#> 97.5% Rhat n.eff
#> delta[1,2] 0.37517585 1.094862 29
#> delta[2,2] 0.25194870 1.081038 37
#> delta[3,2] -0.19060741 1.062078 48
#> delta[4,2] -0.08808433 1.113219 25
#> delta[5,2] -0.07591440 1.036874 69
#> delta[6,2] -0.01675318 1.031231 92
#> delta[7,2] -0.07395292 1.127750 25
#> delta[8,2] -0.07439817 1.025622 100
#> delta[9,2] -0.11874137 1.027732 97
#> delta[10,2] 0.54323299 1.099992 32
#> delta[11,2] 0.29466198 1.149110 18
#> delta[12,2] 0.40839404 1.088044 39
#> delta[13,2] -0.16293094 1.049210 46
#> delta[14,2] 0.30391670 1.108731 23
#> delta[15,2] 0.43025169 1.069246 36
#> delta[16,2] -0.05786824 1.066389 46
#> delta[17,2] 0.05221142 1.012943 950
#> delta[18,2] 0.19405717 1.012927 240
#> delta[19,2] 0.22066427 1.011404 250
#> delta[20,2] -0.02444730 1.021018 180
#> delta[21,2] -0.09738982 1.142030 20
#> delta[9,3] -0.10067753 1.134527 22
#> delta[10,3] 0.21806639 1.014053 320
#> delta[12,3] 0.36693085 1.005285 500
#> delta[13,3] 0.02164439 1.041318 56
#> delta[19,3] -0.09454191 1.031770 100
#> delta[10,4] -0.10717617 1.053725 55
#> delta[12,4] 0.04818428 1.035849 76
#> delta[13,4] 0.30962118 1.123358 21
#>
#> $heter_prior
#> [1] 0 1 1
#>
#> $SUCRA
#> mean sd 2.5% 25% 50% 75% 97.5%
#> SUCRA[1] 0.1021429 0.1105928 0.0000000 0.0000000 0.1428571 0.1428571 0.2857143
#> SUCRA[2] 0.8787143 0.2039180 0.2857143 0.8571429 1.0000000 1.0000000 1.0000000
#> SUCRA[3] 0.7411905 0.2682536 0.0000000 0.5714286 0.8571429 1.0000000 1.0000000
#> SUCRA[4] 0.3592857 0.2334013 0.0000000 0.1428571 0.2857143 0.4285714 0.8571429
#> SUCRA[5] 0.5083810 0.2883185 0.0000000 0.2857143 0.5714286 0.7142857 1.0000000
#> SUCRA[6] 0.2059524 0.1901228 0.0000000 0.0000000 0.1428571 0.2857143 0.7142857
#> SUCRA[7] 0.5968571 0.1824732 0.2857143 0.4285714 0.5714286 0.7142857 0.8571429
#> SUCRA[8] 0.6074762 0.1931094 0.2857143 0.4285714 0.5714286 0.7142857 1.0000000
#> Rhat n.eff
#> SUCRA[1] 1.041912 56
#> SUCRA[2] 1.033178 140
#> SUCRA[3] 1.005970 510
#> SUCRA[4] 1.033328 130
#> SUCRA[5] 1.020131 110
#> SUCRA[6] 1.047580 47
#> SUCRA[7] 1.038877 57
#> SUCRA[8] 1.083103 35
#>
#> $effectiveness
#> mean sd 2.5% 25% 50% 75% 97.5% Rhat n.eff
#> effectiveness[1,1] 0.000000000 0.00000000 0 0 0 0 0 1.000000 1
#> effectiveness[2,1] 0.601000000 0.48977440 0 0 1 1 1 1.002146 1200
#> effectiveness[3,1] 0.267000000 0.44246611 0 0 0 1 1 1.002200 1200
#> effectiveness[4,1] 0.006333333 0.07934306 0 0 0 0 0 1.065603 1100
#> effectiveness[5,1] 0.071666667 0.25797818 0 0 0 0 1 1.073060 97
#> effectiveness[6,1] 0.001333333 0.03649657 0 0 0 0 0 1.105330 3000
#> effectiveness[7,1] 0.014333333 0.11888061 0 0 0 0 0 1.119156 250
#> effectiveness[8,1] 0.038333333 0.19203172 0 0 0 0 1 1.035739 360
#> effectiveness[1,2] 0.000000000 0.00000000 0 0 0 0 0 1.000000 1
#> effectiveness[2,2] 0.210000000 0.40737614 0 0 0 0 1 1.012302 240
#> effectiveness[3,2] 0.342666667 0.47468024 0 0 0 1 1 1.008825 240
#> effectiveness[4,2] 0.040666667 0.19754973 0 0 0 0 1 1.077763 150
#> effectiveness[5,2] 0.111333333 0.31459690 0 0 0 0 1 1.055591 90
#> effectiveness[6,2] 0.007666667 0.08723775 0 0 0 0 0 1.017258 3000
#> effectiveness[7,2] 0.136333333 0.34319938 0 0 0 0 1 1.059394 72
#> effectiveness[8,2] 0.151333333 0.35843323 0 0 0 0 1 1.017646 220
#> effectiveness[1,3] 0.001000000 0.03161223 0 0 0 0 0 1.292018 1000
#> effectiveness[2,3] 0.071333333 0.25742373 0 0 0 0 1 1.020892 350
#> effectiveness[3,3] 0.098333333 0.29781446 0 0 0 0 1 1.004805 1100
#> effectiveness[4,3] 0.080333333 0.27185386 0 0 0 0 1 1.052768 120
#> effectiveness[5,3] 0.197000000 0.39779863 0 0 0 0 1 1.001464 2100
#> effectiveness[6,3] 0.019333333 0.13771666 0 0 0 0 0 1.021674 1200
#> effectiveness[7,3] 0.272000000 0.44506407 0 0 0 1 1 1.010155 240
#> effectiveness[8,3] 0.260666667 0.43907154 0 0 0 1 1 1.006520 370
#> effectiveness[1,4] 0.003000000 0.05469915 0 0 0 0 0 1.017624 3000
#> effectiveness[2,4] 0.042333333 0.20138208 0 0 0 0 1 1.021222 550
#> effectiveness[3,4] 0.103000000 0.30400955 0 0 0 0 1 1.023786 220
#> effectiveness[4,4] 0.122333333 0.32772502 0 0 0 0 1 1.013387 330
#> effectiveness[5,4] 0.132333333 0.33890929 0 0 0 0 1 1.010273 410
#> effectiveness[6,4] 0.050666667 0.21935267 0 0 0 0 1 1.019845 500
#> effectiveness[7,4] 0.301666667 0.45905788 0 0 0 1 1 1.010371 220
#> effectiveness[8,4] 0.244666667 0.42996106 0 0 0 0 1 1.010401 250
#> effectiveness[1,5] 0.017000000 0.12929258 0 0 0 0 0 1.119204 210
#> effectiveness[2,5] 0.033000000 0.17866628 0 0 0 0 1 1.053107 280
#> effectiveness[3,5] 0.075000000 0.26343525 0 0 0 0 1 1.047610 150
#> effectiveness[4,5] 0.245666667 0.43055354 0 0 0 0 1 1.074376 40
#> effectiveness[5,5] 0.170000000 0.37569542 0 0 0 0 1 1.002688 1100
#> effectiveness[6,5] 0.110333333 0.31335702 0 0 0 0 1 1.019785 250
#> effectiveness[7,5] 0.159333333 0.36604766 0 0 0 0 1 1.047260 80
#> effectiveness[8,5] 0.189666667 0.39210263 0 0 0 0 1 1.000677 3000
#> effectiveness[1,6] 0.118666667 0.32344978 0 0 0 0 1 1.024648 190
#> effectiveness[2,6] 0.023000000 0.14992829 0 0 0 0 0 1.044274 470
#> effectiveness[3,6] 0.054333333 0.22671205 0 0 0 0 1 1.004044 2100
#> effectiveness[4,6] 0.222000000 0.41566043 0 0 0 0 1 1.000996 3000
#> effectiveness[5,6] 0.139666667 0.34669865 0 0 0 0 1 1.015654 260
#> effectiveness[6,6] 0.232333333 0.42239085 0 0 0 0 1 1.022227 130
#> effectiveness[7,6] 0.100333333 0.30049402 0 0 0 0 1 1.019756 270
#> effectiveness[8,6] 0.109666667 0.31252591 0 0 0 0 1 1.230990 22
#> effectiveness[1,7] 0.409666667 0.49185417 0 0 0 1 1 1.019020 110
#> effectiveness[2,7] 0.013000000 0.11329289 0 0 0 0 0 1.068461 510
#> effectiveness[3,7] 0.026000000 0.15916169 0 0 0 0 1 1.001312 3000
#> effectiveness[4,7] 0.154666667 0.36164691 0 0 0 0 1 1.021690 170
#> effectiveness[5,7] 0.085333333 0.27942366 0 0 0 0 1 1.047831 130
#> effectiveness[6,7] 0.291333333 0.45445249 0 0 0 1 1 1.004023 570
#> effectiveness[7,7] 0.014333333 0.11888061 0 0 0 0 0 1.023919 1400
#> effectiveness[8,7] 0.005666667 0.07507619 0 0 0 0 0 1.235099 260
#> effectiveness[1,8] 0.450666667 0.49764322 0 0 0 1 1 1.042920 52
#> effectiveness[2,8] 0.006333333 0.07934306 0 0 0 0 0 1.032834 2300
#> effectiveness[3,8] 0.033666667 0.18039975 0 0 0 0 1 1.011319 1300
#> effectiveness[4,8] 0.128000000 0.33414550 0 0 0 0 1 1.115202 40
#> effectiveness[5,8] 0.092666667 0.29001309 0 0 0 0 1 1.023598 240
#> effectiveness[6,8] 0.287000000 0.45243699 0 0 0 1 1 1.053776 50
#> effectiveness[7,8] 0.001666667 0.04079759 0 0 0 0 0 1.293137 600
#> effectiveness[8,8] 0.000000000 0.00000000 0 0 0 0 0 1.000000 1
#>
#> $abs_risk
#> mean sd 2.5% 25% 50% 75%
#> abs_risk[1] 0.3916667 0.00000000 0.39166667 0.3916667 0.3916667 0.3916667
#> abs_risk[2] 0.2095835 0.07689002 0.09087526 0.1494869 0.1996166 0.2642581
#> abs_risk[3] 0.2493819 0.08037032 0.11361977 0.1922458 0.2393707 0.3030699
#> abs_risk[4] 0.3371851 0.06058315 0.23781338 0.2911815 0.3356545 0.3724691
#> abs_risk[5] 0.3080557 0.06304888 0.20616677 0.2619806 0.2988700 0.3487551
#> abs_risk[6] 0.3704115 0.05809658 0.25784679 0.3305663 0.3706576 0.4113375
#> abs_risk[7] 0.2886915 0.03487829 0.22452187 0.2627359 0.2865203 0.3150720
#> abs_risk[8] 0.2843882 0.03550909 0.22560045 0.2597372 0.2795239 0.3035755
#> 97.5% Rhat n.eff
#> abs_risk[1] 0.3916667 1.000000 1
#> abs_risk[2] 0.3780009 1.044520 51
#> abs_risk[3] 0.4218336 1.036897 63
#> abs_risk[4] 0.4748589 1.091470 31
#> abs_risk[5] 0.4467485 1.008956 410
#> abs_risk[6] 0.4796027 1.111387 23
#> abs_risk[7] 0.3535068 1.043089 60
#> abs_risk[8] 0.3692168 1.179818 17
#>
#> $base_risk
#> [1] 0.3916667
#>
#> attr(,"class")
#> [1] "run_model"
# }
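As the printout notes, Rhat is the potential scale reduction factor and should be close to 1 at convergence. With the short run used here (1,000 iterations, 100 burn-in), several parameters exceed the common rule-of-thumb threshold of 1.1 (e.g. `tau` at 1.19). A minimal sketch of screening such a summary matrix, assuming `res` holds the `run_model` result printed above and that components such as `res$EM_pred` are matrices with the `mean`, `Rhat`, and `n.eff` columns shown (the helper name is hypothetical, not part of the package):

```r
# Hypothetical helper: flag parameters whose potential scale reduction
# factor (Rhat) exceeds a threshold (1.1 is a common rule of thumb).
# `summary_mat` is assumed to be a matrix like res$EM_pred above, with
# named columns "mean", "Rhat", and "n.eff".
flag_nonconverged <- function(summary_mat, threshold = 1.1) {
  summary_mat[summary_mat[, "Rhat"] > threshold,
              c("mean", "Rhat", "n.eff"),
              drop = FALSE]
}

# e.g. flag_nonconverged(res$EM_pred)
```

If many parameters are flagged, increasing `n_iter` and `n_burnin` in the `run_model` call would be the natural remedy.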