This dissertation consists of two studies investigating model and prior specification issues in the context of Bayesian structural equation modeling (SEM). Two major advantages of Bayesian estimation for SEM are that complex models can be estimated more easily and that prior information can be included directly in the analysis. Two aspects of Bayesian estimation of SEM that are important for the applied researcher are the assessment of model specification and of prior specification.

In Study 1 of this dissertation, I examined the ability of several model fit and selection indices to detect model misspecification in two commonly used SEMs, with data that are either completely observed or contain missing values. Simulation results showed that Bayesian approximate model fit indices may not be appropriate for assessing the fit of a single model. The posterior predictive p-value was more likely to detect model misspecification, although it was sensitive to sample size and to the presence of missing values. Instead of focusing on a single model, researchers should compare multiple models and use Bayesian approximate fit indices for model selection in addition to model fit assessment. Furthermore, informative priors that diverged from the population model worsened model fit even when the model was correctly specified. Researchers should therefore examine whether the priors and the observed data disagree when using informative priors.

This so-called prior-data disagreement was the focus of Study 2 of this dissertation. In this study, I examined three indices for detecting prior-data disagreement, namely the Data Agreement Criterion (DAC), the Bayes Factor (BF), and the prior-predictive p-value, and assessed their ability to detect diverging priors across four sample sizes and 49 prior specifications for the mean intercept and linear slope parameters of a latent growth model. Simulation results showed that while the DAC was easy to implement, it cannot take into account interactions between priors placed on different parameters. Use of the BF becomes infeasible as model complexity, sample size, or the number of prior specifications examined increases. The prior-predictive p-value may offer an alternative here, although it may not be appropriate for prior specifications that are partially or fully diffuse. Furthermore, all prior-data disagreement indices tended to detect disagreement better at larger sample sizes, whereas the impact of disagreement is largest at small sample sizes. Other implications, suggestions for applied researchers, limitations, and future directions are also discussed.