Generalizability & Ethical Research

Questions about the implementation and application of the PLS-SEM method that are not related to the use of the SmartPLS software.
rblair1
PLS Junior User
Posts: 3
Joined: Wed Dec 20, 2017 4:19 am
Real name and title: Ryan Blair

Generalizability & Ethical Research

Post by rblair1 »

Hi,

I've conducted a study with a small sample (n = 46; the original N was 58, but removing heritage language learners and participants with missing data brought the count down to 46). From these data I built a PLS-SEM model consisting of 5 latent variables with a reflective outer model (arrows pointing from the latent variables to their indicators). The largest number of paths pointing at any one latent variable was 4, so the sample size meets the minimum under the 10-times rule, though perhaps not under stricter standards.

Briefly evaluating the model, the outer model looked relatively strong with respect to: (1) indicator loadings (none of the 14 indicators below .78); (2) indicator reliability (4 indicators below .70, but none below .60); (3) Cronbach's alpha (.78, .80, .88, and .92 for the 4 latent variables); (4) composite reliability (between .87 and .94 for the 4 latent variables); and (5) AVE (above .70 for all latent variables except one at .69). The Fornell-Larcker criterion and the HTMT ratio were also satisfactory for all latent variables.

For the inner model, Q^2 was acceptable for all latent variables except one, suggesting that this latent variable lacked predictive relevance. Some effect sizes (f^2 values) were visible but not statistically significant, which I read as a sign that the small sample lacked statistical power rather than that the effects are absent (a limitation I would note). The SRMR, as an overall goodness-of-fit measure, was 0.10, a little on the high side but arguably acceptable by some standards.
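In case the concrete numbers help, here is a rough Python sketch of the kind of calculation I have in mind for the outer-model figures and the 10-times rule; the loadings are made-up placeholders, not my actual data:

import numpy as np

# Illustrative outer-model checks for one reflective latent variable
# (hypothetical standardized outer loadings, not the study's values).
loadings = np.array([0.78, 0.82, 0.85, 0.91])

indicator_reliability = loadings ** 2               # squared loadings; >= .50 is the usual guideline
ave = indicator_reliability.mean()                  # average variance extracted; >= .50 expected
error_variance = 1.0 - indicator_reliability
composite_reliability = loadings.sum() ** 2 / (loadings.sum() ** 2 + error_variance.sum())

# 10-times rule of thumb for the minimum sample size:
max_paths_into_one_lv = 4                           # largest number of paths pointing at one latent variable
minimum_n = 10 * max_paths_into_one_lv              # = 40, which n = 46 satisfies

print(np.round(indicator_reliability, 3), round(ave, 3), round(composite_reliability, 3), minimum_n)

SmartPLS of course reports these quantities directly, so this is only meant as a cross-check of what I described above.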

I was hoping to publish the study that includes this model in a journal that looks for generalizable results. Given the above, do you think it would be reasonable to do so and note the sample size as a limitation (it would be an applied linguistics journal, so it could perhaps be framed as an exploratory study or as preliminary results)? Or would it be unethical to do so?

Thanks,
Ryan
jmbecker
SmartPLS Developer
Posts: 1284
Joined: Tue Mar 28, 2006 11:09 am
Real name and title: Dr. Jan-Michael Becker

Re: Generalizability & Ethical Research

Post by jmbecker »

It is unethical if you disguise weaknesses or manipulate data and results.
If you fully report your model, your data, and all the steps and decisions that led to the final dataset, and you note all weaknesses in terms of generalizability, it is not unethical to try to publish the study. It might be a waste of reviewers' time if you yourself believe that the study has no value. However, if you honestly think there is some value in your findings, it is up to the readers (and particularly the reviewers) to judge whether your study has utility, because all of the relevant information has been disclosed to them.
Certainly, statistical power is a problem for your model. In addition, I would be careful about interpreting sizable effects that are not significant: they may reflect a lack of power, but they may also be artifacts of sampling variance or chance correlations.
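To give a rough sense of the power issue, here is a simplified, regression-style approximation in Python for detecting a single path's f^2 with 4 predictors and n = 46. PLS-SEM assesses significance via bootstrapping, so treat this only as an orientation under Cohen's conventions, not an exact calculation for your model:

from scipy.stats import f as f_dist, ncf

n, n_predictors, alpha = 46, 4, 0.05
df1 = 1                                    # testing a single path coefficient
df2 = n - n_predictors - 1                 # residual degrees of freedom = 41

for f2 in (0.02, 0.15, 0.35):              # Cohen's small, medium, large f^2
    ncp = f2 * (df1 + df2 + 1)             # noncentrality parameter, lambda = f^2 * (u + v + 1)
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    power = ncf.sf(f_crit, df1, df2, ncp)  # probability of rejecting H0 under the alternative
    print(f"f^2 = {f2:.2f}: approximate power = {power:.2f}")

Under these simplified assumptions, small effects have very little chance of reaching significance at n = 46, and even medium-sized effects fall short of the conventional .80 power level, which is why visible but non-significant f^2 values are so hard to interpret.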
Dr. Jan-Michael Becker, BI Norwegian Business School, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Jan_Michael_Becker
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de