MGA Significance

Sophie
PLS Junior User
Posts: 3
Joined: Tue Jul 23, 2019 11:02 am
Real name and title: Sophie Bartsch

MGA Significance

Post by Sophie »

Hi,

I have run an MGA for two age groups and am wondering how to interpret my results.
1) How is it possible that the bootstrapping results show insignificance for both groups (and the difference between them is also insignificant), but there is a significant relationship when I run the PLS algorithm on the complete data set? Only 3 cases were left out, and the groups have n = 123 and n = 124.

2) The confidence intervals overlap and the Welch-Satterthwaite test also signals insignificance, but in the bootstrapping results the assumed relationship is significant (according to the p-value) for only one of the two groups.
I guess I have to conclude that there is no significant difference, but can I still interpret the significant effect (which exists when I run the PLS with the whole data set) as stemming only from one of the groups? It somehow sounds contradictory to me.

Thanks for your help!
Sophie
jmbecker
SmartPLS Developer
Posts: 1282
Joined: Tue Mar 28, 2006 11:09 am
Real name and title: Dr. Jan-Michael Becker

Re: MGA Significance

Post by jmbecker »

1) You may simply have insufficient power in both groups because each group has only about half the sample size. Especially if the effects are quite similar in both groups, power is the most likely explanation.
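To see how much power is lost just by splitting the sample, here is a minimal simulation sketch (not SmartPLS output, and not PLS-SEM specific): it uses a simple correlation of r = 0.18 as an illustrative stand-in for a modest path coefficient and compares the full sample of n = 247 against a single group of n = 123. All numbers are assumptions chosen for illustration.

```python
# Hypothetical illustration: how splitting one sample roughly halves the power
# to detect the same modest effect (here a simple correlation of r = 0.18).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
r_true, alpha, n_sims = 0.18, 0.05, 2000  # assumed effect size and settings

def power(n):
    """Share of simulated samples in which the correlation test is significant."""
    hits = 0
    for _ in range(n_sims):
        x = rng.standard_normal(n)
        y = r_true * x + np.sqrt(1 - r_true**2) * rng.standard_normal(n)
        _, p = stats.pearsonr(x, y)
        hits += p < alpha
    return hits / n_sims

print(f"power with full sample (n=247): {power(247):.2f}")   # roughly 0.8
print(f"power with one group   (n=123): {power(123):.2f}")   # roughly 0.5
```

With these assumed numbers, the full sample detects the effect about 80% of the time, while a single group of 123 detects it only about half the time, which is exactly the pattern of "significant overall, insignificant per group".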

2) Again, you may have power problems in one of your groups, which results in insignificant effects. In addition, there is a principle in statistics that the difference between "significant" and "not significant" is not itself necessarily significant. Significance itself is a questionable concept, although it is widely used. Even though its usage suggests a dichotomous nature (something is either significant or insignificant), the underlying statistics are not dichotomous at all. You simply choose an arbitrary threshold (usually 5% for the p-value) to categorize effects. However, one effect might have a p-value of 0.049 and the other 0.051; you would categorize them differently although they are barely distinguishable.
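A small numerical sketch of that principle, with made-up path coefficients and standard errors (nothing here comes from your model): one group's estimate is just below the 5% threshold, the other's is above it, yet a simple normal-approximation test of the group difference is nowhere near significant.

```python
# Hypothetical numbers: one "significant" and one "insignificant" group effect
# whose difference is itself far from significant.
from scipy import stats

b1, se1 = 0.20, 0.10   # group 1: z = 2.0 -> p ~ 0.046 (below 0.05)
b2, se2 = 0.14, 0.10   # group 2: z = 1.4 -> p ~ 0.16  (above 0.05)

def two_sided_p(z):
    return 2 * stats.norm.sf(abs(z))

p1 = two_sided_p(b1 / se1)
p2 = two_sided_p(b2 / se2)

# Test of the group difference (normal approximation, independent groups)
diff = b1 - b2
se_diff = (se1**2 + se2**2) ** 0.5
p_diff = two_sided_p(diff / se_diff)

print(f"group 1:    p = {p1:.3f}  (significant at 5%)")
print(f"group 2:    p = {p2:.3f}  (not significant)")
print(f"difference: p = {p_diff:.3f}  (far from significant)")
```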

You may also find this interesting:
https://www.nature.com/articles/d41586- ... yfw-_ZKnTQ
Dr. Jan-Michael Becker, BI Norwegian Business School, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Jan_Michael_Becker
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de