MGA Result Discrepancies

Questions about the implementation and application of the PLS-SEM method that are not related to the usage of the SmartPLS software.
Ronja
PLS Junior User
Posts: 6
Joined: Tue Aug 04, 2015 7:51 pm
Real name and title: Jennifer Rothman

MGA Result Discrepancies

Post by Ronja »

Hello everyone,

I would really appreciate it if someone could share their wisdom with me!

I am running a multi-group analysis and keep getting different results. More importantly, I am unable to consistently get significant results. I read on the blog that anything over 0.95 or under 0.05 is significant, but does this apply to both t and p values? I was under the impression that ONLY p values under 0.05 or 0.10 were significant, but noticed that some of the p values I considered insignificant were actually over 0.95. I even copied the same file using the duplicate file option (when you right-click on a data file) in my path model and got the same results as far as the PLS algorithm goes (i.e. path coefficients and loadings), but not when I ran the MGA. I am using all default settings with 500 iterations, but I also tried it with 5000.

I attached several versions of the same file, which give different results. The file I get the best results with is the master_test run V4... There are two groups, which represent resistance results prior to and after implementation.

P.S. I am using PLS 3.2.1 professional edition.
I really appreciate any feedback. I am writing my master thesis and am a bit lost!

Jennifer
Attachments
trial4.zip
(28.86 KiB) Downloaded 407 times
janschreier
PLS Expert User
Posts: 116
Joined: Fri Sep 12, 2014 2:12 pm
Real name and title: Jan Schreier

Re: MGA Result Discrepancies

Post by janschreier »

Hi Jennifer,

If I understand your question correctly, you should read up on t-tests. Maybe these two links help:
http://matheguru.com/stochastik/t-test.html
https://de.wikipedia.org/wiki/Zweistichproben-t-Test
(from your name I guessed German is okay ;)

As for your question: p-values below 0.05 and above 0.95 are considered significant differences. The reason why the MGA does not produce the same results over and over again is the bootstrapping procedure it uses: https://de.wikipedia.org/wiki/Bootstrap ... atistik%29
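The two-tailed reading of that rule follows from how the PLS-MGA p-value is built. Here is a minimal sketch of the idea (not SmartPLS's actual implementation, and the numbers are made up): the p-value estimates the probability that group A's path coefficient exceeds group B's across bootstrap resamples, so values near 0 and values near 1 both signal a difference.

```python
import random

random.seed(42)  # bootstrapping is random resampling, so results vary run to run

# Hypothetical bootstrap estimates of one path coefficient for two groups.
# In SmartPLS these would come from re-estimating the model on each resample.
boot_a = [random.gauss(0.40, 0.10) for _ in range(500)]
boot_b = [random.gauss(0.05, 0.12) for _ in range(500)]

# Simplified Henseler-style PLS-MGA: estimate P(coef_A > coef_B)
# by comparing every bootstrap draw of A with every draw of B.
greater = sum(a > b for a in boot_a for b in boot_b)
p_mga = greater / (len(boot_a) * len(boot_b))

# A difference is flagged when this probability is extreme on either side.
significant = p_mga > 0.95 or p_mga < 0.05
print(round(p_mga, 3), significant)
```

Because the statistic is the probability of one *direction* of the difference, extreme values on either side count, which is why both p < 0.05 and p > 0.95 matter here.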

I hope these links help as a starter, but I guess someone else may add to my suggestions.

BR, Jan
Ronja
PLS Junior User
Posts: 6
Joined: Tue Aug 04, 2015 7:51 pm
Real name and title: Jennifer Rothman

Re: MGA Result Discrepancies

Post by Ronja »

Hi Jan,

Thank you so much for your prompt reply! I understand German, but I'm American. :)

So why would the p-values occasionally be under .05 or .10 (at the 5% or 10% significance level) and then above .95 or .90 when using the same results? I thought I was getting significant and non-significant results, but in actuality they were always significant, just on different sides of the distribution.

Also, this might be confusing, but I noticed that my original attempt (with file A) was always under .05/.10, so on the lower tail, and when I copied it (an exact copy of file A) and ran it again, the p values were always over .95/.90, so the upper tail. If the answer to my above question is randomness, then why would there be this "pattern"?

Thanks again.
Jennifer
janschreier
PLS Expert User
Posts: 116
Joined: Fri Sep 12, 2014 2:12 pm
Real name and title: Jan Schreier

Re: MGA Result Discrepancies

Post by janschreier »

Oops, sorry for that, but your answer clarified some bits for me. Did you maybe switch the order of the groups between your tests? E.g., in one run you had the 76 cases as group A and the 37 cases as group B, and vice versa in the second test? This would create the pattern you mentioned, with p-values once being above 0.95 and then again below 0.05 for the same relation.
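This point can be illustrated with a toy calculation (hypothetical numbers, not Jennifer's data): if the PLS-MGA p-value is the estimated probability that the first group's coefficient exceeds the second's, then swapping the group order turns p into 1 − p (ties aside), so the same difference shows up once near 0 and once near 1.

```python
# Hypothetical bootstrap draws of one path coefficient for two groups.
boot_a = [0.35, 0.42, 0.38, 0.45, 0.40]
boot_b = [0.05, 0.41, 0.01, 0.09, 0.03]

def p_mga(first, second):
    """Fraction of bootstrap pairs where `first` exceeds `second`."""
    pairs = [(x, y) for x in first for y in second]
    return sum(x > y for x, y in pairs) / len(pairs)

p_ab = p_mga(boot_a, boot_b)   # groups in original order
p_ba = p_mga(boot_b, boot_a)   # groups swapped

print(p_ab, p_ba)  # the two p-values sum to 1 (up to ties)
```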
janschreier
PLS Expert User
Posts: 116
Joined: Fri Sep 12, 2014 2:12 pm
Real name and title: Jan Schreier

Re: MGA Result Discrepancies

Post by janschreier »

Hi Jennifer,

I had another look into your data because I wanted to understand the reasoning behind that rule (significant if p < 0.05 or p > 0.95). It is explained well in this video:
https://youtu.be/l0TfK4z12D8?t=161
The slide shown at that point in the video is all one needs to read to understand the logic behind the test.

While searching for this answer, I found another thing in your data that I find strange. Look into the PLS bootstrapping results for "Organizational Support -> Resistance". For the group "1AB and 2B" the original path coefficient is 0.379, but the mean path coefficient from all bootstrapping runs is -0.128, so there is a huge standard deviation.

What I now don't get is why the PLS-MGA shows a p-value of 0.028, which would lead to the conclusion that the difference is significant, while the confidence intervals of both groups overlap, which would lead to the conclusion that there is no significant difference. But then, for "1AB and 2B" the AVE for Organizational Support is below 0.5 (as it is for some other values as well). Before you conduct an MGA, you have to verify that both groups individually comply with the general model requirements. This is not the case here, which might be another reason you get these weird results.

HTH, Jan
jmbecker
SmartPLS Developer
Posts: 1282
Joined: Tue Mar 28, 2006 11:09 am
Real name and title: Dr. Jan-Michael Becker

Re: MGA Result Discrepancies

Post by jmbecker »

1)
You should also be careful: the above-0.95/below-0.05 rule only applies to the "PLS-MGA" results, not to all results that you get from a multi-group analysis (MGA).
For all other results that are provided by the MGA procedure (e.g., Bootstrapping Results, Parametric Test, Welch-Satterthwaite Test), the normal p < 0.05 rule applies.

2)
In addition, your sample sizes are quite small. I would not engage in multi-group comparisons with such small sample sizes.

3)
Probably related to point 2 is the issue that Jan already mentioned: your data for group 1 (which is very small) seems to be inappropriate for a separate analysis.
- You have large biases in the bootstrap distributions (that generally do not look very good; see histograms in the bootstrap procedure).
- Some of your constructs don’t meet the measurement model quality requirements.
- etc.
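For point 1, the parametric and Welch-Satterthwaite comparisons are essentially two-sample t-type tests on the difference of path coefficients, built from each group's bootstrap standard error. A rough sketch with made-up numbers (SmartPLS's exact formulas may differ slightly):

```python
import math

# Hypothetical group results: path coefficient, bootstrap SE, sample size.
b1, se1, n1 = 0.379, 0.15, 37
b2, se2, n2 = 0.050, 0.10, 76

# Welch-style t statistic for the difference of two coefficients.
t = (b1 - b2) / math.sqrt(se1**2 + se2**2)

# Welch-Satterthwaite approximation of the degrees of freedom.
df = (se1**2 + se2**2) ** 2 / (
    se1**4 / (n1 - 1) + se2**4 / (n2 - 1)
)

print(round(t, 3), round(df, 1))
# Compare |t| against the critical value of a t distribution with df
# degrees of freedom; here the ordinary p < 0.05 rule applies.
```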
Dr. Jan-Michael Becker, BI Norwegian Business School, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Jan_Michael_Becker
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de
Ronja
PLS Junior User
Posts: 6
Joined: Tue Aug 04, 2015 7:51 pm
Real name and title: Jennifer Rothman

Re: MGA Result Discrepancies

Post by Ronja »

Hi Jan, Hi Jim,

Wow you guys are really a great help, THANK YOU!!

Okay, so now I understand that because I randomly selected which group was A and which was B, I occasionally got p-values above .95/.90 or below .05/.10.
I understand that this rule only applies for the PLS-MGA.

Just to make sure I understand the other tabs:

1. The Bootstrapping Results tab from the MGA only provides information on individual group significance and not group difference significance, i.e. it gives you the coefficients, means, t-values and p-values. Is the STERR the standard deviation?

2. Three of the tests, PLS-MGA, Parametric and Welch-Satterthwaite, provide group difference significance results. From what I understand, the Welch-Satterthwaite is the most conservative.

3. The confidence intervals only tell you the values within which 95% of the data falls.

I know that my groups are small, especially the one group with 37, but I was unable to get more survey results. I have spoken to my thesis adviser, and he said to go ahead with the analysis because at this point I have no alternative. The model as a whole (both groups) does meet the measurement model quality requirements, but the individual groups do not; this I will of course have to mention in my analysis.

Jim, where are you seeing the histograms in the bootstrap? I know when I run the PLS algorithm, the AVE and composite reliability have histograms, but not when you run the MGA. What is the difference between, e.g., the AVE results from the PLS algorithm and the MGA? Doesn't the MGA provide all the same information as the PLS algorithm and bootstrap, with the additional group differences?

Jan, where are you seeing the mean path coefficient from all the bootstrapping runs (-0.128)? I see on the bootstrapping tab that they give means for each group, but not for both groups overall. You also said:

"What I now don't get is why the PLS-MGA shows a p-value of 0.028 which would lead to the conclusion that the difference is significant while the confidence intervals of both groups overlap and thus would not lead to the conclusion that there is no sig. difference."

Can you maybe explain where the confidence intervals overlap and what this means? From what I can see, the CIs overlap from -.744 to -.333.

I know I have asked a lot of questions here, but you guys are such a huge help. Thanks again!!!

By the way, I live in Switzerland, just in case we are in different time zones!

Best Jennifer
janschreier
PLS Expert User
Posts: 116
Joined: Fri Sep 12, 2014 2:12 pm
Real name and title: Jan Schreier

Re: MGA Result Discrepancies

Post by janschreier »

Hi Jennifer,

Sorry, I missed the file upload. Attached you find my multi-group analysis results for your model. There you find the average of -0.128. As is the nature of bootstrapping, the value from any other run will be slightly different. The huge difference between the two yellow cells in row 5 will most likely still occur with your data.

Towards your questions:
1: Yes, no group differences here, but as I said in my earlier post, there is a huge difference between the original result from your model and the mean of all bootstrapping runs. STERR should be the standard deviation, but Jan-Michael knows this for sure.
2: You can also check the confidence intervals of both groups. If they don't overlap, there is a significant difference.
3: You can set the alpha level in the parameters before running the MGA, but apart from that you are right.
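The confidence-interval check can be sketched as follows: build percentile bootstrap confidence intervals for each group's path coefficient and see whether they overlap. (The draws are hypothetical, and the 95% level plus the simple percentile method are assumptions here, not necessarily what SmartPLS uses by default.)

```python
import random

random.seed(1)

# Hypothetical bootstrap draws of one path coefficient per group.
boot_a = sorted(random.gauss(0.40, 0.08) for _ in range(1000))
boot_b = sorted(random.gauss(0.00, 0.08) for _ in range(1000))

def percentile_ci(draws, alpha=0.05):
    """Percentile bootstrap confidence interval from sorted draws."""
    lo = draws[int(len(draws) * alpha / 2)]
    hi = draws[int(len(draws) * (1 - alpha / 2)) - 1]
    return lo, hi

ci_a = percentile_ci(boot_a)
ci_b = percentile_ci(boot_b)

# If the intervals do not overlap, the group difference is significant.
overlap = ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]
print(ci_a, ci_b, overlap)
```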

BR, Jan

PS: AFAIK there is no timezone gap between Munich and any part of Switzerland but then Bavaria and Switzerland are somewhat special ;)
Attachments
Mappe8.zip
(7.06 KiB) Downloaded 345 times
Ronja
PLS Junior User
Posts: 6
Joined: Tue Aug 04, 2015 7:51 pm
Real name and title: Jennifer Rothman

Re: MGA Result Discrepancies

Post by Ronja »

Hi Jan,

Good to know we are on the same time zone ;)

So how would you interpret the fact that I get significant results (0.028) for Organizational Support -> Resistance, but the path coefficient is so far away from the mean and the CIs overlap? Of course I want significant results, since my whole thesis is about the difference between primary and secondary adoption. Could I just report that it is significant, but note that there were some problems, i.e. the CI overlap and the distance from the mean?

Just to be sure: the original coefficient .379 is the estimated parameter from running the PLS algorithm, correct? And the -.128 is the average of all the (500) bootstrapping results? When exporting results from SmartPLS, would you recommend exporting the PLS algorithm and MGA results but not the bootstrapping results, because bootstrapping for individual groups and group differences is already included in the MGA results? Sorry, so much info; I am trying to decide what is and is not important :)

Kindly,
Jennifer
janschreier
PLS Expert User
Posts: 116
Joined: Fri Sep 12, 2014 2:12 pm
Real name and title: Jan Schreier

Re: MGA Result Discrepancies

Post by janschreier »

Hi Jennifer,

You are right about the .379 and -.128. When you talk about "exporting", I guess you mean "how do I report my results?" There is a chapter "How to Write Up and Report PLS Analyses" in: Vinzi, V.E., Chin, W.W., Henseler, J., Wang, H., 2010. Handbook of Partial Least Squares: Concepts, Methods and Applications in Marketing and Related Fields. Springer, Berlin/New York.

Or look for recent publications by J.-M. Becker, C. Ringle and J. Henseler. Follow them on researchgate.net to see what's new and how they publish their study results.

As Jan-Michael wrote, the problem behind this big gap is the small number of cases and the fact that these cases seem to be not very homogeneous. So I would also say you do not really get significant results (0.028), because your data violates requirements that must be met in order to test for significance.
I don't know a workaround other than trying to get more data or being very cautious about the findings. Stating that one did not manage to compare groups is also a finding (even if it's not very satisfactory).

br, Jan
jmbecker
SmartPLS Developer
Posts: 1282
Joined: Tue Mar 28, 2006 11:09 am
Real name and title: Dr. Jan-Michael Becker

Re: MGA Result Discrepancies

Post by jmbecker »

When you run the normal bootstrapping procedure, you get some additional results like the histograms of your bootstrap distributions (yes, the MGA is based on the bootstrapping procedure, but it gives you a different set of results).
There you can see that the distributions for group 1 are all very odd (skewed, non-normal, etc.). Skewed bootstrap distributions are also the reason for the large bias (i.e., the difference between the original estimate and the bootstrap mean). Hint: use 5000 resamples to get nice histograms.
As said before, because the results for your group 1 are not valid, you should not compare them in an MGA. You might not be happy to hear that, but everything else would just give you wrong results and hence misleading conclusions.

Either you have to collect more data for group 1 or stick with an analysis on the aggregate sample.
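Bias and skewness can be checked numerically as well as by eyeballing the histograms. Below is a sketch with simulated draws that mimic the pattern discussed in this thread (an original estimate of 0.379 but a strongly skewed bootstrap distribution); the data are made up and the red-flag thresholds are illustrative, not official cut-offs:

```python
import math
import random

random.seed(7)

# Hypothetical bootstrap draws: original estimate 0.379, but the
# resamples fall almost entirely below it (a skewed distribution).
original = 0.379
draws = [0.379 - abs(random.gauss(0, 0.6)) for _ in range(5000)]

mean = sum(draws) / len(draws)
sd = math.sqrt(sum((d - mean) ** 2 for d in draws) / (len(draws) - 1))

# Bias: gap between the original estimate and the bootstrap mean.
bias = mean - original

# Skewness: standardized third moment; near 0 for a symmetric distribution.
skew = sum(((d - mean) / sd) ** 3 for d in draws) / len(draws)

# Rough red flags: large bias relative to the bootstrap SD,
# or clearly non-symmetric draws.
flag = abs(bias) > 0.25 * sd or abs(skew) > 1.0
print(round(bias, 3), round(skew, 2), flag)
```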
Dr. Jan-Michael Becker, BI Norwegian Business School, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Jan_Michael_Becker
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de
Ronja
PLS Junior User
Posts: 6
Joined: Tue Aug 04, 2015 7:51 pm
Real name and title: Jennifer Rothman

Re: MGA Result Discrepancies

Post by Ronja »

I have to apologize, Jan-Michael: somehow my eyes saw jmbecker and my brain understood your name to be Jim.

So when running the normal bootstrapping procedure, I see histograms with links underneath for Path Coefficients, Indirect Effects and Total Effects. Am I to understand that, in order to have valid groups, the distributions in all the aforementioned histograms have to be normal? How can you tell this? Purely by looking through those histograms? I thought that PLS-SEM made no assumptions about data distributions.

I didn't notice that the before-GO-LIVE group was more skewed than the complete or after-GO-LIVE groups when looking at the path coefficient histograms. Is this what I should be looking at? Are the other two groups normal "enough"?

Also, what is the difference between, e.g., looking at the AVE from the PLS algorithm and from the bootstrap? Is it that the PLS algorithm runs once and gives one result, vs. the bootstrap running 500 times and then giving you 500 AVE results for each construct, as well as the p-value of each?

There is almost no possibility of me getting more before GO-LIVE (group 1) survey results so I will need to come up with another solution.

Since I figured out my original p-value problem with the different files, I uploaded another file which is better labeled.

Thanks again to both of you. I will try to get the literature you recommended as well!

Best,
Jennifer
Attachments
Structual Model.zip
(5.11 KiB) Downloaded 341 times
Ronja
PLS Junior User
Posts: 6
Joined: Tue Aug 04, 2015 7:51 pm
Real name and title: Jennifer Rothman

Re: MGA Result Discrepancies

Post by Ronja »

Hi again,

So the Switching Costs -> Perceived Value path is significant (p = 0.087 at the 10% level), and the original path coefficients (-0.256 and 0.089) are closer to the mean path coefficients (-0.25 and -0.166). The CIs also don't overlap. Could this be interpreted as significant, even though the significance of Organizational Support -> Resistance is doubtful?

Thanks,
Jennifer