consistent PLS Bootstrapping

Questions about the implementation and application of the PLS-SEM method that are not related to the usage of the SmartPLS software.
Sebastian
PLS Junior User
Posts: 1
Joined: Wed Feb 25, 2015 10:45 am
Real name and title: Dr. Sebastian Forkmann

consistent PLS Bootstrapping

Post by Sebastian »

Dear SmartPLS Forum,

I just upgraded to SmartPLS 3 and am working my way through the new consistent PLS (PLSc), which I would like to use to test a structural model. However, when running the consistent PLS bootstrapping to obtain the t-values, I receive question marks in black squares in parts of the output. This seems to relate to distinct elements of the model, i.e., the item loadings of certain constructs and the associated structural paths. When running the same model with the "regular" bootstrapping, this issue does not appear.

Thank you in advance for your help!

All the best,

Sebastian
jmbecker
SmartPLS Developer
Posts: 1284
Joined: Tue Mar 28, 2006 11:09 am
Real name and title: Dr. Jan-Michael Becker

Re: consistent PLS Bootstrapping

Post by jmbecker »

Yes, that sometimes happens with PLSc bootstrapping.
So far, there is no established way to treat these cases. They occur when part of the correction term becomes negative and the square root is taken of this negative value.
It is somewhat similar to Heywood cases in CB-SEM, where you can get negative variances.
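Roughly speaking, the correction can be sketched as follows (a simplified rendering of the Dijkstra-Henseler disattenuation step, not the exact SmartPLS implementation): the correlation between two construct proxies is divided by the square root of the product of their reliability estimates,

$$\hat{\rho}(\xi_i,\xi_j)=\frac{\operatorname{cor}(\bar{\xi}_i,\bar{\xi}_j)}{\sqrt{\rho_A^{(i)}\,\rho_A^{(j)}}}.$$

If a bootstrap sample produces a negative reliability estimate for one of the constructs, the square root is undefined, and every quantity computed from the corrected correlation (loadings, paths) is missing for that sample; that is what the question marks indicate.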

A possible way to treat them might be to copy the "Samples" report to Excel and delete the rows that contain these strange results (a sketch of such a filter follows below).

However,
1) It is a signal that something in your model is not working well. You should investigate further why these strange results occur. One possible explanation is that one of the constructs is not best represented as a factor (reflective).
2) You should report such a procedure, and especially how many samples were bad. I have seen models where only one out of 1000 samples was bad (probably not a big problem, just bad luck) and others where up to 20% were bad (an indication of a more serious problem).
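As a rough illustration only (the file name and column layout below are assumptions, not something SmartPLS prescribes), the filtering and the reporting of bad runs could be done along these lines once the "Samples" report has been exported:

```python
import pandas as pd

# Hypothetical export of the bootstrap "Samples" report:
# one row per bootstrap run, one column per estimated parameter.
samples = pd.read_csv("bootstrap_samples.csv")

# Runs in which the PLSc correction failed appear as missing or
# non-numeric cells; coerce them to NaN and flag the affected rows.
samples = samples.apply(pd.to_numeric, errors="coerce")
bad = samples.isna().any(axis=1)

# Report how many runs were dropped (point 2 above), then keep the rest.
print(f"Dropped {bad.sum()} of {len(samples)} bootstrap runs ({bad.mean():.1%}).")
clean = samples[~bad]
```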
Dr. Jan-Michael Becker, BI Norwegian Business School, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Jan_Michael_Becker
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de
JGettinger
PLS Junior User
Posts: 7
Joined: Thu Aug 22, 2013 10:50 am
Real name and title:

Re: consistent PLS Bootstrapping

Post by JGettinger »

Dear all,

Today I encountered a very similar problem. Using the standard PLS algorithm, my model (see attachment) works well and yields meaningful insights. However, using PLSc leads to problems (indicated by black question marks within the samples). The share of problematic samples with the PLSc procedure is between 1% and 2% of all calculated samples. After several re-runs, the problem persists.

It was proposed above to export these samples to Excel and to delete the problematic ones. However, how do I then continue working with these samples in SmartPLS 3? How can I indicate to the software (when importing the cleaned Excel file) that these are already bootstrapped samples?

Thank you in advance for your kind answers,
Johannes
Attachments
Model.png (59.41 KiB)
jmbecker
SmartPLS Developer
Posts: 1284
Joined: Tue Mar 28, 2006 11:09 am
Real name and title: Dr. Jan-Michael Becker

Re: consistent PLS Bootstrapping

Post by jmbecker »

If you delete the problematic samples in Excel, you also have to calculate the t-values, p-values, and confidence intervals on your own (a sketch of one way to do this follows below). It is not possible to import bootstrap samples back into SmartPLS.
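Purely as a sketch of one common convention (original estimate divided by the bootstrap standard error, percentile confidence intervals), and assuming the cleaned sample matrix from the earlier sketch plus the full-sample estimates copied from the SmartPLS report, the recalculation could look like this:

```python
from scipy import stats

def bootstrap_inference(clean, original, alpha=0.05):
    """Recompute bootstrap inference after dropping failed PLSc runs.

    clean    -- DataFrame: one row per retained bootstrap run, one column per parameter.
    original -- Series of full-sample PLSc estimates, indexed like clean's columns.
    """
    se = clean.std(ddof=1)                           # bootstrap standard errors
    t = (original / se).abs()                        # t-values: |estimate / SE|
    p = 2 * stats.t.sf(t, df=len(clean) - 1)         # two-tailed p-values
    ci = clean.quantile([alpha / 2, 1 - alpha / 2])  # percentile confidence intervals
    return t, p, ci
```

Whether the degrees of freedom and the interval type here match the SmartPLS defaults should be checked against the documentation; the point is only that these statistics are straightforward to recompute from the retained runs.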

However, as I said before: you should not just ignore the fact that the model produces such results, but act accordingly. Try to figure out which construct is the source of the problem by specifying each construct as formative in turn (no correction will be applied). You should then investigate whether that construct really is best represented by a factor model (PLSc reflective correction) or better as a composite. Thereby, you improve your model and your findings.

If water leaks into your room, you can either dry the floor or fix the roof. Which do you prefer?
Dr. Jan-Michael Becker, BI Norwegian Business School, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Jan_Michael_Becker
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de
JGettinger
PLS Junior User
Posts: 7
Joined: Thu Aug 22, 2013 10:50 am
Real name and title:

Re: consistent PLS Bootstrapping

Post by JGettinger »

Dear Dr. Becker,

thank you for your kind answer!
Going for the roof, I was able to isolate the problem to the three "Pre-Conflict" latent variables (the ones on the far left of the model in the prior attachment). I could run PLSc successfully without these three latent variables, and also with these three latent variables specified with formative, instead of reflective, indicators. However, there are two aspects that make me wonder.
1) Even though Dijkstra & Henseler (2015, MISQ) mention that PLSc increases the R² of explained latent variables, the difference between PLS and PLSc (in my case for 2 of the 3 Post-Conflict latent variables on the far right of the model) is 15 percentage points, reaching levels as high as 87.5%. While this might seem great, this level almost seems unrealistic from a theoretical perspective.
2) The questions on which these 3 Pre-Conflict indicators are based are also used for the three "Post-Conflict" latent variables (the ones on the far right of the model in the prior attachment), with reflective indicators and without any problems. From a theoretical perspective, I would strongly opt for reflective measures for all latent variables.
According to Wold (1980), PLS is able to handle dependency of observations. As people here answered a questionnaire (including the same questions) before and after a treatment, could this dependency of observations be the source of the problem?

Thank you in advance for your kind answers,
Johannes
jmbecker
SmartPLS Developer
Posts: 1284
Joined: Tue Mar 28, 2006 11:09 am
Real name and title: Dr. Jan-Michael Becker

Re: consistent PLS Bootstrapping

Post by jmbecker »

Yes, I could imagine that the dependency between pre and post measurement could be a problem for PLSc. PLSc utilizes more information from the whole model: while PLS is a limited-information estimator (using only the partial models for the estimation of model parameters), PLSc uses information from the complete covariance matrix of indicators in the correction process. This could be a problem given the strong dependence between pre and post measurement.
It would be interesting to ask Theo Dijkstra for his opinion on the issue. PLSc is a very nascent approach, and one needs to be aware that it has not yet been tested and established in all modeling situations.

The extremely high R² also points to very large corrections. Usually constructs with few (weak) indicators are part of the problem. What are the constructs with low rho_A?
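For reference, a sketch of the reliability estimate that drives the correction (the rho_A of Dijkstra & Henseler 2015; see the original paper for the exact definition): with $S$ the empirical covariance matrix of a construct's indicators and $\hat{w}$ its estimated weight vector,

$$\rho_A=(\hat{w}'\hat{w})^{2}\;\frac{\hat{w}'\,(S-\operatorname{diag}S)\,\hat{w}}{\hat{w}'\,(\hat{w}\hat{w}'-\operatorname{diag}(\hat{w}\hat{w}'))\,\hat{w}}.$$

With only a few weak indicators, the off-diagonal covariances in $S$ are small and noisy, so rho_A can become very small or even negative in individual bootstrap runs, which then inflates or breaks the correction.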
Dr. Jan-Michael Becker, BI Norwegian Business School, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Jan_Michael_Becker
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de
JGettinger
PLS Junior User
Posts: 7
Joined: Thu Aug 22, 2013 10:50 am
Real name and title:

Re: consistent PLS Bootstrapping

Post by JGettinger »

Thank you for your reply!

The composite reliabilities of the three problematic Pre-LVs are 0.867, 0.822, and 0.893; using PLSc, the respective values are 0.695, 0.306, and 0.831.
Wow, that is a huge difference in composite reliability for the second LV. Indeed, one indicator shows a small outer loading under PLSc (0.115), but not under regular PLS (0.595). I had not detected this difference. Now it is clear that this indicator is problematic and that I need to handle it, most likely by deleting it. Can you, as an expert, draw any other implications?
Thank you very much for your helpful comments! You have really helped me to track down the differences between PLS and PLSc!

All the best,
Johannes
JGettinger
PLS Junior User
Posts: 7
Joined: Thu Aug 22, 2013 10:50 am
Real name and title:

Re: consistent PLS Bootstrapping

Post by JGettinger »

A question linked to that:
Using PLSc and connecting all LVs for the initial calculations results in higher loadings for the indicators. However, at the same time, the R² is consistently missing in the results for one LV. In the model, this is ComClar, which is explained by the three (problematic) LVs. What might be the reason for the missing R²?

Thank you in advance for any suggestions!

All the best,
Johannes
Larissa
PLS Junior User
Posts: 3
Joined: Thu May 26, 2016 12:12 am
Real name and title: Larissa Statsenko

Re: consistent PLS Bootstrapping

Post by Larissa »

Dear Johannes,

Thank you for raising this issue; I have encountered the same problem.
Could you please advise whether there has been any solution?

Thank you,
Larissa