Sample size

Posted: Wed Mar 15, 2006 9:33 am
by dbarron
I am just learning about PLS in general and SmartPLS in particular (and thank you to its developers, by the way!). I have a question about sample size. I have noted that PLS is often recommended in the literature as an alternative to covariance structure models when sample sizes are small. I have also seen at least one example of it being used with a very small sample (I think it was 10 cases). Still, I am uneasy about this, and I wonder if there is any guidance about appropriate minimum sample sizes for given numbers of latent variables or path coefficients.

Thanks in advance.

David Barron
Jesus College

Posted: Wed Mar 15, 2006 2:43 pm
by dbarron
I found a rule of thumb that suggests a sample size of 10 times either the largest number of formative indicators of any LV, or the largest number of structural paths leading into any LV, whichever is larger.
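For what it's worth, the heuristic is simple enough to write down directly. A minimal sketch (the function name is mine, not from any PLS package):

```python
def ten_times_rule(max_formative_indicators: int, max_inbound_paths: int) -> int:
    """Chin & Newsted's '10 times' heuristic: minimum sample size is
    10 x the larger of (a) the largest number of formative indicators
    on any latent variable, and (b) the largest number of structural
    paths pointing at any latent variable."""
    return 10 * max(max_formative_indicators, max_inbound_paths)

# e.g. a model whose busiest LV has 4 formative indicators and
# whose most-predicted LV receives 7 structural paths:
print(ten_times_rule(4, 7))  # 70
```

Note this only bounds the sample size by the most complex regression in the model; it says nothing about power, which is the point Chin himself makes.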

Posted: Wed Mar 15, 2006 5:26 pm
by awill
You can find this rule of thumb in Chin and Newsted (1999), pages 326-327 (viewtopic.php?t=19).

Posted: Wed Mar 15, 2006 5:46 pm
by dbarron
Thanks very much.

Posted: Sun Apr 02, 2006 2:52 pm
by stefan0603
Hi all,

back to the "Rule of Thumb" issue:

does anyone have any experience with violating this rule? What impact does it have on the results if the number of cases is much smaller than, e.g., the number of exogenous variables loading on one endogenous variable?

In my study, I could split the exogenous variables into two groups and analyze the two sub-models separately. However, wouldn't the measurement models of the endogenous variables change in that case?

Does anyone have an idea on such aspects?

Many thanks


Posted: Tue Apr 04, 2006 5:32 am
by stefanbehrens
Hi Stefan,

first of all, Chin's rule of thumb is just that - a rule of thumb. If you want to dig deeper into sample size requirements, you have to do the following:
1) Define your structural and measurement models
2) Form hypotheses about the effect sizes of your predictors
3) Define an acceptable power level for your model (typically 80%)

You can then back out exactly the sample size required to achieve your desired power level (3), given the hypothesized effect sizes (2) and the regression equations (1), using the formulas and tables developed by Cohen (1988).
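The same calculation Cohen tabulates can be done numerically. A minimal sketch for the overall F-test of a multiple regression with k predictors, using the noncentral F distribution (scipy assumed; function names are mine, and I use the common convention that the noncentrality parameter is f² times f²(u + v + 1)):

```python
from scipy import stats

def regression_power(n: int, k: int, f2: float, alpha: float = 0.05) -> float:
    """Power of the overall F-test in a multiple regression with
    k predictors, n cases, and Cohen's effect size f^2."""
    u = k           # numerator degrees of freedom
    v = n - k - 1   # denominator degrees of freedom
    if v < 1:
        return 0.0
    lam = f2 * (u + v + 1)                 # noncentrality parameter (Cohen's L)
    f_crit = stats.f.ppf(1 - alpha, u, v)  # critical value under H0
    # power = P(F > f_crit) under the noncentral F alternative
    return 1.0 - stats.ncf.cdf(f_crit, u, v, lam)

def required_n(k: int, f2: float, power: float = 0.80, alpha: float = 0.05) -> int:
    """Smallest n whose power reaches the target level."""
    n = k + 2  # smallest n with at least 1 denominator df
    while regression_power(n, k, f2, alpha) < power:
        n += 1
    return n

# e.g. 5 predictors, a 'medium' effect (f^2 = 0.15), 80% power:
print(required_n(5, 0.15))
```

For 5 predictors and a medium effect this lands around 90 cases, which is close to Chin's heuristic only by coincidence; with small effect sizes the power-based requirement is far larger than 10 times anything.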

If your study doesn't reach the necessary sample size, your statistical power goes down. If power is too low (<50%), you cannot validly infer a nonexistent effect from a nonsignificant path. In that case you can only interpret the significant paths in your model, but no longer the nonsignificant ones.

By the way, splitting your model isn't really the answer either if it means excluding important predictors from either of the resulting sub-models. You will get biased estimates for your path coefficients and will be prone to reporting artifacts. This is particularly critical if there are substantial correlations between your predictors.

Hope this helps.

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.