Dear experts,
When PLS-SEM calculates the confidence intervals for indirect effects, it offers them both with and without bias correction.
What surprises me is that the bias-corrected confidence interval bounds do not equal the ordinary confidence interval bounds plus the bias, as I would expect.
I have included a copy of the confidence intervals from the example model for corporate reputation (but I notice the same in my own data set).
For example, for the bias-corrected confidence interval of ATTR -> CUSA I would expect [0.025 + bias of -0.002; 0.144 + bias of -0.002] = [0.023; 0.142], yet this is not the outcome. (Also, positive biases seem to lead to lower bias-corrected bounds, while negative biases lead to higher ones.)
Am I missing something?
Confidence Intervals
Path           Original Sample (O)   Sample Mean (M)   2.5%    97.5%
ATTR -> CUSA   0.085                 0.084             0.025   0.144
ATTR -> CUSL   0.101                 0.099             0.027   0.172
COMP -> CUSL   0.074                 0.075             0.007   0.146
CSOR -> CUSA   0.086                 0.087             0.034   0.142
CSOR -> CUSL   0.105                 0.106             0.041   0.174
LIKE -> CUSL   0.220                 0.219             0.152   0.289
PERF -> CUSA   0.094                 0.096             0.018   0.170
PERF -> CUSL   0.089                 0.092             0.003   0.177
QUAL -> CUSA   0.228                 0.231             0.161   0.304
QUAL -> CUSL   0.248                 0.253             0.170   0.339
Confidence Intervals Bias Corrected
Path           Original Sample (O)   Sample Mean (M)   Bias     2.5%     97.5%
ATTR -> CUSA   0.085                 0.084             -0.002   0.030    0.148
ATTR -> CUSL   0.101                 0.099             -0.002   0.033    0.178
COMP -> CUSL   0.074                 0.075             0.001    0.007    0.145
CSOR -> CUSA   0.086                 0.087             0.001    0.034    0.142
CSOR -> CUSL   0.105                 0.106             0.001    0.040    0.173
LIKE -> CUSL   0.220                 0.219             -0.001   0.156    0.293
PERF -> CUSA   0.094                 0.096             0.002    0.014    0.166
PERF -> CUSL   0.089                 0.092             0.002    -0.002   0.173
QUAL -> CUSA   0.228                 0.231             0.003    0.156    0.300
QUAL -> CUSL   0.248                 0.253             0.005    0.163    0.331
Bias correcting confidence intervals
- SmartPLS Developer
- Posts: 1284
- Joined: Tue Mar 28, 2006 11:09 am
- Real name and title: Dr. Jan-Michael Becker
Re: Bias correcting confidence intervals
It depends on which confidence interval method you use.
The percentile approach and the BCa approach do not simply add (or subtract) the bias. Instead, they count the number of bootstrap subsamples that are smaller than the original estimate and transform this proportion, via the inverse normal distribution, into a correction of the quantile levels at which the interval bounds are read off the bootstrap distribution. In addition, the BCa method accounts for skewness in the distribution via the acceleration parameter.
Only the studentized method simply adds (subtracts) the bias to the interval bounds.
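In case it helps, the bias-corrected (BC) percentile logic described above can be sketched in a few lines. This is a hypothetical `bc_interval` helper, not SmartPLS's actual code, and the BCa acceleration term is omitted:

```python
import numpy as np
from statistics import NormalDist

def bc_interval(boot, original, alpha=0.05):
    """Bias-corrected (BC) percentile bootstrap interval (no acceleration).

    The share of bootstrap estimates below the original estimate is
    mapped through the inverse normal CDF to a correction factor z0,
    which shifts the quantile levels before the interval bounds are
    read off the bootstrap distribution.
    """
    boot = np.asarray(boot)
    nd = NormalDist()
    p = float(np.mean(boot < original))   # share of subsamples below O
    z0 = nd.inv_cdf(p)                    # bias-correction factor
    z_lo = nd.inv_cdf(alpha / 2)
    z_hi = nd.inv_cdf(1 - alpha / 2)
    a_lo = nd.cdf(2 * z0 + z_lo)          # corrected lower quantile level
    a_hi = nd.cdf(2 * z0 + z_hi)          # corrected upper quantile level
    return float(np.quantile(boot, a_lo)), float(np.quantile(boot, a_hi))
```

Note that when more than half of the bootstrap estimates lie below the original estimate (which typically goes with a negative bias, M < O), z0 is positive and both bounds shift upward. That would match the pattern observed in the question.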
Dr. Jan-Michael Becker, BI Norwegian Business School, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Jan_Michael_Becker
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de
Re: Bias correcting confidence intervals
Dear Dr. Becker,
Thank you for your reply. That would explain it.
I do notice that the article by Nitzl et al. (2016) adds the bias to the percentile confidence interval, so maybe it is possible for that method as well?
"Another problem often occurs when the mean of the bootstrapped distribution (i.e., sample mean in most applications of the software tools (M)) for the indirect effect aM × bM is not equal to the estimated indirect effect (i.e., original sample in most of the software tools (O)) aO × bO (Chernick, 2011). As a result, researchers must correct for this bias in PLS, which can be accomplished by calculating the difference between the estimated indirect effect aO × bO from the path model and the mean value of the indirect effect aM × bM from the bootstrap sample. Consequently, the bias-corrected ci% confidence interval for an indirect effect a × b can be defined as:
[(k·(0.5 − ci%/2))th + (aO×bO − aM×bM); (1 + k·(0.5 + ci%/2))th + (aO×bO − aM×bM)]"
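As I understand the quoted formula, it is simply the percentile interval with both bounds shifted by the bias aO×bO − aM×bM. A minimal sketch of that reading (the function name is my own, not from the article):

```python
import numpy as np

def shifted_percentile_interval(boot, original, alpha=0.05):
    """Percentile bootstrap interval with both bounds shifted by the
    bias (original estimate minus bootstrap mean), following one
    reading of Nitzl et al. (2016)."""
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    bias = original - np.mean(boot)       # aO*bO - aM*bM
    return float(lo + bias), float(hi + bias)
```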
Re: Bias correcting confidence intervals
I know that Nitzl et al. do it this way, but it is not the most common approach that you find in the original textbooks by Efron & Tibshirani (1993), Davison & Hinkley (1997), or Chernick (2007).
Dr. Jan-Michael Becker, BI Norwegian Business School, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Jan_Michael_Becker
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de