P Value in Confirmatory Tetrad Analysis (CTA)

skono
PLS Junior User
Posts: 8
Joined: Wed Aug 17, 2016 4:02 pm
Real name and title: Shintaro Kono, Ph.D. Candidate

P Value in Confirmatory Tetrad Analysis (CTA)

Post by skono » Tue Oct 31, 2017 5:37 pm

Hello all,

I have a paper in which I used confirmatory tetrad analysis (CTA) with SmartPLS 3. When I specified the statistical parameters for the analysis, I followed Gudergan et al.'s (2008) practice and used p = .10 as the significance level. However, one of my reviewers questioned this and asked for a rationale. I revisited Gudergan et al., but they did not explain why they used .10 instead of the (conventional) .05 (and yes, don't get me started on the problem of blindly adopting .05). Does anyone know why they chose .10? Any insights will be appreciated!

All the best,
Shin

cringle
SmartPLS Developer
Posts: 791
Joined: Tue Sep 20, 2005 9:13 am
Real name and title: Prof. Dr. Christian M. Ringle
Location: Hamburg (Germany)
Contact:

Re: P Value in Confirmatory Tetrad Analysis (CTA)

Post by cringle » Sun Nov 05, 2017 5:43 pm

Well, all cut-off values and rules of thumb are arbitrary. We selected 10%, which is certainly more liberal than 5%, because we use the Bonferroni correction to account for the multiple testing problem. The Bonferroni correction reduces the per-test probability of error depending on the number of simultaneous tests. Stricter levels of, say, 5% or 1% most likely entail that none of the results become significant. Below you find an example (i.e., the QUAL construct of the corporate reputation example). At the 10% and 5% levels, you would reject the null hypothesis of a reflective measurement model (or effect indicator model, or common factor model) and opt for the formative (or composite indicator, or composite) alternative. At the 1% level, however, this conclusion no longer holds. But in all cases, especially the latter one, the adjustment is relatively harsh.

What are your options?
(1) Make the above argument (i.e., a more liberal probability of error combined with the conservative Bonferroni correction) - risky...
(2) Use the 5% level as requested by the reviewer - the best option if the results are reasonable.
(3) Use the 5% level with a less conservative correction for the multiple testing problem - the best option in case option (2) does not deliver the expected outcomes.

Best
Christian



QUAL tetrad                          | 10% two-sided CI (adj.) | 5% two-sided CI (adj.) | 1% two-sided CI (adj.)
1: qual_1,qual_2,qual_3,qual_4       | [-0.213, 0.553]         | [-0.267, 0.572]        | [-0.333, 0.654]
2: qual_1,qual_2,qual_4,qual_3       | [-0.162, 0.609]         | [-0.215, 0.652]        | [-0.273, 0.730]
4: qual_1,qual_2,qual_3,qual_5       | [ 0.084, 0.899]         | [ 0.030, 0.940]        | [-0.027, 1.007]
6: qual_1,qual_3,qual_5,qual_2       | [-0.677, 0.009]         | [-0.694, 0.005]        | [-0.767, 0.060]
7: qual_1,qual_2,qual_3,qual_6       | [-0.018, 0.808]         | [-0.075, 0.832]        | [-0.159, 0.921]
10: qual_1,qual_2,qual_3,qual_7      | [-0.122, 0.680]         | [-0.127, 0.667]        | [-0.226, 0.763]
13: qual_1,qual_2,qual_3,qual_8      | [-0.186, 0.435]         | [-0.218, 0.450]        | [-0.276, 0.498]
17: qual_1,qual_2,qual_5,qual_4      | [-0.506, 0.498]         | [-0.555, 0.497]        | [-0.663, 0.598]
23: qual_1,qual_2,qual_7,qual_4      | [-0.316, 0.642]         | [-0.334, 0.651]        | [-0.422, 0.745]
26: qual_1,qual_2,qual_8,qual_4      | [-0.648, 0.112]         | [-0.671, 0.139]        | [-0.745, 0.183]
30: qual_1,qual_5,qual_6,qual_2      | [-0.259, 0.507]         | [-0.255, 0.545]        | [-0.318, 0.607]
33: qual_1,qual_5,qual_7,qual_2      | [-0.281, 0.424]         | [-0.295, 0.474]        | [-0.333, 0.520]
42: qual_1,qual_6,qual_8,qual_2      | [-0.569, 0.209]         | [-0.601, 0.246]        | [-0.650, 0.293]
73: qual_1,qual_3,qual_7,qual_8      | [-0.345, 0.451]         | [-0.367, 0.470]        | [-0.415, 0.528]
85: qual_1,qual_4,qual_6,qual_7      | [-0.557, 0.320]         | [-0.570, 0.328]        | [-0.633, 0.381]
97: qual_1,qual_5,qual_6,qual_8      | [-0.313, 0.410]         | [-0.321, 0.441]        | [-0.420, 0.526]
100: qual_1,qual_5,qual_7,qual_8     | [-0.266, 0.416]         | [-0.316, 0.478]        | [-0.348, 0.515]
110: qual_2,qual_3,qual_6,qual_4     | [-0.136, 0.648]         | [-0.239, 0.707]        | [-0.245, 0.738]
121: qual_2,qual_3,qual_5,qual_7     | [ 0.011, 0.944]         | [ 0.034, 0.953]        | [-0.058, 1.041]
156: qual_2,qual_6,qual_7,qual_5     | [-0.361, 0.545]         | [-0.365, 0.590]        | [-0.472, 0.685]
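The Bonferroni logic behind these intervals can be sketched in code: with m simultaneous tetrad tests at overall level alpha, each bootstrap percentile CI is built at the adjusted level alpha/m, which widens the intervals as the overall level gets stricter. Below is a minimal Python sketch with synthetic one-factor data; the function names and data are hypothetical illustrations, not the SmartPLS implementation:

```python
import numpy as np

def tetrad(cov, i, j, k, l):
    # Vanishing tetrad: tau = s_ij * s_kl - s_ik * s_jl
    return cov[i, j] * cov[k, l] - cov[i, k] * cov[j, l]

def bootstrap_tetrad_ci(X, quad, alpha=0.10, n_tests=1, n_boot=2000, seed=0):
    """Percentile bootstrap CI for one tetrad, Bonferroni-adjusted.

    alpha   -- overall (familywise) significance level
    n_tests -- number of simultaneous tetrad tests
    """
    rng = np.random.default_rng(seed)
    i, j, k, l = quad
    n = X.shape[0]
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample rows with replacement
        cov = np.cov(X[idx].T)               # bootstrap covariance matrix
        stats[b] = tetrad(cov, i, j, k, l)
    adj = alpha / n_tests                    # Bonferroni adjustment
    lo, up = np.quantile(stats, [adj / 2, 1 - adj / 2])
    return lo, up

# Synthetic one-factor data: in the population all tetrads vanish,
# so the adjusted CI should cover zero.
rng = np.random.default_rng(1)
f = rng.normal(size=(500, 1))
X = f @ np.ones((1, 4)) * 0.8 + rng.normal(scale=0.6, size=(500, 4))

lo, up = bootstrap_tetrad_ci(X, (0, 1, 2, 3), alpha=0.10, n_tests=21)
print(f"adjusted CI: [{lo:.3f}, {up:.3f}]")
```

A tetrad is nonzero (the reflective model is rejected for that test) when its adjusted CI excludes zero, as in rows 4 and 121 of the table above at the 10% and 5% levels.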

skono
PLS Junior User
Posts: 8
Joined: Wed Aug 17, 2016 4:02 pm
Real name and title: Shintaro Kono, Ph.D Candidate

Re: P Value in Confirmatory Tetrad Analysis (CTA)

Post by skono » Sun Nov 05, 2017 7:39 pm

Hi Christian,

Thank you very much for your detailed response.

I also read some literature on the issues around Bonferroni and thought of using an alternative method -- one in which different p-value thresholds are applied to the individual tests within a given family of tests, and the strictest threshold is applied only to the "most significant" test (I forgot the name). But then I realized that the following quote from Armstrong (2014) indicates that CTA is one of those rare situations in which the use of Bonferroni logically makes sense: the Bonferroni correction is appropriate if "a single test of the 'universal null hypothesis' (Ho) that all tests are not significant is required" (p. 502).
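The step-down method described here matches Holm's procedure: the smallest p-value is held to the strictest threshold alpha/m, the next to alpha/(m-1), and so on, stopping at the first failure. A minimal sketch contrasting it with plain Bonferroni, using made-up p-values for illustration:

```python
def bonferroni_reject(pvals, alpha=0.05):
    # Every test is held to the same adjusted threshold alpha / m.
    return [p <= alpha / len(pvals) for p in pvals]

def holm_reject(pvals, alpha=0.05):
    """Holm step-down: the k-th smallest p-value is compared against
    alpha / (m - k); once one test fails, all larger p-values fail too."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject

pvals = [0.001, 0.010, 0.020, 0.400]   # illustrative only
print(bonferroni_reject(pvals))        # [True, True, False, False]
print(holm_reject(pvals))              # [True, True, True, False]
```

Holm controls the familywise error rate at the same level as Bonferroni but is uniformly less conservative, which is why it can rescue results like the 0.020 test above; this is the kind of "less conservative correction" Christian's option (3) refers to.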

I do agree about the arbitrariness of 5%, 10%, or whatever cutoff point; this is the type of issue that persists in applied fields like mine. I actually tried 5% and the results were almost the same (i.e., in only one of the 21 CTA tests did the original finding of a non-zero tetrad become non-significant due to the change in confidence level).

So, at the end of the day, your solution (2) worked. I wonder if another option might be to switch to Bauldry and Bollen's (2016) CTA command in Stata (or Bollen's original approach) to use a single-test approach...

Anyways, thank you very much again for the quick help!

Best,
Shin
