Composite reliability

Frequently asked questions about PLS path modeling.
Post Reply
nikolas7
PLS User
Posts: 14
Joined: Sun Jul 26, 2015 1:28 pm
Real name and title: Mr. Nikolas Plouti

Composite reliability

Post by nikolas7 » Sun Aug 02, 2015 5:39 pm

Hi all,

I have a reflective model and there is an issue with my measurement model: the composite reliability of some of my constructs is above 0.95. Is that a problem, and what can I do about it?
Any help is appreciated.

Thank you

jmbecker
SmartPLS Developer
Posts: 842
Joined: Tue Mar 28, 2006 11:09 am
Real name and title: Dr. Jan-Michael Becker

Re: Composite reliability

Post by jmbecker » Mon Aug 03, 2015 8:54 am

Why should a high composite reliability be a problem? Usually you want a high composite reliability.
Dr. Jan-Michael Becker, University of Cologne, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Ja ... v=hdr_xprf
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de

nikolas7
PLS User
Posts: 14
Joined: Sun Jul 26, 2015 1:28 pm
Real name and title: Mr. Nikolas Plouti

Re: Composite reliability

Post by nikolas7 » Tue Aug 04, 2015 6:31 pm

Hi Jmbecker,

Thank you for your response. I am asking this question because in the book A Primer on Partial Least Squares Structural Equation Modeling (Hair et al.) I read the following:
values above 0.95 are not desirable because they indicate that all indicator variables are measuring the same phenomenon and are therefore unlikely to be a valid measure of the construct.

Is that a valid argument or not?

jmbecker
SmartPLS Developer
Posts: 842
Joined: Tue Mar 28, 2006 11:09 am
Real name and title: Dr. Jan-Michael Becker

Re: Composite reliability

Post by jmbecker » Wed Aug 05, 2015 8:12 am

Well, you are here at the heart of measurement theory.
Taken as it is, the sentence is wrong. You want all your items to “measure the same phenomenon” in a reflective model; otherwise you don’t have a unidimensional construct.

Practically, the authors point to a problem that often occurs in empirical research, where people use redundant items that do not add any additional information but only repeat the same aspect of the phenomenon:
“I like product_A”
“I really like product_A”
“I like product_A a lot”

Sure, this measures the likeability of a product, but it could surely also be measured with a single item without losing information. To some degree, you want your items to tap into different aspects, which are all outcomes of the measured construct and which correlate highly, but are not the same and therefore not redundant.
Hence, a very high composite reliability can point to problems with redundancy in the item definitions. You should check that on theoretical grounds. It may pose a problem, but it need not. If your items tap into different aspects of your measured construct and are still highly correlated, then you simply have a good measurement model.
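This redundancy effect can be illustrated numerically. For standardized indicators, composite reliability (Dillon–Goldstein's rho) is computed from the outer loadings as (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A minimal sketch, with purely hypothetical loading values:

```python
# Composite reliability from standardized outer loadings:
# rho_c = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2)).
# All loading values below are hypothetical illustrations.

def composite_reliability(loadings):
    s = sum(loadings)                          # sum of loadings
    error = sum(1 - l ** 2 for l in loadings)  # indicator error variances
    return s ** 2 / (s ** 2 + error)

# Three near-redundant items ("I like product_A", "I really like
# product_A", ...) load almost identically high:
redundant = [0.97, 0.96, 0.97]
# Items tapping distinct facets still load high, but less uniformly:
distinct = [0.85, 0.80, 0.82]

print(round(composite_reliability(redundant), 3))  # above 0.95
print(round(composite_reliability(distinct), 3))   # around 0.86
```

With near-identical, very high loadings the reliability climbs past the 0.95 threshold discussed in the thread, while distinct-but-related items land in the conventionally "good" range.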
Dr. Jan-Michael Becker, University of Cologne, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Ja ... v=hdr_xprf
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de

nikolas7
PLS User
Posts: 14
Joined: Sun Jul 26, 2015 1:28 pm
Real name and title: Mr. Nikolas Plouti

Re: Composite reliability

Post by nikolas7 » Sat Aug 08, 2015 9:42 am

OK, thank you very much, that clears things up very well.

Mikkay
PLS Junior User
Posts: 2
Joined: Wed Jul 18, 2018 6:24 am
Real name and title: Ms Mikkay Wong

Re: Composite reliability

Post by Mikkay » Wed Jul 18, 2018 7:23 am

jmbecker wrote:
Wed Aug 05, 2015 8:12 am
Well, you are here at the heart of measurement theory. […]
Dear Dr Michael,

You mentioned that if items tap into different aspects of the measured construct and are still highly correlated, the measurement model is good. Is there any paper I can cite for this particular statement?

Thank you

jmbecker
SmartPLS Developer
Posts: 842
Joined: Tue Mar 28, 2006 11:09 am
Real name and title: Dr. Jan-Michael Becker

Re: Composite reliability

Post by jmbecker » Thu Jul 19, 2018 9:03 am

Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2017), A Primer on Partial Least Squares Structural Equation Modeling, write on page 112:
Values above 0.90 (and definitely above 0.95) are not desirable because they indicate that all the indicator variables are measuring the same phenomenon and are therefore not likely to be a valid measure of the construct. Specifically, such composite reliability values occur if one uses semantically redundant items by slightly rephrasing the very same question. As the use of redundant items has adverse consequences for the measures’ content validity (e.g., Rossiter, 2002) and may boost error term correlations (Drolet & Morrison, 2001; Hayduk & Littvay, 2012), researchers are advised to minimize the number of redundant indicators.
https://www.smartpls.com/documentation/ ... s-sem-book

It is not specifically mentioned there, but if you do not have semantically redundant items, but items that actually measure very different aspects of the construct domain (and that is something you need to find good arguments for), then the concerns about extremely high reliability are not warranted.

High reliability is generally desirable. Common criteria for measuring a construct’s reliability, such as Cronbach’s alpha and composite reliability, whose values range between 0 and 1, should be as high as possible to indicate good reliability. Following this logic, values above 0.9 or 0.95 are desirable because they indicate nearly perfect reliability of the measures.
However, in empirical research nearly perfect reliability must be regarded as utopian. Empirical research is usually never perfect, and several concerns come along with measures of nearly perfect reliability. As mentioned before, high reliability is usually desirable, so these problems are mostly of a practical nature. They give rise to concerns about inappropriate data collection (e.g., respondents being inattentive to the questions or following demand effects, and therefore answering question blocks with higher internal consistency than truthful answers would produce) or research strategies that jeopardize construct validity by optimizing construct development for good fit and high reliability (such as using redundant and synonymous item questions).
If you can rule out these problems, you can also be happy about reliability values above 0.95.
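The same pattern shows up for Cronbach's alpha: in its standardized form it depends only on the number of items k and the average inter-item correlation, alpha = k·r̄ / (1 + (k − 1)·r̄). A small sketch with hypothetical correlation values:

```python
# Standardized Cronbach's alpha from the average inter-item
# correlation r_bar over k items:
#   alpha = k * r_bar / (1 + (k - 1) * r_bar)
# The correlation values below are hypothetical illustrations.

def cronbach_alpha_standardized(k, r_bar):
    return k * r_bar / (1 + (k - 1) * r_bar)

# Semantically redundant items correlate near-perfectly:
print(round(cronbach_alpha_standardized(3, 0.95), 3))  # ~0.983
# Distinct but related facets of the same construct:
print(round(cronbach_alpha_standardized(3, 0.65), 3))  # ~0.848
```

Near-perfect inter-item correlations, as produced by rephrased versions of the same question, push alpha well above 0.95 even with only three items, which is exactly the red flag the quoted passage describes.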
Dr. Jan-Michael Becker, University of Cologne, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Ja ... v=hdr_xprf
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de

Mikkay
PLS Junior User
Posts: 2
Joined: Wed Jul 18, 2018 6:24 am
Real name and title: Ms Mikkay Wong

Re: Composite reliability

Post by Mikkay » Sun Jul 22, 2018 1:40 am

jmbecker wrote:
Thu Jul 19, 2018 9:03 am
[…] If you can rule out these problems you can also be happy about reliability values above 0.95.
Thank you so much Dr Michael. This is so helpful! Truly appreciate your reply.

Regards,
Mikkay

Derick
PLS Junior User
Posts: 1
Joined: Thu Oct 04, 2018 4:06 pm
Real name and title: Mr. Teoh Kok Ban

Re: Composite reliability

Post by Derick » Thu Oct 04, 2018 4:17 pm

jmbecker wrote:
Thu Jul 19, 2018 9:03 am
[…] If you can rule out these problems you can also be happy about reliability values above 0.95.
Hi Dr. Michael, I have the same problem too, and apparently it is not due to redundancy issues. Hence, could I cite what you have commented here, or from your paper or your book?

Aesop
PLS Junior User
Posts: 1
Joined: Tue Oct 16, 2018 1:35 pm
Real name and title: Danial Devil

Re: Composite reliability

Post by Aesop » Tue Oct 16, 2018 1:38 pm

jmbecker wrote:
Thu Jul 19, 2018 9:03 am
[…] If you can rule out these problems you can also be happy about reliability values above 0.95.
Thanks for the link.

Post Reply