loadings value

Frequently Asked Questions about the SmartPLS software. Please refer to this section first if you have problems with the software.
Diogenes
PLS Super-Expert
Posts: 905
Joined: Sat Oct 15, 2005 5:13 pm
Location: São Paulo - BRAZIL

Post by Diogenes » Sat Jan 08, 2011 12:37 pm

Hi Shan,

FORNELL, C.; LARCKER, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, v.18, p.39-50. http://skylab.mbaedu.cn/PMH/Data/article-8.pdf
HENSELER, J.; RINGLE, C. M.; SINKOVICS, R. R. (2009). The use of partial least squares path modeling in International Marketing. Advances in International Marketing, v.20, p.277-319. http://php.portals.mbs.ac.uk/Portals/49 ... cs-PLS.pdf



Hi Brisbane,

What happens if the outer loadings for INDICATORS are below 0.7?
Outer loadings below .7 --> some possible consequences are:
[1] --> AVE below .5 (problem with convergent validity) --> square root of AVE < correlations between LVs --> (problem with discriminant validity)
[2] --> composite reliability < .7 --> (problem with reliability)
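If it helps to see how these thresholds hang together, here is a minimal sketch in Python (not SmartPLS output; the loading values and the inter-LV correlation are made up for illustration) of how AVE, its square root, and composite reliability are computed from standardized outer loadings:

```python
# Illustrative only: hypothetical standardized outer loadings of one LV.
import numpy as np

loadings = np.array([0.82, 0.75, 0.55, 0.48])

# AVE = mean of the squared standardized loadings
ave = np.mean(loadings ** 2)

# Composite reliability (Fornell/Larcker, 1981):
# CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]
error_var = 1 - loadings ** 2
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + error_var.sum())

print(f"AVE       = {ave:.3f}")        # below .5 -> convergent validity problem
print(f"sqrt(AVE) = {np.sqrt(ave):.3f}")
print(f"CR        = {cr:.3f}")         # below .7 -> reliability problem

# Fornell-Larcker criterion: sqrt(AVE) must exceed the correlations between
# this LV and every other LV; 0.70 here is a hypothetical inter-LV correlation.
corr_with_other_lv = 0.70
print("Discriminant validity OK:", np.sqrt(ave) > corr_with_other_lv)
```

With these made-up loadings, AVE comes out around .44 (below .5) and sqrt(AVE) around .66, below the hypothetical inter-LV correlation of .70, which is exactly the chain of problems described in [1].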


What was the problem, actually? It is caused by most of the responses across constructs being 1, 2, or 3, which group as Strongly Disagree, etc.
In fact, this could be the cause, but is the content of this indicator related to the other indicators and consistent with the constitutive definition of the LV?
Did you do some pretest (content validity and face validity)?
See
Netemeyer, R. G.; Bearden, W. O.; Sharma, S. (2003). Scaling Procedures: issues and applications. Thousand Oaks: Sage Publications, Inc.


Hi Ali,
It was explained above.
If we remove the worst indicators (low outer loadings, below .7), the AVE and reliability will increase, but we must remember two issues (a rough numeric sketch follows after this list):
- Do the remaining indicators keep the meaning of the LV (content validity)?
- Was the initial idea confirmatory? If you want to keep a confirmatory status for your model, you should have a second sample to validate the model that was adjusted to the data in the first run.
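As a rough numeric sketch of that trade-off (the loadings are hypothetical, and the two questions above still have to be answered by judgment, not by code), dropping the weakest indicator raises AVE like this:

```python
# Hedged sketch: removing the lowest-loading indicator step by step raises AVE,
# but each removal must also be defensible on content-validity grounds.
import numpy as np

loadings = {"x1": 0.82, "x2": 0.75, "x3": 0.55, "x4": 0.48}  # hypothetical

def ave(values):
    arr = np.array(list(values))
    return float(np.mean(arr ** 2))

kept = dict(loadings)
print(f"start with {sorted(kept)}: AVE = {ave(kept.values()):.3f}")

# Drop the lowest-loading indicator until AVE >= .5 (keep at least two indicators).
while ave(kept.values()) < 0.5 and len(kept) > 2:
    worst = min(kept, key=kept.get)
    kept.pop(worst)
    print(f"dropped {worst}: AVE = {ave(kept.values()):.3f}")
```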

Best regards,

Bido

iris_afandiphd
PLS Senior User
Posts: 28
Joined: Thu Jun 24, 2010 5:38 am

Post by iris_afandiphd » Sat Jan 08, 2011 3:43 pm

Prof Bido,

Thank you for your simple and easy to understand explanation.

Yes, the pilot test has been done using 43 respondents for the face validity test. The results show that it is OK.

What happens in the data analysis part is that it seems all indicators should be dropped if their loadings are below 0.7. However, if I use 0.4 (Hulland, 1999) as the cutoff, it should be OK.

Thanks

rayouby
PLS Junior User
Posts: 5
Joined: Thu Jul 26, 2007 2:07 am

Outer Loadings for Formative LV

Post by rayouby » Fri Jan 21, 2011 7:31 am

Hello Professor Bido,

Do the rules you so kindly stated above change for formative LVs?

Regards,
Reem
Reem Ayouby, M.Sc.
Ph.D. Student
JMSB, Concordia University
Reem.Ayouby@gmail.com
r_ayouby@jmsb.concordia.ca

Diogenes
PLS Super-Expert
Posts: 905
Joined: Sat Oct 15, 2005 5:13 pm
Location: São Paulo - BRAZIL

Post by Diogenes » Fri Jan 21, 2011 1:19 pm

Hi,

In a formative model we do not expect the indicators to be correlated; for this reason, AVE, outer loadings, and composite reliability are not used to assess validity and reliability.

See the Journal of Business Research, Volume 61, Issue 12, December 2008
Formative Indicators – (special issue with 10 articles)
http://dx.doi.org/10.1016/j.jbusres.2008.01.009

Best regards,

Bido

haslindar
PLS Junior User
Posts: 8
Joined: Sat Jan 30, 2010 3:40 am

Post by haslindar » Tue Jan 25, 2011 10:52 pm

Thank you very much Prof Bido for the simple explanation.

But can you explain further what you mean by "having the second sample to validate the model that was adjusted to the data in the first run"? Does that mean I should collect another set of data?

Thanks a lot.

Regards,
Haslinda

Diogenes
PLS Super-Expert
Posts: 905
Joined: Sat Oct 15, 2005 5:13 pm
Location: São Paulo - BRAZIL

Post by Diogenes » Tue Jan 25, 2011 11:39 pm

Hi,

If we adjust the model to the data = exploratory context (the model will be OK for that sample).

If we test the model (without modifications) = confirmatory context, probably more generalizable.
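If it helps to make the second-sample idea concrete: when the overall sample is large enough, one common way to obtain a confirmatory check is to split the existing data randomly and re-estimate the unchanged model on the held-out half. A minimal sketch (the file name and the 50/50 split are hypothetical; the actual estimation is still done in SmartPLS):

```python
# Hedged sketch of a split-sample check: adjust the model on one half of the
# data (exploratory), then re-run the final, unchanged model on the holdout
# half (confirmatory-style check). The file name is hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("survey_responses.csv")  # hypothetical indicator data

adjust_sample, holdout_sample = train_test_split(data, test_size=0.5, random_state=42)

# Refine the model in SmartPLS on sample_adjust.csv (drop weak indicators, etc.),
# then estimate the final model once, without further changes, on sample_holdout.csv.
adjust_sample.to_csv("sample_adjust.csv", index=False)
holdout_sample.to_csv("sample_holdout.csv", index=False)
```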

Just this.

Best regards,

Bido
