PLS Predict: interpreting RMSE, MAE, and MAPE

Questions about the implementation and application of the PLS-SEM method that are not related to the usage of the SmartPLS software.
jjdekker
PLS Junior User
Posts: 3
Joined: Thu Oct 29, 2015 1:59 pm
Real name and title: drs. Hans Dekker

PLS Predict: interpreting RMSE, MAE, and MAPE

Post by jjdekker »

The resources page on SmartPLS.de does not yet provide any guidance on how to interpret RMSE, MAE, and MAPE. What are acceptable scores for these three measures, and why?

Thanks in advance.
Hans
jmbecker
SmartPLS Developer
Posts: 1282
Joined: Tue Mar 28, 2006 11:09 am
Real name and title: Dr. Jan-Michael Becker

Re: PLS Predict: interpreting RMSE, MAE, and MAPE

Post by jmbecker »

There are no acceptable scores or thresholds for these values. These criteria are used to compare models: the smaller the values, the better the model.
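
To make this concrete, here is a minimal sketch in Python (hypothetical numbers, not SmartPLS output) of how the three criteria are computed and used to compare two competing models on the same holdout data:

Code:

import numpy as np

def rmse(y, yhat):
    # root mean squared error: same units as y
    return np.sqrt(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    # mean absolute error: same units as y
    return np.mean(np.abs(y - yhat))

def mape(y, yhat):
    # mean absolute percentage error (assumes no zeros in y)
    return np.mean(np.abs((y - yhat) / y)) * 100

y      = np.array([4.0, 5.0, 3.0, 6.0, 5.0])   # observed holdout values
pred_a = np.array([4.2, 4.7, 3.4, 5.8, 5.1])   # predictions of model A
pred_b = np.array([4.5, 4.2, 3.9, 5.2, 5.6])   # predictions of model B

for name, pred in (("A", pred_a), ("B", pred_b)):
    print(name, round(rmse(y, pred), 3), round(mae(y, pred), 3),
          round(mape(y, pred), 1))
# Model A yields smaller values on all three criteria here,
# so it predicts better; there is no absolute cutoff.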
Dr. Jan-Michael Becker, BI Norwegian Business School, SmartPLS Developer
Researchgate: https://www.researchgate.net/profile/Jan_Michael_Becker
GoogleScholar: http://scholar.google.de/citations?user ... AAAJ&hl=de
Piddzilla
PLS Junior User
Posts: 7
Joined: Tue Nov 22, 2016 10:39 am
Real name and title: Peter Bergwall, Doctoral student

Re: PLS Predict: interpreting RMSE, MAE, and MAPE

Post by Piddzilla »

jmbecker wrote: There are no acceptable scores or thresholds for these values. These criteria are used to compare models: the smaller the values, the better the model.
Isn't the mean absolute percentage error (MAPE) an indication of the uncertainty of the prediction?

If the MAPE of a model is, say, 2.5 %, then we can say that the observed data deviate on average 2.5 % from our model's predictions. In other words, our prediction was 97.5 % correct. Whether that is good or bad depends on the specific situation: sometimes being 60 % correct is good enough, and sometimes being 99 % correct is not good enough. However, when comparing models that are supposed to predict the same thing, 2 % is of course "better" than 2.5 %. But is it significantly "better"?
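
For example (a hypothetical calculation, not real study data), with three observed values that each deviate 2.5 % from their predictions, the arithmetic works out as described above:

Code:

import numpy as np

y    = np.array([40.0, 50.0, 60.0])     # observed values
yhat = np.array([39.0, 48.75, 61.5])    # each prediction off by 2.5 %

mape = np.mean(np.abs((y - yhat) / y)) * 100
print(mape)        # 2.5  -> average percentage deviation
print(100 - mape)  # 97.5 -> "percent correct" in the sense above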

When it comes to RMSE and MAE, I have not quite figured out how to interpret them beyond the fact that, as jmbecker stated, "the lower, the better". But I think RMSE should be related to the units used to measure the data. That is, an RMSE value corresponding approximately to a MAPE of 2 % could be 0.01 just as well as 1,000, all depending on the size (and the mean) of the measurement units. MAPE produces percentages, while MAE and RMSE produce absolute values (I think).
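
This scale dependence is easy to demonstrate (hypothetical numbers again): multiplying the same data by 100 multiplies RMSE and MAE by 100, while MAPE is unchanged.

Code:

import numpy as np

y    = np.array([10.0, 12.0, 9.0, 11.0])
yhat = np.array([10.5, 11.4, 9.6, 11.3])

for scale in (1, 100):
    ys, ps = y * scale, yhat * scale
    rmse = np.sqrt(np.mean((ys - ps) ** 2))
    mae  = np.mean(np.abs(ys - ps))
    mape = np.mean(np.abs((ys - ps) / ys)) * 100
    print(scale, round(rmse, 3), round(mae, 3), round(mape, 2))
# RMSE and MAE grow with the measurement scale; MAPE stays the same.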

Again, when comparing the predictive quality of different models, the model displaying the lowest RMSE has the highest predictive quality.

Since RMSE, MAE, and MAPE each have their weaknesses (one is too conservative and another too liberal, sort of), they should be considered together when assessing the predictive quality of a model.
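
One way to see the conservative/liberal distinction (a hypothetical sketch): RMSE squares the errors before averaging, so it punishes a single large miss that MAE, which averages absolute errors linearly, treats the same as many small ones.

Code:

import numpy as np

uniform = np.array([0.5, 0.5, 0.5, 0.5])   # four equal prediction errors
outlier = np.array([0.1, 0.1, 0.1, 1.7])   # same total error, one big miss

for name, e in (("uniform", uniform), ("outlier", outlier)):
    print(name,
          "MAE =", np.mean(np.abs(e)),
          "RMSE =", round(np.sqrt(np.mean(e ** 2)), 3))
# Both patterns have MAE = 0.5, but RMSE rises from 0.5 to about 0.85
# for the outlier pattern, because squaring weights large errors more.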

These are all speculations - I initially went here myself to look for answers to these questions. :)

EDIT: Just ran the PLS Predict algorithm on the PLS-SEM BOOK: Corporate Reputation Extended model to see what happens. The RMSE and the MAE measures seem to measure the "same thing", although the RMSE values are always higher (more conservative) than the MAE values (more liberal). Among the constructs in the model, CUSA displays the lowest values on all three criteria (RMSE = 1.1030, MAE = 0.799, and MAPE = 18.6 %).