Just received my Guru Wax Report and it looks like (to me) they are double counting.
They calculate a grade point average as a function of:
40% Performance
30% Durability
20% Ease of Use
10% Availability
However, Performance is the average of the subjective weekly performance (how it looks) scores over an 8-week period.
So the performance scores inherently factor in durability in the first place.
Interestingly, P21S:
* Rated as good or better looking for 5 of the 8 weeks (better 1-3, equal 4-5),
* Rated just as easy to use as Zaino
* Better availability
Had a GPA well below Zaino's. Did I miss something?
P21S ended with a B+ for performance (same as Zaino) because durability fell off (a little) in week 6 and collapsed in week 8, after a snow and ice storm in week 7 (during which they could not take any measurements).
Now Zaino is a much more durable product, but it seems to have benefited twice in the overall GPA because of durability: durability is scored separately and then factored again into the overall performance score. Again, it looks like double counting to me.
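The double counting can be sketched with made-up numbers (none of these grades come from the actual report; the weights are the 40/30/20/10 split above, on a 4.0 scale):

```python
def gpa(perf_weekly, durability, ease, availability):
    # Performance is the mean of the subjective weekly "how it looks" scores.
    performance = sum(perf_weekly) / len(perf_weekly)
    return 0.4 * performance + 0.3 * durability + 0.2 * ease + 0.1 * availability

# Two hypothetical waxes, identical except one fades late in the 8-week test.
durable = gpa([4.0] * 8,              durability=4.0, ease=3.0, availability=3.0)
fading  = gpa([4.0] * 6 + [3.0, 2.0], durability=2.0, ease=3.0, availability=3.0)

# The fading wax is penalized twice for the same drop-off: its weekly
# performance average falls AND its separate 30% durability grade falls.
print(durable, fading)
```

Here the less durable wax loses 0.75 GPA points overall, but only 0.15 of that comes through the performance average; the other 0.60 is the separate durability weight hitting the same behavior a second time.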
Overall the report looks great, but this final score thing looks inaccurate.