One thing we've noticed is that if you're using a VE-AFR error histogram to tune VE tables, you have to smooth the adjusted area before performing another scan-adjustment cycle. Otherwise the HPT scanner generates AFR error percentages that are incorrect for the cells to which they're reported.
My theory is that the OS uses an averaging calculation for MAP readings that fall between VE table axis values, so if there's a spike in one of the averaged cells, the PCM calculates a value that's incorrect for the cell to which the error is reported in the histogram.
For example, let's say the PCM gets a MAP reading of 93 kPa. It would use the average of the VE values from the 90 kPa and 95 kPa columns of the VE table, but HP Tuners reports the AFR error in the 95 kPa cell of the histogram (it chooses the closest column; 93 is closer to 95 than to 90). Suppose there was a spike value of 1000 in the 95 kPa cell and the 90 kPa cell held 500. The PCM would calculate an average of 750 for the 93 kPa reading, which is 25% lower than the value in the cell to which HPT assigns the AFR error percentage. So if the AFR error is 12% at 93 kPa, HPT assigns 12% to the 95 kPa column in the histogram. The problem is that that column already has a VE value roughly 33% higher than the calculated average!
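Here's a rough Python sketch of that example, just to make the mismatch concrete. This is my theory of the PCM's behavior, not confirmed code; the cell values and the simple two-cell average are assumptions from the example above.

```python
# Assumed VE values from the example (not real table data)
ve_90 = 500    # VE value in the 90 kPa column
ve_95 = 1000   # spike value in the 95 kPa column

# If the PCM simply averages the two bracketing columns for a 93 kPa reading:
pcm_ve = (ve_90 + ve_95) / 2          # 750

# HPT reports the AFR error against the nearest column (95 kPa), whose value is:
histogram_cell_ve = ve_95             # 1000

# The value the PCM actually used is 25% below the cell HPT reports against:
shortfall = (histogram_cell_ve - pcm_ve) / histogram_cell_ve

print(f"PCM uses {pcm_ve}, histogram cell holds {histogram_cell_ve}, "
      f"shortfall {shortfall:.0%}")
```

Running that shows the 25% gap between the VE value the PCM worked with and the cell the 12% error gets pinned to.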
I would expect the HPT scanner to either:
1) Not report AFR error values to a cell unless the scanned MAP value is within 1-3% of that column's value, or
2) Split the AFR error between the two bracketing cells, weighted by where the scanned MAP value falls between them.
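For what it's worth, option 2 could look something like this. `split_error` is a hypothetical helper of my own, not anything in the HPT software; it just shows the proportional weighting I mean.

```python
def split_error(afr_error, map_reading, col_lo, col_hi):
    """Split a logged AFR error between the two bracketing histogram
    columns, weighted by where the MAP reading falls between them.
    (Hypothetical illustration of suggestion 2, not actual HPT behavior.)"""
    w_hi = (map_reading - col_lo) / (col_hi - col_lo)  # weight toward upper column
    w_lo = 1.0 - w_hi                                  # weight toward lower column
    return afr_error * w_lo, afr_error * w_hi

# A 12% error logged at 93 kPa, bracketed by the 90 and 95 kPa columns,
# would land mostly (but not entirely) in the 95 kPa cell:
err_90, err_95 = split_error(12.0, 93, 90, 95)
```

That way a reading at 93 kPa contributes 60% of its error to the 95 kPa column and 40% to the 90 kPa column, instead of dumping all of it into the nearest cell.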
Hopefully that makes sense to someone there at HPT, and they can confirm or deny that this is an issue.