I have street-tuned several turbo Japanese vehicles and decided to move over to the domestic world. I am a seasoned reader and have some calibration books in my repertoire, as well as some physics books, since I am currently attending university for an engineering degree. In one of the books I have by Greg Banish, who I assume is a well-known guy on this forum, there is a statement that reads as follows:
" For example, one may choose to test going from 2,000 RPM and 50 kPa to 2,000 RPM and 70 kPa. This simple tip-in event is typical of a normal traffic event in the vehicle"
I have never been so profoundly curious about a statement until today, hence me signing up to ask this. As far as I know, when MAP (kPa) and RPM are charted on the y and x axes respectively, the data forms a linear equation: a line with a positive slope.
The question is this: "How can someone be at steady state at 2,000 RPM with 50 kPa (about 50% load), and then at that same RPM magically increase the load without accelerating or making any TPS changes?"
Correct me if I am wrong, as I am not the best at tuning, but if the data plot forms a linear graph, then how can 2,000 RPM have TWO values: 50 kPa and 70 kPa?
Is this done on a Mustang dyno (a chassis dyno), where the rollers add resistance and thus create more load on the engine? Something along the lines of "a truck at 2,000 RPM with no trailer will be in the 50 kPa cell, and a truck with a trailer attached will be in the 70 kPa cell at 2,000 RPM."
Is that how it works?
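To put my trailer analogy in numbers (all table values below are invented, just a sketch of how I picture it): if the calibration table is indexed by both RPM and MAP, then the same 2,000 RPM column can land in different cells depending on how much load the engine is under.

```python
# Hypothetical 2,000 RPM slice of a MAP-indexed table.
# All numbers are made up for illustration; real calibrations differ.
map_axis = [30, 50, 70, 90]            # kPa (absolute), the table's load axis
ve_at_2000 = [55.0, 68.0, 79.0, 88.0]  # invented VE % values for each cell

def lookup(map_kpa):
    """Nearest-cell lookup: which cell does this MAP hit at 2,000 RPM?"""
    idx = min(range(len(map_axis)), key=lambda i: abs(map_axis[i] - map_kpa))
    return ve_at_2000[idx]

# Same 2,000 RPM, two different loads -> two different cells:
print(lookup(50))  # no trailer, light load  -> 68.0
print(lookup(70))  # trailer attached, heavy load -> 79.0
```

So if I have this right, RPM alone does not pin down the MAP value; the load axis is a second independent input.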
BY THE WAY, why does the HP Tuners software report MAP as positive across the board? Is it magnitude only? Because 100 kPa at 7,000 RPM would be so much "boost" in the intake manifold. I am assuming anything under 100 kPa is actually a negative (vacuum) value whose magnitude is reported as positive pressure instead of vacuum.
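My current understanding, which may well be wrong, is that the sensor reads absolute pressure, so a gauge (vacuum/boost) reading would just be the logged MAP minus barometric pressure. A minimal sketch of that conversion, assuming standard sea-level baro:

```python
BARO_KPA = 101.325  # standard sea-level barometric pressure, kPa

def map_to_gauge(map_kpa, baro_kpa=BARO_KPA):
    """Convert absolute MAP (as logged) to gauge pressure.
    Negative result = vacuum, positive = boost."""
    return map_kpa - baro_kpa

print(map_to_gauge(50))   # -51.325 -> deep vacuum at part throttle
print(map_to_gauge(100))  # -1.325  -> nearly atmospheric
print(map_to_gauge(150))  # 48.675  -> positive pressure, i.e. boost
```

If that is right, then 100 kPa absolute is roughly atmospheric rather than boost, but please correct me if I am misreading how the software logs it.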