After using Scanner for a short time now, I think that this feature would provide huge value.
One of the most critical parts of successful tuning is deciding which data is good and which is junk. Features like filtering help weed out errant data, but they don't tell you how scattered or precise the remaining data is within a given histogram cell.
Example: you're using STFTs to tune a table. You map your axes and labels to match the ECM table and populate the cells with STFTs. It might be tempting to just copy those cell averages over as corrections. But if the trim data is all over the place, there isn't a strong enough correlation to justify editing the table.
If you toggle between maximum and minimum values, and see a max of +25% and a min of -29%, it's probably best to be suspicious that the average of +5% might not be very significant. In reality, 90% of your cell data might be within a few percent of that +5% average, with only a few outliers giving you the large max/min. Or your data could be mostly over +20% and under -20%. You really need to use some type of statistical analysis to determine the confidence interval of your data:
https://en.wikipedia.org/wiki/Confidence_interval
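To make the idea concrete, here's a rough Python sketch (not Scanner code, just an illustration) of a mean confidence interval for a cell's trim samples. The `t_crit` value of 2.0 is an approximation of the 95% critical value for moderately sized samples, and the two data sets are made up to show two cells with the same +5% average but very different spread:

```python
import math
import statistics

def mean_confidence_interval(samples, t_crit=2.0):
    """Approximate 95% confidence interval for the mean.
    t_crit ~= 2.0 is a rough critical value for moderate n."""
    n = len(samples)
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / math.sqrt(n)  # standard error of the mean
    return (mean - t_crit * sem, mean + t_crit * sem)

# Two hypothetical cells, both averaging +5% trim:
tight = [4.5, 5.2, 4.8, 5.5, 5.0, 4.9, 5.3, 4.8]       # consistent data
wide = [25.0, -15.0, 22.0, -19.0, 5.0, 18.0, -16.0, 20.0]  # scattered data

print(mean_confidence_interval(tight))  # narrow interval: the +5% is trustworthy
print(mean_confidence_interval(wide))   # wide interval: same average, little confidence
```

Both cells report the same +5% average, but the interval for the scattered cell is enormous, which is exactly the information a simple average hides.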
If Scanner allowed you to set a range, let's say +/-4%, it could calculate the percentage of the data that falls within +/-4% of the cell average, and then color the cell based on that value.
Maybe you say that if 80% of that cell data falls within +/-4% of average, we color it green; and if only 20% of the data is within +/-4%, we color it red.
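The coloring rule could be as simple as the sketch below. Again, this is just an illustration of the proposed logic, not Scanner's API; the band and thresholds are the hypothetical +/-4% / 80% / 20% numbers from above:

```python
import statistics

def cell_color(samples, band=4.0, green_pct=80.0, red_pct=20.0):
    """Color a histogram cell by how much of its data falls within
    +/-band of the cell average. Thresholds are illustrative."""
    avg = statistics.mean(samples)
    within = sum(1 for s in samples if abs(s - avg) <= band)
    pct = 100.0 * within / len(samples)
    if pct >= green_pct:
        return "green", pct   # tight data: average is actionable
    if pct <= red_pct:
        return "red", pct     # scattered data: don't trust the average
    return "yellow", pct      # somewhere in between

print(cell_color([4.5, 5.2, 4.8, 5.5, 5.0]))   # all within +/-4% of average -> green
print(cell_color([25.0, -29.0, 20.0, -20.0, 5.0]))  # nothing close to average -> red
```

A cell could then be painted with that color on the histogram, so the scatter is visible at a glance instead of requiring you to toggle between min/max/average views.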
Being able to set up histogram charts like this would let you plot fuel trims (or other relevant PIDs) against a number of different parameters (MAF period, fuel flow rate, fuel load, temps, etc.) and see which has the strongest correlation, guiding you toward what most needs tuning.
Thoughts?