
Error Rates and Drift in Precision

Updated: Oct 10, 2022


Purpose:


To establish that the ATC/CFS is not sampling from aging instruments out in the field to determine error rates


To establish that the ATC/CFS is not allowing for drift in precision of instruments as they age without re-calibration


To suggest possible acceptable or normal error rates


Sample cross-examination of a CFS expert on an instrument that has an error rate of 22% in the 50 cal. checks prior to the subject tests


Excerpt from IACT - International Association for Chemical Testing Newsletter
IACT Newsletter March 2011 excerpt

Q. Now, I just want to go back to the question of the last 50 cal checks and, and error rates on instruments. And I’m going to show you a paper in the newsletter in the, that appeared in the IACT, I-A-C-T, newsletter. It’s the International Association for Chemical Testing. Are you familiar with that organization?

A. Yes.

Q. And it’s under authored by someone by the name of Rod Goldberg...

A. Who works for...

Q. ...who works for Washington State Patrol.

A. Yes, it is.

Q. All right, you’ve heard of that person before?

A. Yes.

Q. All right, now that individual is taking a look at the whole question of error rate analysis with respect to evidentiary breath testing in the article.

A. Without reading it, I will take it at your word.

Q. All right, and he starts out in his example saying,

Assume that a particular breath test instrument has experienced a calibration check error rate of three percent of its tests over a two-year period. During this time we assume the...

A. Sorry, can you point me to where you’re reading that?

Q. Oh sure, under the word example.

A. Okay.

Q. It’s right here.

A. Yes.

Q. On the second column on the front page.

Assume that a particular breath test instrument has experienced a calibration check error rate of three percent of its tests over a two-year period. During this time, we assume that the instrument has been operating in a state of statistical control and thus accept the three percent error rate as the common cause rate or normally expected rate of occurrence.

Right?

A. That’s what it says.

Q. In Ontario, do we have any information about the error rate of calibration checks for approved instruments out in the field?

A. No, we do not.

Q. No one is collecting data on it?

A. That is correct.


[Quaere: Is this a systemic problem in Ontario that should result in section 8 challenges by defence counsel?]


Q. Whatsoever.

A. Calibration checks at the time of the breath test are what is used to determine the instrument is in proper working order. Washington State does not do calibration checks with every breath test. They do unknown samples on a monthly basis.

Q. Yes.

A. To verify that.

Q. Yes. But with respect to the instruments that out [sic], are out in the field in Ontario, nobody is following up with respect to calculating what is the normal error rate for control checks with respect to Ontario’s Intoxilyzer 8000Cs.

A. Correct.

Q. And in the particular case before the court, we’ve got an error rate — I mean, you, you feel that with respect to the last 50 calibration checks preceding the subject test of Mr. H...

A. Yes.

Q. ...your perspective is that 11 of those, 11 out of the 50 were failures?

A. Yes.

Q. That’s a 22 percent failure rate.

A. If you’re using that percentage, yes.

Q. I want to suggest to you that that is a huge failure rate compared to what we should normally be experiencing with respect to an Intoxilyzer 8000C that’s working properly.

A. The instrument is working properly. It’s detected a concentration that is below the acceptable range and the breath tech took some kind of action, as required, in order to rectify that problem.

Q. But isn’t the...

A. Because it’s unacceptable and breath testing, again, would not proceed if that was a result that was obtained during an actual subject test.


[Quaere: But it's still an error rate, an error rate that would not necessarily be apparent to the QT who is conducting subject breath tests a few hours or a few days following the tests that resulted in control test failures. Perhaps this cross-examination should have gone in a different direction at this point and the defence should have called an expert on error rates such as the author of the IACT article.]


Q. But isn’t the big issue the calibration of the instrument? And by calibration I’m talking about calibration across the measuring interval. Isn’t that the big issue that we’re looking for if we’re doing any kind of error rate analysis?

A. If you were doing it, yes.

Q. And in this case, we’ve got an error rate of 22 percent, which I’m suggesting to you is a huge error rate.

A. Using the data that you provided, yes. If you look at the data that I provided, no, it’s not. And that was the additional 11 samples that I went through with my colleague over the phone and got 11 additional calibration checks that were acceptable.

Q. So we toss out...


[Respectfully, the CFS expert is suggesting that we ignore the inconvenient data.]


A. So there’s variability associated with those calibration checks. There’s no denying that. I mean, you look at the data, you can see there’s a wide range from anywhere from 91 to 97.

Q. Yes, but that variability — I mean — and one of the problems with variability in evidentiary breath testing is that we have all sorts of different sources of variability. There is variability...

A. Yes.

Q. There is human variability and there is instrumental variability and there are all sorts of other variabilities attached to all different aspects of the process.

A. Subject variability, yes.

Q. And usually subject variability is the big one.

A. Oh yes.

Q. But in our particular case...

A. But not with calibration checks; with the subject tests.

Q. But in our particular case, we’ve got a situation where, notwithstanding whatever other variabilities there are that are out there, we have an instrument that is wandering, I’d respectfully suggest to you, in a very, very wide range of values. We are seeing in the last 50 calibration checks on the instrument, a wide variation in results as expressed by the standard deviation calculation that I put to you. We’re seeing the instrument producing indications that show huge analytical variability; not normal analytical variability.

A. That’s your language, and that’s your opinion. In my opinion, this is normal variability that you see in breath testing.

Q. But the normal analytical variability that’s talked about in the training is what?

A. Analytical. Subject.

Q. The analytical variability.

A. Yes.

Q. What’s the analytical variability that’s noted in the Intoxilyzer 8000C training manual?

A. Probably three percent or less.

Q. But here we have results that are, even at 100 milligrams per 100 mils, without even getting into the question of linearity, we have an instrument that is wandering with results that are consistently low. That’s, after all, the reason why the instrument was taken out of service. [immediately following the subject tests that were the subject of this litigation]

A. You’re using the word wandering as if the instrument calibration is moving all over the place. The calibration probably hasn’t shifted but the results that are being obtained by it are variable and given the environment that these tests are obtained under, and I, I think I referred to that last day, that in a field sort of situation, there’s always going to be more variability associated with those results than there is going to be in the laboratory where you have a controlled environment.

Calculation of standard deviation for last 50 cal. checks
Spreadsheet calculation by the cross-examiner of standard deviation respecting the last 50 cal. checks prior to the client's subject tests, all wet-bath temperatures 34.0 ± 0.2 °C.
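
[For readers who want to reproduce this kind of analysis, the arithmetic is simple. The Python sketch below shows one way to compute a failure rate and a sample standard deviation from a run of calibration checks. The readings and the 90-110 mg/100 mL acceptable range are assumptions for illustration only; the actual 50 values are in the instrument's records and the cross-examiner's spreadsheet, not reproduced here.]

import statistics

def cal_check_summary(readings, low=90, high=110):
    # Failure rate and sample standard deviation for a run of calibration checks.
    # The acceptable limits (mg/100 mL) are assumed for illustration; the
    # transcript excerpt above does not state them.
    failures = [r for r in readings if not (low <= r <= high)]
    failure_rate = len(failures) / len(readings)   # e.g. 11 out of 50 = 22%
    std_dev = statistics.stdev(readings)           # sample standard deviation
    return failure_rate, std_dev

# Hypothetical readings (mg/100 mL), for illustration only.
example = [97, 95, 88, 96, 91, 93, 89, 97, 94, 92]
rate, sd = cal_check_summary(example)
print(f"failure rate: {rate:.0%}, sample std dev: {sd:.2f} mg/100 mL")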

Q. And so you’re suggesting that receiving results that result in a standard deviation well beyond three milligrams per 100 mils [3.859 in the spreadsheet calculation above] is acceptable precision in an Intoxilyzer 8000C?

A. I don’t believe I said that, but when you calculate it using the data that I use, it’s less than three percent. It’s actually 2.05 percent.


[During the coffee break the witness, with the help of another CFS scientist, prepared his own standard deviation calculation removing (what I call) the "inconvenient data".]
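
[To see the mechanical effect of that recalculation, compare the spread of a full run of checks with the spread after the out-of-range checks are dropped. The numbers below are hypothetical placeholders, as in the earlier sketch; only the arithmetic is being illustrated.]

import statistics

# Hypothetical calibration checks (mg/100 mL); the real data are in the
# instrument's records, not this post.
readings = [97, 95, 88, 96, 91, 93, 89, 97, 94, 92]
in_range_only = [r for r in readings if 90 <= r <= 110]  # drop the low, out-of-range checks

print("all checks:    ", round(statistics.stdev(readings), 3))
print("in-range only: ", round(statistics.stdev(in_range_only), 3))
# Dropping the extreme low readings will typically shrink the computed
# standard deviation, which is why the choice of data set matters so much.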


Q. And that’s because you are throwing out — you’re randomly throwing out some of the data.

A. I’m not randomly throwing out data. If I was doing that I would be taking the highest results as well. But in this case, I’m using results that are within the acceptable range, right? We know that the calibration check can vary; it does. It’s from instrument to instrument, from location to location, from operator to operator, from the solutions that are used, the environment that it finds itself in with respect to ambient alcohol or temperature; all those factors can affect the calibration check result that’s obtained. It doesn’t necessarily mean that the instrument is fluctuating with respect to its calibration. The results are changing, yes. And that’s going to be based on each individual situation. But in this case, we have two calibration checks for this subject that were obtained, that were acceptable within the acceptable range, and two breath tests that were taken at least 15 minutes apart that are within good agreement. And that criteria, along with the diagnostic tests that are conducted during each of the subject tests, were also successfully passed. And that gives me the confidence that the results are reliable and accurate.


[Should a scientist throw out inconvenient data? Should the inconvenient fact of a standard deviation of 3.859, i.e. well beyond 3.0, be replaced with rules-of-thumb: two cal. checks within the acceptable range, and two breath tests at least 15 minutes apart within 20 mg/100 mL agreement? When the manufacturer's specifications advertise a precision, a std. deviation, of better than 3?

Shouldn't forensic "science", if it is a science, require studies of error rates for the equipment and methodology used by police? With respect, ignoring error rates is not good science. It is an indication of systemic problems in forensic science in Ontario. Our colleagues in the United States are attempting to address the issue of error rates in forensic science. It is an uphill battle. See the 2016 report to President Obama by the President's Council of Advisors on Science and Technology (PCAST) on error rates and forensic science generally.]
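
[As an illustration only, and not an argument advanced in the transcript or in the IACT article: if an instrument's true common-cause failure rate really were the 3% assumed in Goldberg's example, 11 or more failures in 50 calibration checks would be an extraordinarily improbable outcome. A minimal binomial calculation makes the point.]

from math import comb

def prob_at_least(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p), computed directly from the definition.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If the common-cause calibration-check failure rate were 3% (the rate assumed
# in the IACT example quoted above), the chance of seeing 11 or more failures
# in 50 checks would be vanishingly small.
print(prob_at_least(11, 50, 0.03))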
