Stephen Biss

Scientific Validity of Feature Comparison

Updated: Aug 3, 2022

With respect to error rates in evidentiary breath testing, please see the blog entry "Error Rates and Drift in Precision".


In September 2016, the President's Council of Advisors on Science and Technology delivered its report: Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods. We should pay close attention to this Report in Canada when we consider, in Court, the scientific reliability of DNA analysis, bitemark analysis, latent fingerprint analysis, firearms analysis, footwear analysis, and hair analysis.

In their covering letter to President Obama, the authors wrote:

"PCAST concluded that there are two important gaps: (1) the need for clarity about the scientific standards for the validity and reliability of forensic methods and (2) the need to evaluate specific forensic methods to determine whether they have been scientifically established to be valid and reliable. Our study aimed to help close these gaps for a number of forensic “feature-comparison”methods—specifically, methods for comparing DNA samples, bitemarks, latent fingerprints, firearm marks, footwear, and hair."

Feature-comparison methods are defined as follows (page 1):

"“feature-comparison” methods—that is, methods that attempt to determine whether an evidentiary sample (e.g., from a crime scene) is or is not associated with a potential “source” sample (e.g., from a suspect), based on the presence of similar patterns, impressions,

or other features in the sample and the source. Examples of such methods include the analysis of DNA, hair, latent fingerprints, firearms and spent ammunition, toolmarks and bitemarks, shoe prints and tire tracks, and handwriting."

Feature-comparison methods are widely used in Canadian Courts. Police collect evidence and attempt to match DNA in sexual assault cases, latent fingerprints of juveniles in break and enter cases, firearms in weapons charges, and tire tracks in dangerous driving charges.

The President's Council found, from its literature review, serious problems with feature-comparison methods (page 16):

"The questions that DNA analysis had raised about the scientific validity of traditional forensic disciplines and testimony based on them led, naturally, to increased efforts to test empirically the reliability of the methods that those disciplines employed."

"...reviews have found that expert witnesses have often overstated the probative value of their evidence, going far beyond what the relevant science can justify. Examiners have sometimes testified, for example, that their conclusions are “100 percent certain;” or have “zero,” “essentially zero,” or “negligible,” error rate. As many reviews—including the highly regarded 2009 National Research Council study—have noted, however, such statements are not scientifically defensible: all laboratory tests and feature-comparison analyses have non-zero error rates."

Forensic feature-comparison technologies, both current and newly developed, all require assessment of foundational validity. In other words, is this type of forensic matching reliable enough that we should be using it in a Youth Court or an adult criminal Court? The Council recommended independent, ongoing evaluations of the foundational validity of feature-comparison methods. Such evaluations need to be done by an agency or agencies with no stake in the outcome and independent of police, and they need to be based on real empirical science: the assessment of error rates.

The literature reviewed by the Council indicated, among other findings:

  1. An error rate in hair comparison analysis as high as 11%.

  2. No appropriate empirical studies to support the foundational validity of footwear analysis to associate shoeprints with particular shoes.

  3. Bitemark analysis does not meet the scientific standards for foundational validity, and is far from meeting such standards.

  4. Error rates in firearms analysis of 1 in 19 or 1 in 46.

  5. False-positive rates in fingerprint comparison of 1 in 18 and 1 in 306.

The Council suggests that error rates should be presented when matters go to Court and that testimony should not overstate what has been empirically established. The Judge or Jury should hear about the error rates associated with the particular type of feature comparison.
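The recommendation becomes concrete when a study's raw counts are turned into a figure such as "1 in 306", ideally together with an upper confidence bound so that a small study cannot be used to understate the risk. The short Python sketch below shows one way such figures can be computed; the study counts and the function name are hypothetical illustrations for this blog, not figures or code from the PCAST Report.

```python
# Minimal sketch: turning raw study counts into a false-positive rate and an
# upper confidence bound, the kind of figure that could accompany
# feature-comparison testimony. The study numbers below are hypothetical
# placeholders, not figures from the PCAST Report.

from scipy.stats import beta


def error_rate_with_upper_bound(false_positives: int, comparisons: int,
                                confidence: float = 0.95):
    """Return the observed false-positive rate and a one-sided
    Clopper-Pearson (exact binomial) upper bound at the given confidence.
    Assumes at least one correct exclusion (comparisons > false_positives)."""
    observed = false_positives / comparisons
    # Upper bound of a one-sided exact binomial confidence interval.
    upper = beta.ppf(confidence, false_positives + 1,
                     comparisons - false_positives)
    return observed, upper


if __name__ == "__main__":
    # Hypothetical proficiency study: 6 false positives in 1,100 comparisons.
    observed, upper = error_rate_with_upper_bound(6, 1100)
    print(f"Observed false-positive rate: 1 in {round(1 / observed)}")
    print(f"Upper 95% bound:              1 in {round(1 / upper)}")
```

The choice of an upper bound rather than a bare point estimate reflects the Report's caution about overstating certainty; the exact interval method used here is one common statistical choice, not necessarily the one used in any particular study.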

Lawyers need to seek out, through disclosure or their own literature review:

  1. Error rate studies, including those conducted on case-like samples.

  2. Studies that demonstrate scientific validity.

  3. The proficiency tests and test results for the examiner.

  4. Audits documenting errors or anomalies in the laboratory.

  5. Laboratory accreditation.

John Oliver did an entertaining review of this report on "Last Week Tonight with John Oliver" (HBO, 10-01-17).
