In this post, I write about the issues with the “science” of latent fingerprint analysis. Latent fingerprint analysis is the method by which an analyst compares an unknown fingerprint (a latent print) recovered from a crime scene, item, or other source to a known print. After comparing the two, the analyst makes a determination as to whether the unknown print matches the known print, thereby leading to an identification.
Crime shows like CSI, NCIS, and the like have created many misperceptions about the collection, importance, and use of crime scene evidence, including fingerprints. These shows stress the scientific nature of criminal investigations, suggesting near-infallibility on the part of the on-screen scientists (I mean, I’ve never seen them get it wrong on CSI…). This has likely contributed to jurors in real cases treating forensic evidence as the sine qua non, the absolutely necessary element, of proof of guilt. Yet not all forensic sciences are created equal, and the methodology behind fingerprint analysis is particularly problematic.
The President’s Council of Advisors on Science and Technology (PCAST) published a report in September 2016 that set out a number of concerns regarding the methodologies behind latent print analysis (and other feature-comparison methods such as DNA analysis, bite mark analysis, etc.). Regarding fingerprinting specifically, PCAST wrote, “There are also nascent efforts to begin to move the field from a purely subjective method – although there is still a considerable way to go to achieve this important goal.” PCAST Report at 88 (emphasis added).
It is quite shocking how subjective fingerprint analysis really is, and more shocking still that a field that has been around since the 1800s is only now beginning to become objective. The ACE or ACE-V method used by latent print analysts consists of three or four steps: Analysis, Comparison, Evaluation, and Verification (which is optional – how can that be?). At each stage, the analyst makes purely subjective decisions. PCAST Report at 89-90.
In the Analysis step, the analyst examines the latent fingerprint and marks features of interest on it. There are no guidelines governing what counts as a “feature”: the analyst simply marks whatever he or she believes to be one, even though it could be a smudge, imperfection, distortion, or something else. Nor is there any minimum number of features that must be marked before comparison can proceed. Thus, an analyst could mark just one or two features and attempt an analysis if he or she chose.
In the Comparison step, the analyst compares the known print to the latent print, attempting to locate in the known print the features marked during Analysis. This leads seamlessly into the Evaluation step, in which the analyst determines whether the latent and known prints match.
The comparison of features and the analyst’s final conclusion are, again, purely subjective. No standards exist regarding how many features must match, how close the match must be, or anything else. Thus, even if an analyst sees differences between the latent and known prints, they could still conclude, based on their training and experience (and lacking any objective standard), that the prints match.
Once the Evaluation is complete, there may or may not be a Verification step! This is the most difficult aspect of all to understand: how can a purely subjective method NOT be verified? The omission seriously undercuts any credibility surrounding these examinations.
Equally problematic is the nature of the verification method when used. According to PCAST’s research, in most labs “only identifications are verified, because it is considered too burdensome, in terms of time and cost, to conduct independent examinations in all cases (for example, exclusions).” PCAST Report at 89-90. This makes the verification “not blind” because “the second examiner knows the first examiner reached a conclusion of proposed identification, which creates the potential for confirmation bias.” Id. at 90.
In addition to the wholly subjective nature of the analysis itself, there are significant problems with the language analysts use when testifying in court. Prior to 2009, the FBI maintained that “Of all the methods of identification, fingerprinting alone has proved to be both infallible and feasible.” PCAST Report at 87 (quoting Federal Bureau of Investigation, The Science of Fingerprints, U.S. Government Printing Office, (1984) at iv) (emphasis added).
The FBI’s position on fingerprinting no doubt bolstered analysts’ confidence to testify “that their conclusions are ‘100 percent certain;’ have ‘zero,’ ‘essentially zero,’ ‘vanishingly small,’ ‘negligible,’ ‘minimal,’ or ‘microscopic’ error rate; or have a chance of error so remote as to be a ‘practical impossibility.’” PCAST Report at 29. Yet, as PCAST observed, these conclusions are not supported by the science. Id. One might assume that such sweeping representations rest on error-rate studies supporting them. In fact, the error-rate studies conducted to date to test the fallibility of latent print analysis have been far from rigorous and have produced widely varying error rates. PCAST Report at 91-98.
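To make the error-rate point concrete, consider how PCAST suggested such rates be reported: not just the false positive rate observed in a study, but also the upper 95% confidence bound on that rate, which reflects how little a small study can actually rule out. The sketch below is a minimal illustration of that calculation; the counts are hypothetical and are not drawn from any of the studies PCAST reviewed.

```python
# A minimal sketch of the kind of calculation PCAST discussed when
# reporting a study's false positive rate: pair the observed rate with
# its upper 95% confidence bound (Clopper-Pearson), since a small study
# cannot rule out a much higher true error rate.
# NOTE: these counts are hypothetical, not taken from any actual study.
from scipy.stats import beta

false_positives = 5    # hypothetical erroneous "identifications"
comparisons = 3000     # hypothetical nonmated (known non-match) comparisons

observed_rate = false_positives / comparisons

# One-sided 95% Clopper-Pearson upper bound on the true false positive rate
upper_bound = beta.ppf(0.95, false_positives + 1, comparisons - false_positives)

print(f"Observed false positive rate: 1 in {1 / observed_rate:,.0f}")
print(f"Upper 95% confidence bound:   1 in {1 / upper_bound:,.0f}")
```

Under these made-up numbers, an observed rate of about 1 in 600 is statistically consistent with a true rate closer to 1 in 300, a far cry from “zero” or a “practical impossibility.”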
Given the foregoing problems with fingerprint analysis, it is hard for me to understand how anyone can accept it as a science, much less base a decision to convict or acquit a person on the subjective findings of one, or perhaps two (if verification is used), analysts. It also makes situations like the one currently pending in Orlando, Florida worthy of close scrutiny.
The Orange County Sheriff’s Office, which serves the Orlando area, recently removed from duty an experienced latent print analyst. Marco Palacio had 18 years of experience with the office, but he was let go due to “clerical errors” in his work on fingerprint cards; apparently, he mismarked some cards and made other similar errors. Over 2,600 lawyers in the Orlando area have since been notified, but they are left to sort through the paperwork to figure out whether Mr. Palacio’s work affected one of their clients.
Given the nature of the issues on Mr. Palacio’s part (i.e., carelessness and lack of attention to detail), how can ANY of his conclusions be considered legitimate under a methodology that is purely subjective and requires close attention to detail? Unfortunately, it will probably be very difficult to determine whether any analysis or comparison problems occurred, and Mr. Palacio will likely defend his conclusions as infallible, just as he was taught when he was trained in the late 1990s.