
Scientific Supermen: Fingerprint Experts and Others Who Exceed The Bounds, (1999) 35 Crim.
L. Bull. 234, 243-246 (describing the same two cases, as well as a case in New York and two in
North Carolina); James E. Starrs, A Miscue in Fingerprint Identification: Causes and Concerns,
12 J. of Police Sci. & Admin. 287 (1984).
One such case is reported in State v. Caldwell, 322 N.W.2d 574 (Minn. 1982).  The
prosecution’s fingerprint expert in Caldwell, a board certified member of the IAI, with more than
14 years of experience, testified that a particular latent print at issue in the case had been made
by the defendant’s right thumb.  Starrs, A Miscue in Fingerprint Identification, supra, at 288.
The examiner based his opinion on 11 points of similarity that he had charted.  Id.  A second
fingerprint expert, also a board certified member of the IAI, confirmed the first examiner’s
finding, after being consulted by the defense.   Id.   Following the defendant’s conviction for
murder, however, it was definitively established that both of these certified fingerprint experts
had erred.  Caldwell, 322 N.W.2d at 585.  The defendant’s conviction was accordingly
reversed.  Id.  
“Perhaps the most astounding and colossal fingerprint identification error that has yet
been made occurred in England in 1997.”  Starrs, Scientific Supermen, supra, at 244-45.  In that
case, two latent prints that had been recovered from a burglary crime scene were each found to
have at least sixteen points in common with two of Andrew Chiory’s inked prints.  These
identifications, pursuant to standard Scotland Yard procedures, had been triple checked prior to
the defendant’s arrest.  After the defendant had spent several months in jail, however, the
identifications were found to be erroneous.
Professor Starrs also describes how the same tragedy had occurred before in England.  In
1991, Neville Lee was arrested for the rape of an eleven-year-old girl because his fingerprints
matched those of the offender at 16 points of comparison.  The fingerprint error was discovered
only when another man confessed to the crime.  Id. at 245.[15]
________________________________________
[15] For other documented cases of false identifications, see James E. Starrs, More
Saltimbancos on the Loose? -- Fingerprint Experts Caught in a Whorl of Error, 12 Sci. Sleuthing
Accordingly, it is beyond dispute that “[r]egardless of its verbal trappings the science of
fingerprint identifications is in no sense infallible, or flawless.”  Starrs, Scientific Supermen at
243.  The government’s own expert in Mitchell has acknowledged as much.  See David L.
Grieve, Reflections on Quality Standards, 16 Fingerprint Whorld 108, 110 (Apr. 1990) (“It is
true that some overly zealous North American examiners have given testimony concerning false
identifications when they believed the identifications were valid.”).  What remains unknown,
however, is the rate at which misidentifications take place.  As commentators have recognized,
“it is difficult to glean information about cases of error because they rarely produce a public
record, and the relevant organizations and agencies tend not to discuss them publicly.”   Simon
A. Cole, Witnessing Identification: Latent Fingerprinting Evidence and Expert Knowledge, 28
Social Studies of Science 687, 701 (Oct.-Dec. 1998).  Moreover, as discussed above, there have
been no controlled studies conducted so as to determine an error rate for latent print examiners.
“Unfortunately, although there is extensive collective experience among casework examiners,
there has been no systematic study such as that described above.” Stoney, Fingerprint
Identification, supra, § 21-2.1.2 at 66.  
Just how prevalent the problem of false identifications may actually be, however, can be
seen, at least to some extent, from the astonishingly poor performance of latent print examiners
on crime lab accreditation proficiency exams.  On these exams, latent print examiners are
typically provided with several latent prints along with a number of “ten print” inked impressions
to compare them with.  Commencing in 1995, the test provider, Collaborative Testing
Services, began to include, as part of the test, one or two “elimination” latent prints made by an
individual whose inked impressions had not been furnished.
The results of the 1995 exam were, in the words of the government’s expert in Mitchell,
both “alarming” and “chilling.”  Grieve, Possession of Truth, 46 J. Forensic Ident. 521, 524.
________________________________________
[15 cont’d] Newsl. 1 (Winter 1998) (detailing several erroneous identifications discovered in
North Carolina and Arizona); see also Dale Clegg, A Standard Comparison, 24 Fingerprint
Whorld 99, 101 (July 1998) (“I am personally aware of wrong identifications having occurred
under both ‘non-numeric’ and ‘16 point’ approaches to fingerprint identification.”).

Of
the 156 examiners who participated, only 68 (44%) were able to both correctly identify the five
latent print impressions that were supposed to be identified, and correctly note the two
elimination latent prints that were not to be identified.  Even more significantly, 34 of these
examiners (22%) made erroneous identifications on one or more of the questioned prints for a
total of 48 misidentifications.  Id.   Erroneous identifications occurred on all seven latent prints
that were provided, including 13 errors made on the five latent prints that could be correctly
identified to the supplied suspects.  Id.   In addition, one of the two elimination latents was
misidentified 29 times.  Id.
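The percentages quoted from the 1995 CTS report follow directly from the raw counts. As a quick arithmetic check only (a sketch using the figures cited above, rounded to the nearest whole percent):

```python
# Raw counts from the 1995 CTS latent print proficiency exam, as cited above.
participants = 156
fully_correct = 68       # correctly handled all five latents and both eliminations
examiners_in_error = 34  # made one or more erroneous identifications

# Percentages as cited in the brief (nearest whole percent)
print(round(100 * fully_correct / participants))      # 44
print(round(100 * examiners_in_error / participants)) # 22
```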
“The results of the 1995 proficiency study . . . raise serious questions about the
trustworthiness of fingerprint analysis.”  Paul Giannelli and Edward Imwinkelried, 1 Scientific
Evidence (3d ed. 1999) § 16-9(E), p. 784.  These shockingly poor results, moreover, could not be
blamed on the test.  In fact, as Professors Giannelli and Imwinkelried point out, “[a]n especially
troubling aspect of the test was that it was not blind, since the participating examiners were
surely on notice that they were being tested and such notice should have put them on their guard
to do their very best.”  Id. at p. 741 n.18.  The 1995 proficiency exam was recognized as being “a
more than satisfactory representation of real casework conditions.”  Grieve, Possession of Truth,
supra, at 524.  The test was designed, assembled, and reviewed by representatives of the
International Association of Identification.  Id.  As Mr. Grieve correctly observed, a
“proficiency test composed of seven latents and four suspects was considered neither overly
demanding or unrealistic.”  Id.  Accordingly, the dreadful results are a matter of significant
concern.  As Mr. Grieve has written:
Reaction to the results of the CTS 1995 Latent Print Proficiency
Test within the forensic science community has ranged from shock
to disbelief.  Errors of this magnitude within a discipline singularly
admired and respected for its touted absolute certainty as an
identification process have produced chilling and mind-numbing
realities.  Thirty-four participants, an incredible 22% of those
involved, substituted presumed but false certainty for truth.  By
any measure, this represents a profile of practice that is
unacceptable and thus demands positive action by the entire
community.
Grieve, Possession of Truth, supra, at 524-25 (Ex. 9 at 524-25). 
Despite Mr. Grieve’s call for “positive action,” the poor results have continued unabated
on the more recent proficiency exams.  On the 1998 test, for example, only 58% of the
participants were able to correctly identify all of the latents and to recognize the two elimination
latents as being unidentifiable.  Collaborative Testing Services, Inc., Report No. 9808, Forensic
Testing Program: Latent Prints Examination 2 (1998).  Even more disturbing was the fact that
21 erroneous identifications were made by 14 different participants.  Id.[16]
Having failed to address any of these proficiency tests in advancing its claim of a zero
error rate, the government in Mitchell took the remarkable position that “practitioner error is not
relevant to the validity of the science and methodology under Daubert . . . .”  Government’s
Response to the Defendant’s Motion to Compel the Government to Produce Written Summaries
for All the Experts That It Intends to Call at the Daubert Hearing at 3 n.3.  The government,
however, failed to explain why practitioner error is irrelevant under Daubert.  Nor did the
government explain how an error rate for a particular technique may be assessed other than
through its real-life practitioners.    Not surprisingly, courts have, in fact, looked at studies of
examiner error rate in determining whether proffered “scientific” evidence is reliable.  See, e.g.,
United States v. Smith, 869 F.2d 348, 353-54 (7th Cir. 1989) (studies of “actual cases examined
by trained voice examiners” considered by court in deciding admissibility).  The Seventh
Circuit’s decision in Smith was, as noted above, cited with approval by the Supreme Court in
Daubert.  See Daubert, 509 U.S. at 594, 113 S. Ct. at 2797; People v. Leahy (1994) 8 Cal. 4th
587, 609 (To be qualified as a Kelly expert on an HGN test, witness must have “some
understanding of the processes by which alcohol ingestion produces nystagmus, how strong the
correlation is, how other possible causes might be masked, what margin of error has been shown
in statistical surveys, and a host of other relevant factors...”); see also Saks, supra, at 1090
                                                
[16] On the 1997 exam, 16 false identifications were made by 13 participants.  Collaborative
Testing Services, Inc., Report No. 9708, Forensic Testing Program: Latent Prints Examination 2
(1997).  Six misidentifications were made on the 1996 exam.  Collaborative Testing Services,
Inc., Report No. 9608, Forensic Testing Program: Latent Prints Examination 2 (1996).
(“Even if forensic metaphysicians were right, that no two of anything are alike, for fact finders in
earthly cases, the problem is to assess the risk of error whatever its source, be that in the basic
theory or in the error rates associated with human examiners or their apparatus.”);  John
Thornton, The General Assumptions and Rationale Of Forensic Identification, in 2 Modern
Scientific Evidence: The Law and Science of Expert Testimony § 20-6.2, p. 19 (“Proficiency
testing is a means by which [reliability, validity, precision, and accuracy] can be
measured. . . .  Proficiency testing [is] the most appropriate means for the identification of
sources of error . . . .”).  Accordingly, the argument that practitioner error rates are irrelevant is
without merit.
In sum, any claim of a zero error rate is plainly at odds with reality.   While no controlled
studies have been done to determine an error rate, it would appear from the proficiency testing
done in the field that the rate is in fact substantial.   In this regard, it must be remembered that
under Kelly it is the government’s burden to establish the scientific reliability and general
acceptance of the expert evidence that it seeks to admit.   With respect to the error rate factor, the
government plainly has not met that burden.  See United States v. Starzecpyzel, 880 F. Supp.
1027, 1037 (S.D.N.Y. 1995) (“Certainly, an unknown error rate does not necessarily imply a
large error rate[;] [h]owever, if testing is possible, it must be conducted if forensic document
examination is to carry the imprimatur of ‘science.’”).
5.  There Are No Objective Standards to Govern Latent Fingerprint Comparisons
Latent fingerprint examiners in the United States are currently operating in the absence of
any uniform objective standards.   The absence of standards is most glaring with respect to the
ultimate question of all fingerprint comparisons: What constitutes a sufficient basis to make a
positive identification?  As discussed above, the official position of the IAI, since 1973, is that
no minimum number of corresponding points of identification is required for an identification.
The SWGFAST Quality Assurance Guidelines of the FBI are in agreement.  According to the
Introduction to the Guidelines, “[t]here is no scientific basis for requiring that a minimum
number of corresponding friction ridge features be present in two impressions in order to effect
an identification.”
Instead, the determination of whether there is a sufficient basis for an identification is
left entirely to the subjective judgment of the particular examiner.  Indeed, in his recent book,
David Ashbaugh repeatedly stresses that “[t]he opinion of individualization or identification is
subjective.”  Ashbaugh, Basic and Advanced Ridgeology at 103; see also David Stoney,
Fingerprint Identification: Scientific Status, in 2 Modern Scientific Evidence: The Law and
Science of Expert Testimony § 21-2.1.2, at 65 (“In fingerprint comparison, judgments of
correspondence and the assessment of differences are wholly subjective: there are no objective
criteria for determining when a difference may be explainable or not.”).
While the official position of the IAI and SWGFAST, as supported by Mr. Ashbaugh, is
that there is no basis for a minimum point requirement, many fingerprint examiners in the United
States continue to employ either their own informal point standards or those that have been set
by the agencies that they work for.  Simon Cole, What Counts For Identity? The Historical
Origins Of The Methodology Of Latent Fingerprint Identification, 12 Sci. In Context 1, 3-4
(Spring 1999) [hereinafter Cole, What Counts For Identity?].  This variability of standards is
confirmed by Professors Giannelli and Imwinkelried: “There is no consensus on the number of
points necessary for an identification.  In the United States, one often hears that eight or ten
points are ‘ordinarily’ required.  Some local police departments generally require 12 points.”
Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d ed. 1999) § 16-7(A), p. 768.
Prior to the IAI’s 1973 proclamation, the informal standard most commonly employed in
the United States was 12.  See FBI, Fingerprint Identification, supra, at 6. To this day, FBI latent
fingerprint experts testify that “[i]n the FBI latent fingerprint section, at present time, there is no
set number of points. However, we have an administrative rule which is on the books which
requires any latent print of less than 12 points of identity--and that being the dots, the end of
ridges or enclosures--requires supervisory approval before it can be reported in a report that it is
in fact an identification.” United States v. Timothy McVeigh, Testimony of Special Agent Louis
Hupp, Reporter’s Transcript of Proceedings, Vol. 68, April 29, 1997,
http://www.papillion.ne.us/mriddle/okctr/4-29-1.htm; see also People v. Clarence Powell, S. F.
Muni Ct. No. 167003, Testimony of Inspector Michael Byrne, Preliminary Hearing Transcript,
April 5, 1978, p. 70  (“Now, the San Francisco Police Crime Laboratory for years we have liked
to testify on 12 points...We stop at 12. We are completly satisfied at 12 but...that doesn’t mean
we will not testify on nine or eight or--I have never done it myself--I have testified to ten but I
don’t think I have gone to nine yet.”).  
In addition, while there is no uniform identification standard in the United States, “many”
other countries have, in fact, set such standards based on a minimum number of points of
comparison.  Ashbaugh, Basic and Advanced Ridgeology, supra, at 6-7. As indicated above, in
England, many examiners use 16 points as a rule of thumb and triple check the results. “In
France, the required number used most often is 24 while the number is 30 in Argentina and
Brazil.”  Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d ed. 1999) § 16-7(A),
p. 768. Italy has a minimum standard of 17 matching ridge characteristics. Christophe Champod,
Numerical Standards and “Probable” Identifications, 45 J. of Forensic Identification 136, 138
(1995).  The primary purpose of establishing such standards is to try to insure against erroneous
identifications.  K. Luff, The 16-Point Standard, 16 Fingerprint Whorld 73 (Jan. 1990); see also
Ashbaugh, Basic and Advanced Ridgeology, supra, at 102 (“[T]he static training threshold is an
acceptable practice as a safeguard and permits one to gain experience and confidence with a
reduced fear of committing an error.”).  Such a standard is legally necessary to ensure “forensic
reliability” as that term is used in Venegas.
As commentators have recognized, the question of whether there should be a minimum
point standard for latent print identifications has bitterly divided the fingerprint community.  See
Cole, What Counts For Identity, supra, at 1.  While latent print examiners have somehow
managed to maintain a united front in the courtroom, they have been at odds in the technical
literature.  Id. at 6.  Mr. Ashbaugh, for example, has written that “it is unacceptable to use the
simplistic point philosophy in modern day forensic science.”  Ashbaugh, Premises, supra, at
513.[17]  As Mr. Ashbaugh has correctly recognized, the selection of any particular point standard is
based, not on scientifically conducted probability studies, but “through what can best be
described as an ‘educated conjecture’.”  Ashbaugh, Basic and Advanced Ridgeology, supra, at 2;
see also Ashbaugh, Premises, supra, at 512  (stating that “superficial and unsubstantiated quips
became the methodology of the point system”).   
The problem, however, is that while Mr. Ashbaugh is correct that the point system, as
employed by fingerprint examiners over the past hundred years, is scientifically invalid, neither
Mr. Ashbaugh, nor any other member of the fingerprinting community, has advanced a
scientifically sound alternative.   Here, for example, is Mr. Ashbaugh’s explanation as to how a
latent print examiner, in the absence of a minimum point standard, is supposed to know when a
sufficient basis exists to make an identification:
A frequently asked question is how much is enough?  The opinion
of individualization or identification is subjective.  It is an opinion
formed by the friction ridge identification specialist, based on the
friction ridge formations found in agreement during comparison.
The validity of the opinion is coupled with an ability to defend that
position, and both are founded in one’s personal knowledge,
ability and experience.
***
How much is enough? Finding adequate friction ridge formations
in sequence, that one knows are specific details of the friction skin,
and in the opinion of the friction ridge identification specialist
there are sufficient uniqueness within those details to eliminate all
other possible donors in the world, is considered enough.  At that
point individualization has occurred and the print has been
identified.  The identification was established by the agreement of
friction ridge formations, in sequence, having sufficient uniqueness
to individualize.
                                                
[17] Of course, the identification in the instant case appears to have been made by Ms.
Chong on just such a simplistic counting of points.
Ashbaugh, Basic and Advanced Ridgeology, supra, at 103.
The utter meaninglessness of this explanation speaks for itself.   Mr. Ashbaugh’s prior
writings on this subject provide little in the way of additional insight.  He has stated, for
example, that while “in some instances we may form an opinion on eight ridge characteristics [,]
[i]n other instances we may require twelve or more to form the same opinion.”  David Ashbaugh,
The Key to Fingerprint Identification, 10 Fingerprint Whorld 93, 93 (April 1985).  Mr.
Ashbaugh’s explanation for this sliding scale is that some ridge characteristics are more unique
than others.  Id. at 94, 95.  But, as discussed above, no weighted measures of the different
characteristics have ever been adopted by the fingerprint community. As California Department
of Justice fingerprint expert Dusty Clark has explained, “[t]he repeatability of the finite detail
that is utilized in the comparison process has never been subjected to a definitive study to
demonstrate that what is visible is actually a true 3rd level detail or an anomaly. . . . Ridgeology
hasn't been scientifically proven to be repeatable, and it's application is not standardized.”  Dusty
Clark, What’s The Point (Dec. 1999), http://www.latent-prints.com/id_criteria_jdc.htm.
Accordingly, as Mr. Ashbaugh has recognized, the particular examiner’s determination of
whether eight or twelve matching characteristics is sufficient in a particular case is entirely
“subjective.”  Ashbaugh, Basic and Advanced Ridgeology, supra, at 103.  But as Mr. Clark again
points out, “[a] subjective analysis without quantification makes the identification process as
reliable as astrology.  If one does not quantify, is it an ID when a warm and fuzzy feeling
overwhelms you?  What happens if my warm and fuzzy feeling is different that yours? . . .”  Id.
Ashbaugh and others place principal reliance on the experience and training of the
analyst as a hedge against erroneous results.  However, as indicated above, Evett and Williams
found in an extensive collaborative study that “[s]tatistical analysis did not suggest any
association between the number of [correct] identifications made by an expert and his/her length
of experience.”  I. W. Evett and R. L. Williams, A Review of the Sixteen Point Fingerprint Standard
in England and Wales, (1996) 12(1) The Print 1, 7.  In their study, the FBI and other North
American experienced experts were sent 10 sets of samples, only 6 of which should have
resulted in a court quality identification and the tenth of which came from two different
individuals.  Significantly, “four experts at the FBI were unanimous in deciding that there were 9
court quality identifications, the tenth comparison being not identical.  Most of the North
American experts decided on 8 or 9 full identifications.”  Id. at 8.  This study perfectly illustrates
the truth of Dr. John Thornton’s observation that
[S]ome experts exploit situations where intuitions or mere suspicions can be
voiced under the guise of experience.  When an expert testifies to an opinion, and
bases that opinion on “years of experience”, the practical result is that the witness
is immunized against effective cross examination.  When the witness testifies that
“I have never seen another similar instance in my 26 years of experience . . . ,” no
real scrutiny of the opinion is possible.  No practical means exists for the
questioner to delve into the extent or quality of that experience.  Many witnesses
have learned to invoke experience as a means of circumventing the responsibility
of supporting an opinion with hard facts.  For the witness, it eases cross-
examination.  But it also removes the scientific basis for the opinion.
Experience is neither a liability nor an enemy of the truth; it is a valuable
commodity, but it should not be used as a mask to deflect legitimate scientific
scrutiny, the sort of scrutiny that customarily is leveled at scientific evidence of all
sorts.  To do so is professionally bankrupt and devoid of scientific legitimacy, and
courts would do well to disallow testimony of this sort.  Experience ought to be
used to enable the expert to remember the when and the how, why, who, and
what.  Experience should not make the expert less responsible, but rather more
responsible for justifying an opinion with scientific facts.
John Thornton, The General Assumptions and Rationale Of Forensic Identification, in 2 Modern
Scientific Evidence: The Law and Science of Expert Testimony § 20-5.5, p. 17.
  
The lack of uniform standards for latent print comparisons extends well beyond the
question of what ultimate standard should apply for a positive identification.  Objective standards
are lacking throughout the entire comparison process.   Take for example, the simple issue of
how points of similarity should be counted.  When examiners find themselves struggling to reach
a certain point criterion, they often engage in a practice known as “pushing the mark.”  Clegg,
supra, at 99.  Pursuant to this practice, a single characteristic, such as a short ridge, is counted
not as one point, but rather as two separate ridge endings.  Id.  Or, a single enclosure is counted
as two bifurcations.  See Robert Olsen, Friction Ridge Characteristics and Points of Identity: An
Unsolved Dichotomy of Terms, 41 J. Forensic Identification 195 (1991) (IAI has declared in a
formal report that an enclosure should be counted as a single point rather than as two separate
bifurcations.).  While the IAI has declared that points should not be counted in this fashion, it is
nevertheless commonly done, as can be seen by the work of the FBI examiner in the Mitchell
case, where an enclosure was counted as two bifurcations.  The obvious danger of this practice,
as one examiner has candidly recognized, is its “potential to generate error . . . .” Clegg, supra, at
101.  
The lack of objective standards in fingerprint comparisons can also be seen with respect
to the so-called “one dissimilarity rule.”  See John I. Thornton, The One-Dissimilarity Doctrine
in Fingerprint Identification, 306 Int’l Crim. Police Rev. 89 (March 1977).  Pursuant to this
doctrine, if two fingerprints contain a single genuine dissimilarity then the prints cannot be
attributed to the same finger or individual.  Id.  This doctrine is well recognized in the
fingerprint community and has been endorsed in the writings of the government’s own experts. 
David Ashbaugh, Defined Pattern, Overall Pattern and Unique Pattern, 42 J. of Forensic
Identification 505, 510 (1992)  [hereinafter Ashbaugh, Defined Pattern].  The doctrine, however,
is effectively ignored in practice.   As Dr. Thornton has recognized, once a fingerprint examiner
finds what he or she believes is a sufficient number of matching characteristics to make an
identification, the examiner will then explain away any observed dissimilarity as being a product
of distortion or artifact:
Faced with an instance of many matching characteristics and one
point of disagreement, the tendency on the part of the examiner is
to rationalize away the dissimilarity on the basis of improper
inking, uneven pressure resulting in the compression of a ridge, a
dirty finger, a disease state, scarring, or super-imposition of the
impression.  How can he do otherwise?  If he admits that he does
not know the cause of the disagreement then he must immediately
conclude that the impressions are not of the same digit in order to
accommodate the one-dissimilarity doctrine.  The fault here is that
the nature of the impression may not suggest which of these
factors, if any, is at play.  The expert is then in an embarrassing
position of having to speculate as to what caused the dissimilarity,
and often the speculation is without any particular foundation.
The practical implication of this is that the one-dissimilarity
doctrine will have to be ignored.  It is, in fact, ignored anyway by
virtue of the fact that fingerprint examiners will not refrain from
effecting an identification when numerous matching characteristics
are observed despite a point of disagreement.  Actually, the one-
dissimilarity doctrine has been treated rather shabbily.  The
fingerprint examiner adheres to it only until faced with an
aberration, then discards it and conjures up some fanciful
explanation for the dissimilarity. 
Thornton, supra, at 91.
Dr. Thornton has also noted an additional problem which plagues those few police
departments which adhere to an illusory standard of eight points of identification. As he explains,
under this rationale
[E]ight matching characteristics, if they are clear and unambiguous, will serve for
purposes of identification.  A problem, however, is that if the evidence print can be
gleaned for no more than eight characteristics, it is likely that the print suffers
from some lack of clarity.  Evidence fingerprints that possess only eight
characteristics, but with those eight characteristics being brilliant and unequivocal,
are not commonly encountered.  So at the same time that the criterion for
identification is being relaxed, the ambiguity of each characteristic is being
augmented.
John Thornton, The General Assumptions and Rationale Of Forensic Identification, in 2 Modern
Scientific Evidence: The Law and Science of Expert Testimony § 20-9.2.5, p. 31.
The absence of real standards in the fingerprint field also can be seen with respect to the
issue of verification.  Independent verification is considered an essential part of the
identification process.  See SWGFAST Quality Assurance Guidelines, Guideline 1.1 (“All
identifications must be verified by a qualified latent print examiner.”).  But, in real practice,
fingerprint agencies sometimes “waive the verification requirement.”  William Leo,
Identification Standards - The Quest for Excellence, Cal. Identification Dig. (December 1995).
Moreover, as revealed by one of the government’s experts in the Mitchell case, some examiners
will simply go from one supervisor to another until a desired verification is obtained.   Pat
Wertheim, The Ability Equation, 46 J. of Forensic Identification 149, 153 (1996).  Mr.
Wertheim candidly recounts in this article his experience of shopping for a supervisor so as to
obtain the positive verification that he believed was warranted.  Id. 
More subtle, but no more scientifically acceptable, is the verification process used in this
case. Ms. Chong testified that after she made her identification she wrote up a report and gave it
to Ken Moses who was then asked to verify her results. (RT 254). The obvious problem is that
Mr. Moses was given access to his colleague’s report before he was asked to do the verification.
Even Mr. Ashbaugh condemns such a biasing process.  Ashbaugh, Basic and Advanced
Ridgeology, supra, at 108 (“The latent print is always analyzed first, before comparison to the
exemplar.  This rule ensures an uncontaminated analysis of the unknown friction ridge detail.
Comparisons conducted in this fashion ensure objectivity and prevent contamination through
previous knowledge.”); see also Y. Mark and D. Attias, What Is the Minimum Standard for
Characteristics for Fingerprint Identification, (1996) Fingerprint Whorld 148 (“We wish to
emphasize that the determination of a positive identification by one of our experts is made
independently from other experts and from the circumstances of the case.”).  Violation of this
principle no doubt explains how two separate misidentifications were made in England, despite
the presence of triple verification. (See supra at 49-50).
Finally, the lack of standards in the fingerprint community extends to the training and
experience requirements for latent print examiners.  To put it simply, no such requirements
currently exist.   See Leo, supra  (recognizing need for “minimum training and experience
standards” for latent print examiners).  As one of the government’s experts in Mitchell has
recognized, “people are being hired directly into latent print units without so much as having
looked at a single fingerprint image.”  Wertheim, supra, at 152 (Ex. 41 at 152).   Once hired, the
training that examiners receive is typically minimal.   Consider what government expert David
Grieve has said on the subject of training:
The harsh reality is that latent print training as a structured,
organized course of  study is scarce.  Traditionally, fingerprint
training has centered around a type of apprenticeship, tutelage, or
on-the-job training, in its best form, and essentially a type of self
study, in its worst.   Many training programs are the “look and
learn” variety, and aside from some basic classroom instruction in
pattern interpretation and classification methods, are often
impromptu sessions dictated more by the schedule and duties of
the trainer than the needs of the student.  Such apprenticeship is
most often expressed in terms of duration, not in specific goals and
objectives, and often end with a subjective assessment that the
trainer is ready.
David L. Grieve, The Identification Process: The Quest For Quality, 40 J. of Forensic
Identification 109, 110-111 (1990).
As Mr. Grieve has recognized, the direct result of this poor training is deficient
examiners.  “The quality of work produced is directly proportional to the quality of training
received.”  Id.  See also David L. Grieve, The Identification Process: Traditions in Training, 40 J.
of Forensic Identification 195, 196 (1990) (“[t]hat there are examiners performing identification
functions who are not qualified and proficient . . . unfortunately has been too well established”);
Robert D. Olsen, Cult of the Mediocre, 8 Fingerprint Whorld 51 (Oct. 1982) (“There is a
definite need for us to strengthen our professional standards and rise above the cult of the
mediocre.”).
A final example of the lack of standards is the alleged requirement of annual proficiency
testing.  SWGFAST Quality Assurance Guideline 7 provides that “[a] proficiency test should be
administered to each latent print examiner annually.”  Similarly, the San Francisco Police
Department’s own lab policy manual provides that “all C.S.I. (Crime Scene Investigation) staff who
report fingerprint comparisons shall complete at least one proficiency test each year.”  (RT 232).
Yet, since she first started doing fingerprint work in 1969, Ms. Chong had taken only two
proficiency tests, one about fifteen to seventeen years ago and the second about five years
ago.  (Id.)  She had never even heard of her own Department’s requirement for annual testing.
(Id.)  According to Ms. Chong, the earlier test was an “FBI TEST THAT TIME, I TOOK IT AND
I DID THE LATENT COMPARISON WITH I THINK THE HAND PRINT INSTEAD OF
THE FINGERPRINT THAT TIME, AND THEN LATER THE ANSWER WAS SCORE AS
WRONG THAT TIME BUT THEN CHIEF TOM MURPHY . . . RECORRECTED MY WORK,
THEN HE SAID I WAS RIGHT AT THE END.”  (RT 231).  Such testimony makes a mockery of
the dubious theory that the training and proficiency requirements of the profession have done
away with the need for an objective  standard of analysis. 
Moreover, the lack of training and standards has not only resulted in a plethora of
deficient examiners, but dishonest ones as well.  New York police officers have fabricated
fingerprint evidence in numerous cases. See Mark Hansen, Trooper's Wrongdoing Taints Cases,
A.B.A. J., Mar. 1994, at 22; Ronald Sullivan, Trooper's 2d Tampering Charge, N.Y. Times, Jan.
6, 1994, at B9.  This fiasco came to light when a New York State policeman bragged in a CIA
interview about his fabrication skills. In January 1991, the CIA passed the information on to the
FBI. It took over a year, however, for an investigation to be commenced. The special prosecutor
found that up to forty cases may have been tainted, and he "wonder[ed] why more prosecutors in
the region didn't grow suspicious about the sudden avalanche of good fingerprint evidence."
Gary Taylor, Fake Evidence Becomes Real Problem, Nat'l L.J., Oct. 9, 1995, at A1, A28. One of
the experts in the Mitchell case, Pat Wertheim, estimates that there have been “hundreds or even
thousands” of cases of forged and fabricated latent prints.  Pat Wertheim, Detection of Forged
and Fabricated Latent Prints, 44 J. of Forensic Identification 653, 675 (1994)  (“A disturbing
percentage of experienced examiners polled by the author described personal exposure to at least
one of these cases during their careers.”).
In sum, latent print examiners operate without the benefit of any objective standards to
guide them in their comparisons.  There also are no objective standards or minimum
qualifications with respect to their hiring, training, and proficiency testing.  Accordingly, another
indicium of good science is critically lacking in this case.
6.
There Is No General Consensus That Fingerprint Examiners Can Reliably
Make Identifications on the Basis of Ten Matching Ridge Characteristics
As indicated at the outset of this motion, the relevant question in this case is not whether
entire fingerprints are unique and permanent, but whether there is a general consensus that
fingerprint examiners can make reliable identifications on the basis of only a limited number of