
characteristics identified by the fingerprint examiner in the case at bar, and second, that latent
fingerprint examiners can reliably make identifications from small distorted latent fingerprint
fragments that reveal only a limited number of basic ridge characteristics.  
That these premises have not been empirically validated has, in the wake of Daubert,
been repeatedly recognized by forensic science experts.   See, United States Department of
Justice, Forensic Sciences: Review of Status and Needs (1999), p. 29 (“How can examiners
prove that each individual has unique fingerprints? There are certainly statistical models that
support this contention. Friction ridge print evidence has historically been ‘understood’ to hold
individuality based on empirical studies of millions of prints. However, the theoretical basis for
this individuality has had limited study and needs a great deal more work to demonstrate that
physiological/developmental coding occurs for friction ridge detail, or that this detail is purely an
accidental process of fetal development. Studies to date suggest more than an accidental basis for
the development of print detail, but more work is needed.”); Paul Giannelli and Edward
Imwinkelried, 1 Scientific Evidence (3d ed. 1999) § 16-9, p. 784 (“The criteria used by
examiners are ‘the product of probabilistic intuitions widely shared among fingerprint examiners,
not of forensic research.’”); Michael J. Saks, Merlin and Solomon: Lessons from the Law’s
Formative Encounters With Forensic Identification Science, 49 Hastings L.J. 1069, 1105-06
(1998)  (“Although in principle fingerprint identification depends upon an objective,
probabilistic inquiry, its practitioners use no probability models and have no probability data to
use[;] they rely on intuitions and assumptions that have not been tested rigorously . . . .”);
Margaret A. Berger, Procedural Paradigms For Applying the Daubert Test, 78 Minn. L. Rev.
1345, 1353 (1994)  (“Considerable forensic evidence [such as fingerprinting] made its way into
the courtroom without empirical validation of the underlying theory and/or its particular
application.”).   
The lack of testing has also been recognized by those within the fingerprint community. 
Dr. David Stoney, a leading scholar and fingerprint practitioner, has written: 
[T]here is no justification [for fingerprint identifications] based on
conventional science: no theoretical model, statistics or an
empirical validation process. 
Efforts to assess the individuality of DNA blood typing make an
excellent contrast.   There has been intense debate over which
statistical models are to be applied, and how one should quantify
increasingly rare events.   To many, the absence of adequate
statistical modeling, or the controversy regarding calculations
brings the admissibility of the evidence into question.   Woe to
fingerprint practice were such criteria applied!  As noted earlier,
about a dozen models for quantification of fingerprint individuality
have been proposed.   None of these even approaches theoretical
adequacy, however, and none has been subjected to empirical
validation . . . .  Indeed, inasmuch as a statistical method would
suggest qualified (non-absolute) opinions, the models are rejected
on principle by the fingerprint profession. 
****
Much of the discussion of fingerprint practice in this and preceding
sections may lead the critical reader to the question “Is there any
scientific basis for an absolute identification?”  It is important to
realize that an absolute identification is an opinion, rather than a
conclusion based on scientific research.  The functionally
equivalent scientific conclusion (as seen in some DNA evidence)
would be based on calculations showing that the probability of two
different patterns being indistinguishably alike is so small that it
asymptotes with zero . . . . The scientific conclusion, however,
must be based on tested probability models.   These simply do not
exist for fingerprint pattern comparisons.
David Stoney, Fingerprint Identification, in Modern Scientific Evidence: The Law and Science
of Expert Testimony § 21-2.3.1, at 72 (David L. Faigman et al. eds., West 1997).
The lack of testing in the fingerprint field also is reflected in an official report that the
International Association for Identification (“IAI”) issued in 1973.   The IAI had three years
earlier formed a “Standardization Committee” for the purpose of determining “the minimum
number of friction ridge characteristics which must be present in two impressions in order to
establish positive identification.”  International Association for Identification, IAI
Standardization Committee Report 1 (1973).  After three years of examining the issue,
however, the Committee was unable to provide a minimum number.  Instead, the IAI issued a
Report declaring that “no valid basis exists for requiring a predetermined minimum number of
friction ridge characteristics which must be present in two impressions in order to establish
positive identification.”  Id. at 2.   Of course, the reason that the IAI did not have a “valid” basis
to set a minimum number was that no scientific testing as to this issue had ever been performed.  
See Stoney, supra, (Ex. 15 at 71) (“Indeed, the absence of valid scientific criteria for establishing
a minimum number of minutiae has been the main reason that professionals have avoided
accepting one.”).  The IAI effectively conceded as much when it strongly recommended in the
Report that “a federally funded in depth study should be conducted, in order to establish
comprehensive statistics concerning the frequency, type and location of ridge characteristics in a
significantly large database of fingerprint impressions.”  To date, however, no such research has
been conducted.
Perhaps the strongest proof regarding the lack of empirical testing comes directly from
the government’s submission in United States v. Byron C. Mitchell (E.D. Pa. 1999). Despite
having had months to prepare this submission, and despite having consulted with numerous
fingerprint “experts” from around the world, the government was unable to point to any relevant
scientific testing concerning either of the two fundamental premises upon which the fingerprint
identification in this case is based.   Instead, the government  referred only to certain embryology
studies that have traced the fetal development of fingerprints and to certain “twin” studies which
have demonstrated that twins possess different fingerprints.  Government’s Combined Report To
The Court And Motions In Limine Concerning Fingerprint Evidence (hereinafter Gov’t Mem.) at
15-16, 18-19, http://www.usao-edpa.com/daubert.html.  These studies, however, demonstrate,
at most, that fingerprints are subject to random development in the embryo and that the
individual ridge characteristics are not genetically controlled; they do not address the
fundamental premises at issue here -- the likelihood that prints from different people may show a
limited number of ridge characteristics in common, and the ability of latent print examiners to
make accurate identifications from small distorted latent fingerprint fragments.
The government also pointed in its memorandum to certain theoretical statistical claims
that have been made with respect to the probability of two different people having entire
fingerprint patterns in common.  (See Gov’t Mem. at 21.) (citing Francis Galton, Fingerprints
110 (1892)  and Bert Wentworth, Personal Identification 318-20 (1932)).  These theoretical
models, however, have been severely criticized and, more importantly, they have never been
empirically tested.  See Stoney, supra, at 72 (“As noted earlier, about a dozen models for
quantification of fingerprint individuality have been proposed[;] none of these even approaches
theoretical adequacy, however, and none has been subjected to empirical validation.”).  See also
Stoney & Thornton, supra; I. W. Evett and R.L. Williams, A Review of the Sixteen Point
Fingerprint Standard in England and Wales, (1996) 12(1) The Print 1, 6,
http://www.scafo.org/library/120101.html (“It is tempting to believe that the problem of deciding
on a numerical standard for identification can be solved by statistical models. . . .  However, it is
recognized by all that such arguments are overly simplistic.”).  Accordingly, the “models
[referred to by the government] occupy no role in the . . . professional practice of fingerprint
examination.”  Stoney, Fingerprint Identification, supra, § 21-2.3.1 at 72 (“Indeed, inasmuch as a
statistical method would suggest qualified (non-absolute) opinions, the models are rejected on
principle by the fingerprint profession.”).11

11  The inadequacies of the models referred to by the government are readily evident.  For
example, Mr. Wentworth states:
There is, however, in all of these problems involving chance, an
important factor which in our present lack of precise knowledge
we have to assume; and that is the exact, or even approximate,
percentage of occurrences of the different details. . . .  We find in
the fingerprint in question a fork, opening downward. . . .  We
have no definite data for knowing the percentage of occurrence of
this detail . . . but the variability of the ridges and their detail is so
great that we may be warranted in asserting that it is small.
Bert Wentworth & Harris H. Wilder, Personal Identification (2d ed. 1932) at 318. 
Another problem concerns the lack of empirical proof that the ridge details are
statistically independent of one another. Two scientists studying this problem in the field of
biometrics have pointed out that
“(t)he underlying assumption made (in the statistical models) is that the content
of each cell is a random variable which is independent of all other cells. The
implication is that any configuration of the same set of features has the same
probability of occurrence meaning, for instance, that a tightly clustered pack of
minutiae is just as likely as the same set of minutiae being distributed uniformly
over the print. Although the (model) gives meaningful results, empirically the
independence assumption is not valid because some configurations of Galton
features are much less likely than others.”
A.R. Roddy and J.D. Stosz, Fingerprint Features - Statistical Analysis and System Performance
Estimates, from The Proceedings of the Institute of Electrical and Electronics Engineering, Sept. 1997.

That the theoretical statistical models referred to by the government in Mitchell provide
no scientific basis for latent fingerprint identifications can also be seen from the writing of the
government’s own expert David Ashbaugh.  In his new book on the subject of fingerprints, Mr.
Ashbaugh does not even refer to any of these theoretical models, though one of Mr. Ashbaugh’s
stated goals in writing the book is to “address the scientific . . . basis of the identification
process.”  Ashbaugh, Basic and Advanced Ridgeology, supra at 8-9.12  Moreover, Mr.
Ashbaugh acknowledges that there is currently no basis to provide opinions of probability with
respect to fingerprints.  Id. at 147 (“The so-called probability identifications of friction ridge
prints is extremely dangerous, especially in the hands of the unknowing . . . Extensive study is
necessary before this type of probability opinion could be expressed with some degree of
confidence and consistency . . . .”).  Ashbaugh’s own theory of uniqueness based on “poroscopy”
has been disproven by biometric scientists.  See, A.R. Roddy and J.D. Stosz, Fingerprint
Features - Statistical Analysis and System Performance Estimates, from The Proceedings of the
Institute of Electrical and Electronics Engineering, Sept. 1997, Vol. 85, No. 9, pp. 18, 25,
http://www.biometrics.org/REPORTS/IEEE_pre.pdf (“Ashbaugh...contends that pore pods
occur regularly, but the position of the pore within the pod is a random variable. In addition, he
assumes independence between pores...(T)he underlying assumption of independence makes
uniqueness calculations possible. In reality, though, the independence assumption is not accurate.
There appears to be a definite influence on a pore’s position depending on the relative positions
of the neighboring pores. If the independence assumption is not valid, then the assumption that
all possible configurations of N pores are equally likely is also not valid.”).

12  Mr. Ashbaugh, like the government, points to the embryology studies as providing a
scientific basis for fingerprint identifications.  Ashbaugh, Basic and Advanced Ridgeology, supra
at 8, 38-54.  Like the government, though, Mr. Ashbaugh fails to explain how these studies relate
to the fundamental premises that underlie latent fingerprint identifications.
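The flaw that Roddy and Stosz identify can be illustrated with a short, purely hypothetical
calculation.  In the sketch below, the number of cells, the frequency of minutiae, and the number
of matching features are assumed figures chosen only for illustration; they are not drawn from
Roddy and Stosz or from any other authority cited in this memorandum.  The sketch shows only
that, once independence is assumed, every arrangement of the same features receives the identical
probability, so a tightly clustered configuration is treated as exactly as likely as one spread
uniformly over the print.

# Hypothetical illustration only; the figures below are assumed for the example and are
# not taken from Roddy and Stosz or from any other source cited in this memorandum.
from math import comb

CELLS = 100          # assumed number of cells dividing a latent print fragment
P_MINUTIA = 0.05     # assumed chance that any single cell contains a minutia
N_FEATURES = 10      # number of matching ridge characteristics at issue

# Under the independence assumption, one specific placement of the ten minutiae among the
# one hundred cells has probability p^10 * (1 - p)^90, and every placement receives that
# same value, whether the minutiae are clustered or dispersed.
p_single_configuration = (P_MINUTIA ** N_FEATURES) * ((1 - P_MINUTIA) ** (CELLS - N_FEATURES))

# Number of distinct configurations the model treats as equally likely.
n_equally_likely = comb(CELLS, N_FEATURES)

print(f"Probability assigned to each configuration: {p_single_configuration:.3e}")
print(f"Configurations treated as equally likely:   {n_equally_likely:,}")

Because the observed frequencies of clustered and dispersed configurations are not in fact equal,
the uniform probability that such a model assigns to every configuration cannot be correct, which
is precisely the criticism quoted above.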
The lack of empirical testing that has been done in the field of fingerprints is devastating
to any claim that latent fingerprint identifications are scientifically based or generally accepted as
reliable.  See Daubert, 509 U.S. at 593, 113 S.Ct. at 2796 (“Scientific methodology today is
based on generating hypotheses and testing them to see if they can be falsified; indeed, this
methodology is what distinguishes science from other fields of human inquiry.”) (internal
quotations and citations omitted); People v. Soto, 21 Cal. 4th at 540 (The debate regarding the
effect of population substructuring on RFLP calculations was only resolved empirically by
“extensive literature in peer reviewed journals.”). The lack of testing, moreover, deprives latent
fingerprint comparisons from having true evidentiary significance.  Because of  the lack of
testing, a latent fingerprint examiner can, at best, correctly determine that a certain number of
ridge characteristics are in common in the two prints under comparison; the examiner, however,
has no basis to opine what the probability is, given the existence of these matching
characteristics, that the two prints were actually made by the same finger.   Instead, as discussed
further below, the latent print examiner can provide only a subjective opinion that there is a
sufficient basis to make a positive identification.
The necessity of being able to provide statistically sound probabilities has been
recognized in the analogous area of DNA.   See, People v. Venegas, 18 Cal.4th at 82 (“A
determination that the DNA profile of an evidentiary sample matches the profile of a suspect
establishes that the two profiles are consistent, but the determination would be of little
significance if the evidentiary profile also matched that of many or most other human beings.
The evidentiary weight of the match with the suspect is therefore inversely dependent upon the
statistical probability of a similar match with the profile of a person drawn at random from the
relevant population.”); People v. Wallace (1993) 14 Cal. App. 4th 651, 661 n.3 (stating that
without valid statistics DNA evidence is “meaningless”); People v. Barney (1992) 8 Cal. App.
4th 798, 802 (“The statistical calculation step is the pivotal element of DNA analysis, for the
evidence means nothing without a determination of the statistical significance of a match of
DNA patterns.”); People v. Axell (1991) 235 Cal. App. 3d 836, 866 (“We find that . . . a match
between two DNA samples means little without data on probability . . . .”).13  As forensic scientist
Dr. John Thornton has noted, “DNA analysts seemed to have embraced the premise that they
had best be very careful with their statistics, because, if they aren’t, their work will be rejected. If
this paradigm becomes the standard, then many other evidence categories, where statistical
underpinnings have yet to be developed, are in deep trouble.” John Thornton, The General
Assumptions and Rationale Of Forensic Identification, in 2 Modern Scientific Evidence: The
Law and Science of Expert Testimony § 20-9.2.1, p. 25 (D. Faigman, ed. 1997).
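The inverse relationship described in Venegas can be stated arithmetically.  The sketch below is
offered only as an illustration; the population size and the match probabilities are hypothetical
figures assumed for the example and are not drawn from any of the cases or authorities cited
above.

# Hypothetical illustration only; the population size and probabilities are assumed
# figures and are not drawn from any case or authority cited in this memorandum.

def expected_coincidental_matches(population: int, random_match_probability: float) -> float:
    """Expected number of unrelated people whose profile would also 'match' the
    evidentiary sample, given the probability of a coincidental match."""
    return population * random_match_probability

POPULATION = 30_000_000  # assumed size of the relevant population

# A very rare profile: essentially no one else is expected to match, so the match is probative.
print(expected_coincidental_matches(POPULATION, 1e-9))   # approximately 0.03

# A common profile: the same reported "match" is consistent with tens of thousands of people.
print(expected_coincidental_matches(POPULATION, 1e-3))   # approximately 30,000

For latent fingerprint comparisons no validated random-match probability exists at all, so no
analogous calculation can be performed, and the weight of a claimed “match” cannot be
quantified.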
2. The First Premise Of The Government’s Fingerprint Identification Evidence Not Only Has
Not Been Tested, It Has Been Proven False
The first major premise of the government’s fingerprint identification evidence -- that it
is impossible for fingerprints from two or more people to have as many as ten basic ridge
characteristics in common -- has not only not been scientifically tested, it has been proven false
by anecdotal evidence.   As noted above,  cases have been documented in which different
individuals have shared 10 and even 16 points of similarity. In England, a 16 point standard was
adopted after it was discovered that prints from two different individuals shared from 10 to 16
points of similarity. I. W. Evett and R.L. Williams, A Review of the Sixteen Point Fingerprint
Standard in England and Wales, (1996) 12(1) The Print 1, 4.14

13  As the British physicist William Thomson, Lord Kelvin, observed in 1883:
When you can measure what you are speaking about, and express
it in numbers, you know something about it; but when you cannot
measure it, when you cannot express it in numbers, your
knowledge is of a meager and unsatisfactory kind:  it may be the
beginning of knowledge, but you have scarcely, in your thoughts,
advanced to the stage of science.
(quoted in United States v. Starzecpyzel, 880 F. Supp. 1027 (S.D.N.Y. 1995)).
Even matches that are based on 16 points of
comparison and that have been verified by a second or third analyst have been shown to be in
error. See, James E. Starrs, Judicial Control Over Scientific Supermen: Fingerprint Experts and
Others Who Exceed The Bounds, (1999) 35 Crim. L. Bull. 234, 243-246 (describing two cases in
England in 1991 and 1997 in which misidentifications were made despite the fact that the British
examiners insist on 16 points for an identification and triple check fingerprint
identifications); Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d ed. 1999) §
16-1, pp. 740-741 (discussing same cases).  As Giannelli and Imwinkelried conclude, “(f)ingerprint
identification is not as infallible as many laypersons (and experts) assume it to be.” Id.    
       
Unfortunately, however, findings such as these have not been taken into consideration in
determining criteria for the identification of fingerprints in the United States.  As discussed
further below, there is currently no minimum standard for latent fingerprint identifications in this
country, and, as can be seen from the testimony of Ms. Chong, each examiner is free to arbitrarily
set his or her own minimum threshold and then to declare with absolute certainty that the latent
and known print came from the same source.  Most telling in this regard is Evett and Williams’s
observation that “(e)xperts [in Britain] appeared to have a particularly poor regard for the
fingerprint profession in the USA where there is no national standard. Cases of wrongful
identification which had been made by small bureaus in the USA were cited as being
                                                
14  Prints from two different individuals were originally determined to have 16 points of
similarity by New Zealand experts, but “(w)hen the illustration was examined at New Scotland
Yard, it was concluded that 6 of the points were not close enough to be considered similarities
but the remaining 10 were.” Id. at 2. Evett and Williams now claim that certain of the points of
similarity are fabricated, but if this is true, then one can only question how two preeminent
organizations missed what Evett and Williams call “patent” fabrications.  Id. at 9.  Moreover, the
authors point out that “(d)uring meetings with U.K. fingerprint officers the team heard, in
support of the 16 point standard, anecdotes-often second hand-of how experts had seen more than
8 points of comparison in prints from different individuals.” Id. at 9. In any case, the Evett and
Williams study was done before two documented cases of 16 points of comparison were
discovered in 1991 and 1997.
symptomatic of a poor system and the dominant view was that such unfortunate events would
not have occurred had there been a 16 points standard in operation.”  A Review of the Sixteen
Point Fingerprint Standard at 4.   The potential for error is thus significant, especially given that
distortion or even fabrication can cause ridge characteristics from two different prints to appear
the same, when in reality they are not.   
3. The Testing Conducted by the FBI in United States v. Mitchell for the Purposes of Litigation
Fails To Demonstrate Scientific Reliability
Recognizing the lack of testing and scientific research that has been done by the
fingerprint community during the last 100 years, the government in United States v. Mitchell
desperately attempted  to make up for this deficiency.   The government’s rushed efforts,
however, have been far from successful.  
As discussed above, one test the government conducted was to send the two latent prints
at issue in Mitchell’s case, along with Mr. Mitchell’s inked prints, to 53 different law
enforcement agencies.   The government requested that the agencies select “court qualified”
examiners to compare the prints and to determine whether any identifications could be made.
This experiment is, in fact, relevant to the second fundamental premise at issue in this case --
whether latent print examiners can reliably make identifications from small latent print fragments
-- as it indicates whether different examiners can, at least, be expected to reach the same
conclusions when they are presented with the same data.
The results of this test, however, constitute an unmitigated disaster from the
government’s perspective, as can be seen from the fact that the test is nowhere mentioned in the
government’s first memorandum to the Court.  While the results of the test can be found in the
Mitchell government exhibit 6-4, this exhibit does not reveal that the prints utilized in the test
are the very prints at issue in Mitchell.  The reason for this omission is clear.  Of the 35 agencies
that responded to the government’s request, eight (23%) reported that no identification could be
made with respect to one of the two latents and six (17%) reported that no identification could be
made as to the other. See, Memorandum Of Law In Support Of Mr. Mitchell’s Motion To
Exclude The Government’s Fingerprint Identification Evidence, p. 21 (hereinafter
“Memorandum In Support”), http://www.onin.com/fp/fphome.html.  The test thus dramatically
reveals how subjective latent print comparisons actually are and how unreliable their results can
be.
The People can hardly contend in this regard that the participating agencies did not
appreciate the extreme importance of the comparisons that they were being asked to perform. 
The government’s cover letter to the agencies provided:
The FBI needs your immediate help!  The FBI laboratory is
preparing for a Daubert hearing on the scientific basis for
fingerprints as a means of identification.   The Laboratory’s
Forensic Analysis Section Latent Print Unit, is coordinating this
matter and supporting the Assistant United States Attorney in
collecting data needed to establish this scientific basis and its
universal acceptance.
*** 
The time sensitive nature of these requests cannot be expressed
strongly enough, nor can the importance of your cooperation.   The
potential impact of the Federal court not being convinced of the
scientific basis for fingerprints providing individuality has far-
reaching and potentially negative ramifications to everyone in law
enforcement.   The FBI wishes to present the strongest data
available in an effort to insure success in this legal matter and your
cooperation is a key component in achieving this result.
  
The People also cannot attribute the results of this test to the fact that the fingerprint
comparisons were performed by inexperienced examiners.  Consistent with the urgency of the
government’s cover letter, each of the state law enforcement agencies that did not find a
sufficient basis to make an identification selected extremely experienced examiners to make the
comparisons.  As set forth in the Memorandum In Support at p. 21, the range of experience for
this group of examiners is between 10 and 30 years, with the average amount of experience
being 20 years.  In addition, virtually all of these examiners are board certified members of the
IAI, the highest distinction that a latent print examiner can achieve.  Id.   Accordingly, that this
particular group of examiners did not find a sufficient basis to make an identification on either
one or both of the latent prints at issue in this case is devastating to the government’s claim of
scientific reliability. See also,  I. W. Evett and R.L. Williams, A Review of the Sixteen Point
Fingerprint Standard in England and Wales, (1996) 12(1) The Print 1, 7 (“Statistical analysis [of
an extensive collaborative study] did not suggest any association between the number of
[correct] identifications made by an expert and his/her length of experience.”).
Apparently recognizing just what this test really means to its case against Mr. Mitchell,
the government next took the remarkable step of attempting to eradicate the test results.   The
government  asked each of the agencies that did not make an identification to retake the test, but
this time the government  provided the agencies with the answers that the government believed
to be correct.  Along with a new response form, the government  sent each of these agencies
enlargements of the prints at issue displaying what the government apparently believed were the
common characteristics.   The government’s cover letter to the state agencies provided in
pertinent part:
Survey B results indicate that your agency responded with the
answer “No” with respect to one or both of the latent prints.  For
your convenience, I have included with this letter another set of the
original photographs submitted to you with another blank survey
form and a set of enlarged photographs of each latent print and an
enlargement of areas from two of the fingerprints contained on the
fingerprint card.  These enlargements are contained within a clear
plastic sleeve that is marked with red dots depicting specific
fingerprint characteristics.
Please test your prior conclusions against these enlarged
photographs with the marked characteristics.  Please indicate the
results on the enclosed survey form and return to me by June 11,
1999.  You only need to complete the bottom portion, the third
part, of the survey form.  Any written narrative description or
response should be attached to the survey form.
I anticipate that this data must be made available to the defense
counsel and the court prior to the Daubert Hearing proceedings. 
Therefore, please insure that your handling of this matter is done
within the June 11, 1999 deadline.  The Daubert Hearing is
scheduled for July 7, 1999, and the trial is scheduled for September
13, 1999.
Memorandum in Support at 22.
 
It is hardly surprising, given the magnitude of what was at stake here, that all of the state
agencies at issue, with the exception of one, Missouri, responded to the government’s tactics by
recanting and by filling out the new response forms so as to indicate that positive identifications
had now been made.  The government, in turn, revised its report of the test, Government
Exhibit 6-4, so as to indicate that, except for Missouri, only positive identifications were returned
by the participating agencies. Memorandum in Support at 23. (The government’s newly revised
exhibit 6-4 is provided as Defense Exhibit 23).   This revised exhibit, moreover, provides no
indication that these state agencies ever returned anything other than positive identifications.  By
letter to the Court dated June 17, 1999, the government then provided this revised exhibit to the
Court, instructing the Court to “substitute[]” the exhibit for the one the government previously
provided in its exhibit book.  (Memorandum In Support at p. 23).   In  this fashion, the
government  attempted, like a magician, to make the original results of its experiment vanish into
thin air.
The government’s considerable efforts in this regard, however, have only succeeded in
highlighting the importance of the original test.  The study as originally conducted by the
government was a relatively fair experiment as to whether different examiners would at least be
able to reach the same conclusion when given the same prints to compare, and the test had
special significance  given the government’s decision to use the very prints at issue in Mitchell.  
The original unbiased results of the test speak volumes for themselves.   That the government has
subsequently been able to convince more than 20% of the participating examiners to change their
answers only serves to demonstrate the desperate straits that the government found  itself in and
the lengths to which the government will go in order to have its fingerprint evidence admitted.  
As a noted fingerprint examiner has aptly recognized, an examiner’s conclusion that a latent
print is unidentifiable must be considered “irrevocable,” as nothing is more “pitiful” than an
examiner’s subsequent attempt to change that conclusion:
Of course, the crucial aspect is the initial determination to render
the latents as unsuitable for identification purposes...this must be a
ruthless decision, and it must be irrevocable.  There is no more
pitiful sight in fingerprint work than to see an expert who has
decided that a mark is useless, then seeking to resuscitate the latent
to compare with a firm suspect.
John Berry, Useless Information, 8 Fingerprint Whorld 43 (Oct. 1982).
In addition to the above discussed test, the government in Mitchell also conducted
experiments on its automated fingerprint identification system (“AFIS”).    On the basis of these
tests, the government made certain statistical claims with respect to the probability of two people
having identical fingerprints or identical “minutia subsets” of fingerprints.    The utter fallacy of
these statistical claims, as well as the serious methodological flaws that undermine these
experiments, became clear at the Daubert hearing, the transcripts of which the Court will be
provided.  
Moreover, given that the tests in Mitchell were conducted solely for purposes of
litigation, and have not been published or subjected to peer review, they do not constitute the
type of data or facts that an expert in the fingerprint field would reasonably rely upon, and, as
such, the tests should not even be considered by this Court.   See Evidence Code section 801(b);
United States v. Tran Trong Cuong, 18 F.3d 1132, 1143 (4th Cir. 1994) (“reports specifically
prepared for purposes of litigation are not by definition of a type reasonably relied upon by
experts in the particular field.”);  Richardson v.  Richardson-Merrell, Inc., 857 F.2d 823, 831
(D.C. Cir. 1988) (doctor’s testimony held inadmissible because, among other things, the
calculations that he relied upon had not been “published . . . nor offered . . . for peer review.”);
Perry v. United States, 755 F.2d 888, 892 (11th Cir. 1985) (expert’s testimony rejected where the
study upon which the expert relied had not been published or subjected to peer review).
Moreover, there is a particularly good reason why in the instant case the government’s
AFIS experiments in Mitchell should be published and subjected to peer review before they are
given consideration by a court of law.  The government in Mitchell attempted to utilize AFIS as
it has never been utilized before.   No previous attempts have ever been made to determine
fingerprint  probabilities from an AFIS system.   To the contrary, such systems have been
designed for an entirely different purpose -- to generate a number of  fingerprint candidates
which a human fingerprint examiner can then manually compare with the latent print under
consideration.   The extreme complexity of what the government has attempted to do in Mitchell
can readily be seen from the pleadings and transcripts in the case.  The following is an excerpt
from the description of the first experiment.
Each comparison was performed by two totally different software
packages, developed in two different countries by two different
contractors using independent teams of fingerprint and software
experts.  The results of both comparisons were mathematically
“fused” using software developed by a third contractor.
***
The two “matcher” programs calculate a measure of similarity between the
minutia patterns of two fingerprints.  In both cases, the scores of an identical mate
fingerprint is normalized to 1.0 (or 100%).  The statistical fusion program
combines the two scores by analyzing the most similar 500 (out of 50,000)
minutiae patterns.  The fusion operation discards 49,500 very dissimilar minutia
patters before calculating the fusion statistics.  As in the case of the “matcher”
programs, the fused similarity measure calculated by the fusion program is
normalized to 1.0 (or 100%).
(Memorandum In Support at 26).
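The excerpt describes this procedure only schematically.  The sketch below is a hypothetical
reconstruction of that schematic description and nothing more: it is not the government’s
software, and the particular combining rule shown (a simple average of the two matchers’
normalized scores) is an assumption adopted solely for illustration.  The sketch reflects only what
the excerpt itself states, namely that each matcher reports a similarity score normalized so that an
identical mate scores 1.0, and that the fusion step retains only the 500 most similar of the 50,000
candidate minutiae patterns before its statistics are calculated.

# Hypothetical reconstruction for illustration only; this is not the government's software,
# and the simple-average combining rule is an assumption not stated in the excerpt.
from typing import List

def fuse_matcher_scores(scores_a: List[float], scores_b: List[float], keep: int = 500) -> List[float]:
    """Combine two matchers' similarity scores (each normalized so that an identical
    mate scores 1.0) and retain only the most similar candidates."""
    # Combine the two scores reported for each candidate minutiae pattern.
    fused = [(a + b) / 2.0 for a, b in zip(scores_a, scores_b)]
    # Discard all but the `keep` most similar candidates, mirroring the excerpt's statement
    # that 49,500 of the 50,000 patterns are discarded before the fusion statistics are run.
    return sorted(fused, reverse=True)[:keep]

Whether the scores were in fact combined by an average, a weighted sum, or some other rule, and
how the normalization and the discarding of dissimilar candidates affect the resulting probability
figures, are exactly the kinds of questions that publication and peer review, rather than a
courtroom, are suited to answer.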
Obviously, there are many valid questions regarding the software systems and
methodology that the “teams of fingerprint and software experts” utilized to conduct these
extremely complicated and novel experiments.  As courts have recognized, however, the proper
forum for such questioning, at least as an initial matter, is through publication and peer review,
not the courtroom.  See United States v. Brown, 557 F.2d 541, 556 (D.C. Cir. 1977) (holding that
novel hair analysis technique should not have been admitted and stating that “[a] courtroom is
not a research laboratory.”); Richardson, 857 F.2d at 831.  Peer review is especially important
here given that the government in Mitchell refused to even provide the defense with access to the
software packages that were used to run the experiments.  (Memorandum In Support at p. 27).
Finally, the government’s novel AFIS experiments also need to be subjected to peer
review and publication before they are accepted in a court of law because the statistical
conclusions that the government  generated defy reality.    The government, for example,
asserted, on the basis of its AFIS experiments, that the probability of two people even having
four identical ridge characteristics in common “is less than one chance in 10 to the 27th power
. . .” (Gov’t Mem at 23.)   Yet as discussed above, the fingerprint literature contains examples of
people having 10 to 16 ridge characteristics in common.   Moreover, as one fingerprint expert
has recently acknowledged in explaining why an identification would never be made on the basis
of four or five matching points, a “million people” could possess those  four or five points of
similarity.   Commonwealth v. Daidone, 684 A.2d 179, 188 (Pa. Super. 1996). See also, Stoney,
Fingerprint Identification, supra, § 21-2.1.2 at 66 (“A correspondence of four minutiae may well
be found upon diligent, extended effort when comparing the full set of prints of one individual
with those from another person.”). Accordingly, there is clearly something amiss with respect to
the government’s novel efforts to create astronomical statistical probabilities from its AFIS
system.
In sum, the AFIS testing that the government  conducted in Mitchell for purposes of 
litigation would not reasonably be relied upon by an expert in the fingerprint field and it should
therefore not be relied upon by this Court.  
4. There is No Established Error Rate for Latent Print Comparisons, But It Is Clear That Many
Errors Do Occur
Given the lack of empirical validation studies that have been performed, it is not
surprising that there is no established error rate for latent print comparisons.   Nevertheless, the
government, without the benefit of any citation,  brazenly submitted in Mitchell that the error
rate is “zero.”  (Gov’t Mem. at 19.)  This claim, however, simply ignores the many documented
cases of erroneous fingerprint identifications.  Any claim that the error rate is “zero” is patently
frivolous in light of the fact that “both here and abroad there have been alarming disclosures of
errors by fingerprint examiners.”  Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence
(3d ed. 1999) § 16-1, pp. 740-741 (describing two cases in England in 1991 and 1997 in which
misidentifications were made despite the fact that the British examiners insist on 16 points for an
identification and triple check fingerprint identifications.); James E. Starrs, Judicial Control Over