
Statistical Decision Theory
and the Burdens of Persuasion:
Completeness, Generality, and Utility


© 1996 DH Kaye. This paper appears in the International Journal of Evidence and Proof, vol. 1, 1997, pp. 313-315.

Abstract

This paper comments on remarks by Professors Friedman and Allen on the ability of statistical decision theory to elucidate the various burdens of persuasion applied in the law. It questions the need for Professor Friedman's proposal for an extra variable representing the quality or completeness of evidence. It also shows that, contrary to Professor Allen's claim, the decision-theoretic analysis is quite general, and it suggests that the fact that an optimal decision rule may fail to minimize the number of erroneous verdicts in a specific set of cases, or to achieve a specific mix of the two types of erroneous verdicts, does not undermine the explanatory or justificatory power of the decision-theoretic analysis.


"The majority of our opinions being founded on the probability of proofs, it is indeed important to submit it to calculus." -- Pierre Simon, Marquis de LaPlace (1819), Essai Philosophique sur les Probabilités (A Philosophical Essay on Probability, Truscott & Emory transl. 1952:109) New York: Dover Publications.

The faith of the early 19th century in the calculus of probabilities has given way to the pragmatism and skepticism of the post-modern 20th. Neither Richard Friedman nor Ronald Allen shares LaPlace's optimism. In the space allotted here, I cannot hope to offer a comprehensive discussion of the many issues on which they alight. So I shall not try. Instead, I focus on a single issue -- the burdens of persuasion.

Professor Friedman describes how the difference between the more-probable-than-not standard of most civil litigation and the beyond-a-reasonable-doubt standard of criminal cases can be explained in terms of the maximization of "social utility." To my mind, the separation of personal probability (which relates to the evidence in the case) and utility or loss functions (which relate to the costs of erroneous verdicts) is a tremendous analytical advance. Admittedly, analyzing the properties of alternative decision rules, such as the more-probable-than-not standard and proportional liability in mass toxic tort cases, is not determinative (as Professor Friedman is quick to point out), and there is ample room for argument over the best choice for the loss function (Orloff and Stedinger, 1983). Nevertheless, the analysis is helpful in understanding what rule or rules the law does and should employ.
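The structure of that advance can be displayed in a few lines. What follows is a minimal sketch of the standard derivation, in notation of my own choosing rather than Professor Friedman's: write p for the factfinder's probability that the claimant's allegation is true, L_p for the loss attending an erroneous verdict for the plaintiff, and L_d for the loss attending an erroneous verdict for the defendant. Then

    \mathrm{E}[\mathrm{loss} \mid \text{verdict for plaintiff}] = (1-p)\,L_p ,
    \qquad
    \mathrm{E}[\mathrm{loss} \mid \text{verdict for defendant}] = p\,L_d ,

so a verdict for the plaintiff minimizes expected loss precisely when

    (1-p)\,L_p < p\,L_d , \quad \text{that is, when} \quad p > \frac{L_p}{L_p + L_d} .

When the two losses are equal, the threshold is 1/2 -- the more-probable-than-not standard. When an erroneous conviction is valued at, say, ten times an erroneous acquittal, the threshold rises to 10/11, a figure with the flavor of proof beyond a reasonable doubt. The separation is plain: p summarizes the evidence, while L_p and L_d encode the social costs of the two kinds of error.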

The value of Professor Friedman's K/Q rule is less clear. His concern with "quality" or "completeness" is a call to consider second-order probabilities -- degrees of confidence in a first-order probability judgment. In Kaye (1986), I suggested how gaps in the evidence can be handled with first-order probabilities. If my more pedestrian approach is inadequate to encompass doubts about the quality and completeness of evidence, then it would be valuable to elucidate a theory that uses a second-order probability (analogous to Friedman's Q). But that endeavor will demand much more than tossing a few algebraic symbols into a formula. It will require some inquiry into the properties of the more complex decision rule. Frankly, I doubt that any of us lawyers who are writing about these matters have sufficient skill and knowledge to get very far, but I would be delighted to be proved wrong. The proof, I would have to say, is in the pudding.
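One reason for my doubt can be put in symbols (a sketch only, resting on the assumptions of a single, final verdict and the simple loss structure above; nothing here purports to be Professor Friedman's formulation). Suppose the factfinder is unsure of the first-order probability itself and holds a second-order distribution G over the possible values of p. Then

    \mathrm{E}[\mathrm{loss} \mid \text{verdict for plaintiff}]
      = \int (1-p)\, L_p \, dG(p)
      = (1 - \bar{p})\, L_p ,
    \qquad \bar{p} = \mathrm{E}_G[p] .

The second-order distribution enters only through its mean, so the decision is the same as if the factfinder simply held the first-order probability \bar{p}. For a variable like Friedman's Q to earn its keep, the decision rule or the loss function would have to be enriched so that the spread of G, and not merely its mean, affects the outcome. Specifying and defending such an enrichment is the inquiry I have in mind.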

What does Professor Allen's essay reveal about the value of using statistical decision theory to understand the burden of persuasion? Apparently, he considers the use of statistical loss functions a "formalism," "algorithm," or "legal theorem" that must do battle against a competing desire for "judgment in legal affairs." The dichotomy is false. The mathematical properties of decision rules have little or nothing to do with the hoary debate over law versus equity, rules versus principles, or the like. No "tension between algorithms and judgment" arises from dissecting or appraising a legal standard that requires judgment to apply. The analysis explains the instructions given to jurors; the jurors must implement them using their best judgment. The mathematics does not diminish the importance of that judgment, but directs attention to how it should be applied. Nothing turns on the ill-defined and sweeping terminology on which Professor Allen leans.

To the extent that Professor Allen is more specific, he sends a mixed message about the decision-theoretic treatment of the burden of persuasion. He agrees with Friedman that the scholarship has proved fruitful, but insists that the "various proofs that employing the civil burden of a preponderance of the evidence standard will minimize or optimize errors are all false as general proofs (although not as special cases)." This is a remarkable statement for many reasons,[1] the most important being that there are no such proofs. There are proofs, like the one that Professor Friedman sketches, that a given decision rule maximizes expected utility or minimizes expected losses. Finding the decision rule that minimizes the expected value of a prescribed loss function is an extremely general procedure.

I suspect that Professor Allen's criticisms can be traced to a widely shared misunderstanding of that procedure. He asserts that "the various proofs" are faulty because they "neglected base rates and accuracy of probability assessments of liability." However, this "neglect" has nothing to do with the generality of the proofs. The proofs remain true for all possible base rates. As I have indicated, the optimal decision rule minimizes the expectation of some function of the losses (DeGroot, 1975:276), and the more-probable-than-not standard minimizes expected losses -- not the actual number of errors -- when every erroneous verdict for a plaintiff entails the same loss as every erroneous verdict for a defendant.
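In the notation introduced above (again mine, not Professor Allen's), the point fits in one line. Whatever the base rate of meritorious claims may be, Bayes' rule folds it into the posterior probability p of each case, and over any distribution F of those posteriors the expected loss of the threshold rule is

    \mathrm{E}[\mathrm{loss}] = \int \min\{(1-p)\,L_p ,\; p\,L_d\}\, dF(p) .

Because the rule selects the smaller term at every value of p, it minimizes the integrand pointwise and therefore minimizes the integral for every F. No assumption about the base rate, or about the distribution of cases that generates F, is needed.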

Professor Allen, it seems, would prefer a rule that minimizes the actual frequency of errors (or that produces a particular mix of errors of one kind as opposed to another). Of course, that is neither an attack on "algorithms" nor a demand for greater generality, but a call for a different criterion for choosing a decision rule. To find a rule that accomplishes this different objective, we would need to consider the distributions of probabilities and the mix of cases in which one side as opposed to the other should prevail. But neither Allen nor anyone else has offered any reason to prefer a particular distribution of errors. Indeed, DeKay (1996:130), on which Professor Allen ironically relies, concludes that "the logic behind a frequency-based policy is . . . exactly backwards." And, even if Professor DeKay and other decision theorists are wrong about the importance of attending to expected utility or losses regardless of the effects on actual error distributions, the application of probability and statistics to the law's standards has proved its value. Only in the context of such an analytical framework can one distinguish among concepts like utility, probability, and error that must be examined to arrive at a suitable conception of the burdens of persuasion.
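The distinction is easy to exhibit numerically. The toy simulation below (written in Python; the evidence model, and the figures 0.75, 0.25, and 0.15 within it, are illustrative assumptions of mine, not anything drawn from the literature under discussion) holds the more-probable-than-not threshold fixed and varies only the mix of cases:

    import random

    def simulate(base_rate, n=100_000, threshold=0.5, seed=1):
        """Tally the two kinds of erroneous verdicts over n simulated cases.

        Each defendant is truly liable with probability base_rate; the
        factfinder's probability is modeled, crudely, as the truth-leaning
        value 0.75 or 0.25 plus Gaussian noise. This evidence model is a
        placeholder, chosen only to make the point about error mixes.
        """
        rng = random.Random(seed)
        false_plaintiff = false_defendant = 0  # erroneous verdicts for each side
        for _ in range(n):
            liable = rng.random() < base_rate
            p = (0.75 if liable else 0.25) + rng.gauss(0, 0.15)
            p = min(max(p, 0.0), 1.0)          # clamp to [0, 1]
            if p > threshold and not liable:
                false_plaintiff += 1           # verdict for plaintiff, claim false
            elif p <= threshold and liable:
                false_defendant += 1           # verdict for defendant, claim true
        return false_plaintiff, false_defendant

    for base_rate in (0.3, 0.5, 0.7):
        fp, fd = simulate(base_rate)
        print(f"base rate {base_rate}: {fp} errors for plaintiffs, {fd} for defendants")

The threshold never moves, yet the frequency and mix of the two kinds of error swing with the base rate. That is exactly why a criterion stated in terms of realized error frequencies demands information about the distribution of cases that a criterion stated in terms of expected loss does not.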

In short, attending to the form and components of the loss function helps one make sense of the relatively undemanding more-probable-than-not standard in civil cases, appreciate why it should not apply to criminal cases, and see how it generalizes to multiple-causation cases (on the last point, see Kaye, 1982; but see Shavell, 1987:116-117). The value of such analytic work does not depend on the psychological question whether jurors assess probabilities of propositions consistently with the outcomes of Bayes' rule. Neither does it depend on whether jurors should agree that, computational complexity notwithstanding, it is desirable to assign the same probabilities to all truth-functionally equivalent propositions (which might require them to attend to Bayes' rule). These are interesting matters in their own right, but like many other writers, I have discussed them elsewhere (e.g., Kaye, 1979, 1991), and repetition is not persuasion.

Notes

1. This terminology, like the rhetoric of "algorithms" and "formalism," is very odd. Any valid proof using algebra or calculus establishes a general truth, and no valid proof can be false. Then, there are smaller infelicities. For example, the "Newtonian physics" that Allen refers to as a "special case" of Einstein's Special Theory of Relativity is not a special case, but a limiting case that is only asymptotically correct.

References

DeGroot, Morris H. (1975), Probability and Statistics. Reading, Massachusetts: Addison-Wesley Publishing Co.

DeKay, Michael L. (1996), The Difference Between Blackstone-like Error Ratios and Probabilistic Standards of Proof, Law and Social Inquiry 21:95-132.

Kaye, D.H. (1991), Credal Probability, Cardozo Law Review 13:647-656.

Kaye, D.H. (1986), Do We Need a Calculus of Weight to Understand Proof Beyond a Reasonable Doubt?, Boston University Law Review 66:657-672, reprinted in Tillers, Peter and Green, Eric, eds. (1988), Probability and Inference in the Law of Evidence: The Limits of Bayesianism, Boston, Massachusetts: D. Reidel Publishing Co.

Kaye, D.H. (1982), The Limits of the Preponderance of the Evidence Standard: Justifiable Naked Statistical Evidence and Multiple Causation, American Bar Foundation Research Journal 1982:487-516, reprinted in Twining, William and Stein, Alex, eds. (1992), Evidence and Proof, Aldershot, England: Dartmouth Publishing Co.

Kaye, D.H. (1979), The Laws of Probability and the Law of the Land, University of Chicago Law Review 47:34-56, reprinted in Imwinkelried, Edward J. and Weissenberger, Glen, eds. (1996), An Evidence Anthology, Cincinnati, Ohio: Anderson Publishing Co.

Orloff, Neil and Stedinger, Jery (1983), A Framework for Evaluating the Preponderance of the Evidence Standard, University of Pennsylvania Law Review 131:1159-1174.

Shavell, Steven (1987), Economic Analysis of Accident Law. Cambridge, Massachusetts: Harvard University Press.
