INEFFICIENT EVIDENCE

Alex Stein*

Why set up evidentiary rules rather than allow fact finders to make decisions by considering all relevant evidence? This fundamental question has been the subject of unresolved controversy among scholars and policymakers since it was raised by Bentham at the beginning of the nineteenth century. This Article offers a surprisingly straightforward answer: An economically minded legal system must suppress all evidence that brings along a negative productivity-expense balance and is therefore inefficient. Failure to suppress inefficient evidence will result in serious diseconomies of scale. To operationalize this idea, I introduce a "signal-to-noise" method borrowed from statistics, science, and engineering. This method focuses on the range of probabilities to which evidence falling into a specified category gives rise. Specifically, it compares the average probability associated with the given evidence (the "signal") with the margins on both sides (the "noise"). This comparison allows policymakers to determine the signal-to-noise ratio (SNR) for different categories of evidence. When the evidence's signal overpowers the noise, the legal system should admit the evidence. Conversely, when the noise emanating from the evidence drowns the signal, the evidence is inefficient and should therefore be excluded. I call this set of rules "the SNR principle." Descriptively, I demonstrate that this principle best explains the rules of admissibility and corroboration by which our system selects evidence for trials. Prescriptively, I argue that the SNR principle should guide the rules of evidence selection and determine the scope of criminal defendants' constitutional right to compulsory process.

* Professor of Law, Cardozo Law School. I thank Ronen Avraham, Mitch Berman, Rick Bierschbach, David Carlson, John Deigh, Aaron Edlin, Lee Anne Fennell, Lisa Kern Griffin, Louis Kaplow, Vik Khanna, Alexi Lahav, Larry Laudan, Maggie Lemos, Jonathan Nash, Gideon Parchomovsky, Mike Pardo, Ariel Porat, Alex Reinert, Jessica Roth, Kate Shaw, Peter Siegelman, Ed Stein, Stewart Sterk, Guy Wellborn, and participants in workshops and presentations at Cardozo Law School, Texas Law School, and the Twenty-Third Annual Meeting of the American Law & Economics Association for helpful comments and suggestions. I also thank Jessica Marshall for excellent research assistance. Copyright © 2014 Alex Stein. All Rights Reserved. Please do not distribute, cite or quote without author's permission.

INTRODUCTION
I. MACROMANAGING EVIDENCE
   A. A Tale of Two Systems
   B. The SNR Principle
   C. American Exceptionalism in the Law of Evidence
II. THE SNR PRINCIPLE AND THE LAW OF EVIDENCE
   A. Self-Asserting Evidence
      1. Hearsay
      2. Other Evidence
   B. Self-Serving Evidence
   C. Speculative Evidence
III. COMPULSORY PROCESS
CONCLUSION
APPENDIX
   Calculation of SNR for Footnote 44 and Accompanying Text

INTRODUCTION

"Evidence," wrote Bentham, "is the basis of justice."1 This observation aptly describes our legal system, where the outcome of trials critically depends on the parties' ability to produce information that substantiates their claims. Yet, not every piece of information counts as "evidence" in legal procedures. Evidence rules exclude certain types of information—even relevant ones—from bearing on the outcome of cases.2 This, of course, raises the question: Why? Suppression of relevant information as legally "inadmissible" or "insufficient" presents a serious puzzle.

1. 5 JEREMY BENTHAM, RATIONALE OF JUDICIAL EVIDENCE 1 (Fred B. Rothman & Co., 1995) (1827).
2. See GEORGE FISHER, EVIDENCE 1 (3d ed. 2013) ("Evidence law is about the limits we place on the information juries hear.").

In this Article, I set out to resolve this puzzle and provide a comprehensive justification for the extant design of evidence law. I contend that our evidence-sorting rules share one important commonality: they are designed to ensure that only information that satisfies an adequate "signal-to-noise ratio" will be considered by fact finders and decide the outcome of cases.

All information that parties submit to fact finders is composed of a kernel of "signal" surrounded by "noise." Under this taxonomy, "signal" refers to information reliable enough to allow the fact finders to determine the probability of the underlying allegation, and "noise" represents the exact opposite. Information not allowing the fact finders to make a reliable determination of the relevant probability is "noise." When the noise mutes the signal, the information becomes inefficient and the court should not admit it into evidence. In what follows, I call this information-sorting principle the "signal-to-noise ratio" or, in short, SNR. I posit that this principle underlies the design of our evidence law. More precisely, I argue that our evidence law works to prevent fact finders from relying on unacceptably noisy evidence—namely, evidence with a low SNR.

The SNR principle is widely used in statistics, science, and engineering.3 As a broad concept, it represents an efficiency-driven approach to information management.4 However, scant attention has been paid to its implications for the law. In this Article, I hope to redress this omission by shedding light on the profound effect of the SNR principle on our law of evidence.

The SNR principle focuses on the probabilities to which a given piece of information gives rise.5 These probabilities may fall within the same range, or cluster, on a 0–1 scale. Alternatively, they may be dispersed across the scale and far removed from each other. Any set of probabilities, clustered and dispersed alike, has an average value representing the most dependable probability that fact finders can elicit from the given information.
For example, compare a set of clustered probabilities (0.4, 0.5, and 0.6) with a set of dispersed probabilities (0.1, 0.5, and 0.9). Both of these sets have an average value of 0.5. This average probability is the "signal" coming from the information. Any such signal stands between the outliers (the deviations from the mean) on the upper and the lower bounds of the probability scale. The difference between the signal and each outlier determines the "noise" level for the given set of probabilities.6 Unsurprisingly, a set of dispersed (wide-ranging) probabilities is always much noisier than a set of clustered (short-ranging) probabilities. This pivotal point is illustrated by my numerical example where the two probability sets—as I already mentioned—yield the same signal (0.5), but the noise of the dispersed set (0.4) is four times stronger than the noise of the clustered set (0.1). The two sets of probabilities and their underlying information thus markedly differ from each other. Information giving rise to the dispersed probabilities has a low SNR: 0.5/0.4 = 1.25. This SNR indicates that the noise embedded in the information is nearly as strong as its signal. Trying to elicit the truth from this information will consequently be more expensive than productive. On the other hand, information giving rise to the clustered probabilities has a very high SNR: 0.5/0.1 = 5. This SNR indicates that the information's signal is five times stronger than the noise. Fact finders consequently will have no difficulty evaluating the information.

3. See, e.g., STEPHEN T. ZILIAK & DEIRDRE N. MCCLOSKEY, THE CULT OF STATISTICAL SIGNIFICANCE: HOW THE STANDARD ERROR COSTS US JOBS, JUSTICE, AND LIVES 23–25 (2007) (unfolding a straightforward explanation and statistical application of SNR); Johannes F. de Boer et al., Improved Signal-To-Noise Ratio in Spectral-Domain Compared with Time-Domain Optical Coherence Tomography, 28 OPTICS LETTERS 2067 (2003) (exemplifying SNR's centrality for optics); M.J. Firbank et al., A Comparison of Two Methods for Measuring the Signal to Noise Ratio on MR Images, 44 PHYSICS IN MED. & BIOLOGY N261 (1999) (using SNR to determine the efficacy of magnetic resonance imaging systems); Paul Glasziou et al., When Are Randomised Trials Unnecessary? Picking Signal from Noise, 334 BRIT. MED. J. 349, 351 (2007) (using SNR to determine validity of clinical medical research); Christopher S. Yoo, Beyond Coase: Emerging Technologies and Property Theory, 160 U. PA. L. REV. 2189, 2194–95 (2012) (attesting that SNR determines the efficacy of wave-based communications and citing communication engineering literature).
4. See generally JOHN R. PIERCE, AN INTRODUCTION TO INFORMATION THEORY: SYMBOLS, SIGNALS AND NOISE (2d ed. 1980).
5. See ZILIAK & MCCLOSKEY, supra note 3, at 23–27.
6. See id. at 24.

To illustrate, consider an official weather report stating the depth of snow in New York City on February 27, 2007. Probabilities associated with this information are high and clustered. Without knowing their numerical values, one can easily see that the report's SNR is high. After introducing those values—for example, 0.7, 0.8, and 0.9—one will see it more vividly. The signal embedded in the report equals 0.8, while the noise volume amounts to only 0.1. The report's SNR thus equals 8 (0.8/0.1), with the signal being eight times stronger than the noise. This factor guarantees that fact finders' evaluations of this and similar reports will align with the truth in nearly every case.
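To make the arithmetic explicit, the following minimal sketch (written in Python and using the illustrative probability values assumed above, not empirical estimates) computes the SNR as defined here: the signal is the average of the probabilities, and the noise is the largest deviation from that average.

    def snr(probabilities):
        # Signal: the average probability that fact finders can elicit
        # from the evidence.
        signal = sum(probabilities) / len(probabilities)
        # Noise: the largest deviation of any probability from that average.
        noise = max(abs(p - signal) for p in probabilities)
        return signal / noise

    snr([0.4, 0.5, 0.6])   # clustered set: 0.5 / 0.1 = 5
    snr([0.1, 0.5, 0.9])   # dispersed set: 0.5 / 0.4 = 1.25
    snr([0.7, 0.8, 0.9])   # weather report: 0.8 / 0.1 = 8

Two bodies of evidence can thus share the same signal yet differ sharply in noise; the dispersion, not the average alone, is what the SNR is designed to capture.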
Consider now an alibi witness with three perjury convictions who testifies at his brother's robbery trial. This information gives rise to low probabilities, ones that are much closer to 0 than to 1. Remarkably, because these probabilities form a uniform cluster, the witness's testimony has a high SNR as well, although, of course, not as high as in the weather report example. Here, too, one can see that this SNR is high even without assigning numerical values to the probabilities. Based on the experience we have with similar witnesses, assume that these values are 0.1, 0.2, and 0.3. Under this realistic assumption, the testimony's SNR will equal 2 (0.2/0.1). The testimony's signal—0.2—is much weaker than the signal embedded in the weather report. The testimony's noise, however, is only half as strong as its signal, which ensures that fact finders' evaluations of this and similar testimonies will virtually never stray far from the truth.

These examples show that whenever the range of the relevant probabilities is short, their signal will be much greater than the noise. Information that gives rise to a clustered probability—high, low, or in-between—therefore always qualifies as good evidence. This information will help fact finders reach the right decision and will virtually never mislead them. Hence, it is efficient and courts should always admit it into evidence.

Finally, consider a witness testifying in a murder trial that she heard from her friend—out of court—that the defendant killed the victim. This testimony is a classic example of "hearsay"—information that our law generally excludes from the category of admissible evidence.7 This exclusion is fully justified. A statement made by an out-of-court declarant is either true or false, but whether it is true or false is unknowable. Fact finders consequently need to evaluate the statement's probability of being true rather than false. In the case at bar, fact finders need to know the declarant's motives for making the statement and whether he properly perceived and remembered the alleged murder incident. Alas, these credibility cues are not available. Absent credibility cues—positive, negative, or mixed—the declarant's statement gives rise to a wide range of probabilities that cover all possible hypotheses about the statement's trustworthiness. These probabilities form three clusters. One of those clusters occupies the upper side of the probability scale (close to 1); another cluster occupies the scale's lower side (close to 0); and yet another cluster occupies the center (0.5). This dispersion—or variance—of probabilities indicates that the statement has a low SNR, as does all hearsay evidence unaccompanied by credibility cues. The noise coming from the statement mutes its signal, which makes the statement unworthy of fact finders' consideration. This statement—and, indeed, all uncorroborated hearsay evidence—is too costly to evaluate relative to its informational benefit. Bringing it into the fact-finding process would increase the aggregate cost of errors and error avoidance.8 Hence, it is inefficient and courts should not admit it into evidence.
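Applying the same sketch makes the contrast concrete. The alibi witness's assumed values yield a modest but usable ratio, whereas a hearsay statement whose probabilities spread across the low, middle, and high clusters (the particular spread below is my own illustrative assumption, not a figure given in the text) yields a ratio close to 1, at which point the noise nearly cancels the signal:

    snr([0.1, 0.2, 0.3])   # convicted perjurer's alibi: 0.2 / 0.1 = 2
    snr([0.1, 0.5, 0.9])   # uncorroborated hearsay spread across the low,
                           # middle, and high clusters: 0.5 / 0.4 = 1.25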
In the pages ahead, I use the SNR principle to explain our system of evidence that operates with the help of admissibility and sufficiency rules. Admissibility rules are the central core of our law of evidence. They include the hearsay doctrine,9 the rule against character evidence,10 the conditions for admitting expert testimony,11 and a number of other rules. Sufficiency rules encompass the corroboration requirements for accomplice testimony and some other categories of evidence. I demonstrate that our evidence law works to make sure that fact finders base their decisions only on information that gives rise to clustered probabilities and, consequently, has a high SNR. The law achieves this effect by disqualifying information associated with dispersed probabilities and therefore a correspondingly low SNR. Importantly, the law makes these pre-rulings in relation to categories of evidence instead of asking judges to carry out a cost–benefit analysis of individual items of evidence.12 The resulting saving of adjudicative expenses makes these pre-rulings efficient from an economic standpoint.13

7. See FED. R. EVID. 802.
8. See RICHARD A. POSNER, ECONOMIC ANALYSIS OF LAW 757–58 (8th ed. 2011) (observing that minimization of the aggregate cost of errors and error avoidance is a fundamental economic goal of procedural law).
9. See FED. R. EVID. 801–807.
10. See id. 404.
11. See id. 702.
12. Under Federal Rule of Evidence 403 and its state equivalents, judges retain their power to suppress any individual item of evidence if its prejudicial or wasteful effect on fact-finding substantially outweighs its probative value. This residual rule supplements the category-based method of selecting evidence chosen by our legal system. See also infra note 37.
13. See infra Part II.

My account of evidence law is not merely descriptive. I accompany it with two significant normative contributions to legal theory. First and most important, I show that the SNR principle decisively resolves the debate about the purpose of evidence law.14 Many scholars, beginning with Bentham,15 call for the abolition of all admissibility and corroboration rules.16 They argue that fact finders should evaluate all relevant evidence on a case-by-case basis without prior selection, as they do in most countries in the world.17 This argument portrays our evidence law as yet another problematic example of American exceptionalism.18 Bringing the SNR principle into this debate underscores our system's need to macromanage evidence. American courts process millions of cases every year.19 This unparalleled volume of litigation makes it imperative for our system to minimize the total cost of errors and error-avoidance in fact-finding.20 To achieve this socially beneficial result, the system must get rid of inefficient evidence: evidence that increases the cost of fact-finding without significantly improving the accuracy of court decisions.21 The system therefore will do well to suppress all evidence that has a low SNR.
14. See ALEX STEIN, FOUNDATIONS OF EVIDENCE LAW 107–40 (2005) (arguing that evidence law should be geared toward socially desirable allocation of the risk of error); WILLIAM TWINING, RETHINKING EVIDENCE: EXPLORATORY ESSAYS 192–226 (2d ed. 2006) (outlining and analyzing the debate about the nature and purposes of evidence law).
15. See BENTHAM, supra note 1, at 477–94; WILLIAM TWINING, THEORIES OF EVIDENCE: BENTHAM & WIGMORE 66–88 (1985).
16. See, e.g., David P. Bryden & Roger C. Park, "Other Crimes" Evidence in Sex Offense Cases, 78 MINN. L. REV. 529, 561 (1994) ("For centuries, the movement has been toward abolition of those exclusionary rules that have as their basis the danger of misleading the fact-finder. Jurists and scholars alike increasingly have agreed with Bentham that technical rules of evidence designed to prevent fact-finders from making mistakes are, at best, more trouble than they are worth."). See also infra notes 58, 62–63 and accompanying text.
17. See MIRJAN R. DAMAŠKA, EVIDENCE LAW ADRIFT 1–25, 94–101 (1997); see also Kenneth Culp Davis, An Approach to Rules of Evidence for Nonjury Cases, 50 A.B.A. J. 723, 726 (1964) ("Our sick body of evidence law will get well sooner if our American evidence doctors will consult with some European evidence doctors."); infra notes 62–63 and accompanying text. Notably, the greatest British evidence scholar, Rupert Cross, made a striking statement: "I am working for the day when my subject is abolished." TWINING, supra note 14, at 1.
18. See infra Section I.C.
19. See Judith S. Kaye, State Constitutional Law and the State High Courts in the 21st Century, 70 ALB. L. REV. 825, 827 (2007) (attesting that "tens of millions of cases—ninety-eight percent of our nation's litigation— . . . annually come before the state courts"); Chief Justice Margaret H. Marshall of Massachusetts, Remarks to Symposium: Great Women, Great Chiefs, 74 ALB. L. REV. 1595, 1601 (2011) ("Every year millions of cases are filed in state courts . . . .").
20. See STEIN, supra note 14, at 141.

My additional normative contribution concerns the Compulsory Process Clause of the Sixth Amendment.22 The extent to which this Clause prohibits courts and lawmakers from suppressing criminal defendants' evidence is presently unclear.23 Adoption of the SNR principle will remove this constitutional ambiguity. Evidence with a low SNR might raise a reasonable doubt as to whether the defendant committed the alleged crime. This factor favors the admission of such evidence. Defendants, however, should not be absolutely free to rely on such evidence, but they should be entitled to adduce it when better evidence is not within their reach. The defendant's showing of necessity should thus make him entitled to present any exculpatory evidence, including evidence with a low SNR. Suppressing such evidence would violate the Compulsory Process Clause.

21. See id. at 141–43.
22. U.S. CONST. amend. VI ("In all criminal prosecutions, the accused shall enjoy the right . . . to have compulsory process for obtaining witnesses in his favor . . . .").
23. See infra Part III.

Structurally, the Article unfolds as follows. In Part I, I explain how the SNR principle works and demonstrate its normative superiority over unregulated fact-finding. In Parts II and III, respectively, I use this principle to explain our system of evidence selection and to determine the scope of criminal defendants' entitlement to compulsory process. A short conclusion follows.

I. MACROMANAGING EVIDENCE

A. A Tale of Two Systems

Consider two legal systems: one large (System L) and another small (System S). System L processes 1,000,000 cases a year. System S has a much smaller inflow of cases: just 100,000. System L's workload is thus ten times that of System S's. The two systems are identical in every other respect: their laws are the same and their courts are equally competent and speedy. My final assumption is scarcity of resources: neither of the two systems can expend an unlimited amount of resources on its operation. Both systems must limit their expenditures to allow citizens to enjoy other amenities as well. How would you design evidence laws for these two systems? Importantly, would you design one evidence law, or two?
These questions call for a cost–benefit analysis. Adjudicative fact-finding generates an indispensable benefit for society: it enables courts to properly assign entitlements and liabilities to parties. This benefit, however, is not cost free. Adjudicative fact-finding implicates two social costs: the cost of accuracy and the cost of errors.24 The cost of accuracy encompasses the legal system's expenditures on fact-finding procedures that reduce the incidence of error. The cost of errors originates from incorrect factual findings produced by the system. These findings distort courts' assignments of entitlements and liabilities, thereby causing harm to parties. The overarching goal of the law of evidence is to achieve a socially optimal tradeoff between these two costs.25 Evidentiary rules ought to improve the accuracy of court decisions as cheaply as possible. To this end, they ought to minimize the cost of errors and error avoidance as an aggregate sum. This task is easy to formulate, but difficult to accomplish. To make the task manageable, policymakers must split it up into three distinct subtasks.

As an initial matter, policymakers need to formulate the standards of proof for civil and criminal trials. These standards are necessary because fact finders will have to make decisions under conditions of uncertainty and consequently need to have probability thresholds for making those decisions. Those thresholds should reflect society's preferences in the allocation of the risk of error. Policymakers consequently must determine, for every area of the law, whether society favors false positives (mistaken impositions of liability) over false negatives (mistaken exonerations), or vice versa, and how intense this preference is.26 This factor is crucial because any proof standard that reduces the incidence of false positives increases the number of false negatives, and vice versa.27 To convict a greater number of guilty defendants, policymakers must lower the probability threshold for convictions. Under a low threshold, however, courts will convict a greater number of innocents. To protect the innocent from erroneous conviction, policymakers would have to move the probability threshold upwards, but then a greater number of guilty criminals would go scot-free. Policymakers consequently must decide how many guilty criminals they are willing to free from punishment in order to protect one innocent defendant against erroneous conviction. If this number is very high, policymakers should adopt the "beyond a reasonable doubt" standard for criminal trials. Under this standard, the prosecution will have to prove each and every element of the alleged crime beyond a reasonable doubt. Any reasonable doubt as to whether the defendant committed the crime will consequently require fact finders to acquit him.

24. See POSNER, supra note 8, at 757–58.
25. See id. at 819–24.
26. See id. at 827.
27. See id. at 827 n.2 ("Trading off Type I and Type II errors is a pervasive feature of evidence law.").
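A brief decision-theoretic sketch (my own illustration of the standard error-cost logic invoked here, not a calculation drawn from the Article) shows how this tradeoff translates into a probability threshold. Writing fp_cost for the social cost of a false positive and fn_cost for the cost of a false negative, a finding of liability at probability p carries an expected error cost of (1 - p) * fp_cost, while a finding of no liability carries p * fn_cost; liability is therefore warranted only when p exceeds fp_cost / (fp_cost + fn_cost).

    def liability_threshold(fp_cost, fn_cost):
        # Probability of guilt or liability above which deciding against the
        # defendant minimizes the expected cost of error.
        return fp_cost / (fp_cost + fn_cost)

    liability_threshold(1, 1)    # equal error costs: 0.5 (preponderance)
    liability_threshold(10, 1)   # one wrongful conviction weighted as heavily
                                 # as ten wrongful acquittals: about 0.91,
                                 # approaching "beyond a reasonable doubt"

The more guilty defendants society is willing to see go free in order to spare one innocent, the larger the cost ratio and the higher the threshold climbs, which is the intuition behind the criminal standard; with equal weights, the threshold settles at 0.5, anticipating the civil standard discussed next.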
For civil litigation, policymakers should endorse a different allocation of the risk of error. In civil cases, there is no reason to favor false positives over false negatives, or vice versa. Hence both types of error should be given equal weight, and policymakers should favor a proof standard that maximizes the number of correct court decisions, namely, the "preponderance of the evidence" standard. This standard should apply both to elements of the suit and to affirmative defenses.28 Under this standard, when fact finders are undecided about an element of the suit, they should dismiss the suit. Correspondingly, when fact finders are undecided about an affirmative defense, they should deny the defendant that defense.

Policymakers' next mission is to formulate the basic gatekeeping criteria for evidence selection. The criteria must separate evidence that can satisfy the chosen proof standards from evidence that cannot. The gatekeeping criteria must therefore consist of evidence-sorting rules that will give courts the power to admit evidence that has the best probative potential, while excluding all inferior evidence from fact finders' consideration.29 Formulating these criteria and rules is not difficult. Evidence that lends prima facie support to a party's claim or defense is potentially capable of satisfying any proof standard. As a general matter, fact finders can find "preponderance" or "proof beyond a reasonable doubt" in any evidence that tends to prove the relevant claim or defense. Therefore, policymakers will do well to set up a broad admissibility provision authorizing courts to admit any evidence that is of consequence to the underlying claim or defense.30 Policymakers must supplement this provision with rules that will motivate parties to adduce the best available evidence. These rules will require parties to call witnesses with direct knowledge of the relevant facts;31 to rely on the most qualified expert witnesses in matters calling for scientific or professional expertise;32 to adduce original documents whenever these are available;33 to avoid delays;34 and to minimize undue prejudice to opponents.35

28. See STEIN, supra note 14, at 143–48; cf. Eric L. Talley, Law, Economics, and the Burden(s) of Proof, in RESEARCH HANDBOOK ON THE ECONOMICS OF TORTS 305 (Jennifer Arlen ed., 2013) (examining cost-minimization and other core economic goals of proof burdens).
29. See generally Dale A. Nance, The Best Evidence Principle, 73 IOWA L. REV. 227 (1988) (developing a comprehensive "best evidence" principle and unfolding its normative virtues and explanatory power).
30. Cf. FED. R. EVID. 401 (categorizing as generally admissible evidence that "has any tendency to make a fact more or less probable than it would be without the evidence" when "the fact is of consequence in determining the action").
31. Cf. id. 602.
32. Cf. id. 702.

The third, final, and most difficult matter that policymakers must consider is noisy evidence. Evidence falling into the "noisy" category is probabilistically ambiguous. This characteristic attaches to three categories of evidence: self-asserting, self-serving, and speculative.

Evidence is self-asserting when it contains an unexaminable statement of facts, which fact finders are asked to accept on faith. Consider a witness, Alice, who testifies in a criminal trial that her coworker, Harold, told her that he saw the defendant robbing the victim at gunpoint.
The prosecutor uses Alice's testimony to prove the robbery accusation, while Harold does not appear as a witness in the proceeding. Here, Harold's statement is self-asserting because its credibility is unverifiable. Based on this statement alone, fact finders can ascribe any probability to the robbery accusation. The probability can be high, low, or in-between—a characteristic that makes Harold's statement probabilistically ambiguous.

Evidence is self-serving when its producer has a motive and opportunity to fabricate it. Consider a suit against a dead person's estate. The plaintiff testifies that the dead person borrowed $50,000 from him and did not repay the loan. This testimony is self-serving because the plaintiff knows that his attribution of a $50,000 debt to the dead person cannot be controverted. The dead person cannot stand up and deny the plaintiff's allegations. The plaintiff consequently can say in court anything he wants without facing rebuttal or penalties for perjury.

Evidence is speculative when it pools together cases with some shared similarities while suppressing their differences, thereby driving fact finders to treat the cases as identical. Consider a person accused of burning four of his houses over a nine-year period in order to recover insurance money. To prove the alleged fraud, the prosecution calls an actuary from the insurance industry to testify that a person's chances of having four of her houses accidentally destroyed by fire over a nine-year period are one in 1.773 trillion.36 This testimony properly rules out the accidental fire scenario. Yet, it is still speculative because it pools together cases in which a person burns his own houses to recover money from the insurer with cases in which a person has an enemy—an underworld enemy, perhaps—who sets fire to the person's houses. Defendants falling into the first category of cases are perpetrators of insurance fraud. Defendants belonging to the second category are victims of arson.

33. Cf. id. 1002.
34. Cf. id. 403.
35. Cf. id.
36. This example is drawn from United States v. Veysey, 334 F.3d 600, 603–04 (7th Cir. 2003).