Some Counterexamples to Causal Decision Theory
Andy Egan, 2005 (25 pages)
Some Counterexamples to Causal Decision Theory[1]

Andy Egan
Australian National University

Introduction

Many philosophers (myself included) have been converted to causal decision theory by something like the following line of argument: Evidential decision theory endorses irrational courses of action in a range of examples, and endorses "an irrational policy of managing the news".[2] These are fatal problems for evidential decision theory. Causal decision theory delivers the right results in the troublesome examples, and does not endorse this kind of irrational news-managing. So we should give up evidential decision theory, and be causal decision theorists instead.

Unfortunately, causal decision theory has its own family of problematic examples for which it endorses irrational courses of action, and its own irrational policy that it is committed to endorsing. These are, I think, fatal problems for causal decision theory. I wish that I had another theory to offer in its place.

1. The Case against Evidential Decision Theory

Evidential decision theory says that the action that it's rational to perform (ignoring the possibility of ties) is the one with the greatest expected utility – the one such that your expectations for how the world will turn out, conditional on your performing it, are greater than the expectations conditional on performing some other action.

[1] Special thanks are due to David Braddon-Mitchell for the series of conversations that led to this paper, and for many further conversations as it was in progress. Thanks also to Adam Elga, Brian Weatherson, Karen Bennett, Daniel Stoljar, Alan Hajek, James Joyce, and audiences at MIT, the University of Pittsburgh, Oxford University, and the ANU Philosophical Society for very helpful questions, comments, and objections.
[2] Lewis 1981.
So the action that it's rational to perform will also be the one that you (or a friend with your own interests in mind, and with the same ideas about where your interests lie as you have) would be happiest to learn that you had performed.

The case against evidential decision theory is based upon examples like the following:

The Smoking Lesion
Susan is debating whether or not to smoke. She knows that smoking is strongly correlated with lung cancer, but only because there is a common cause – a condition that tends to cause both smoking and cancer. Once we fix the presence or absence of this condition, there is no additional correlation between smoking and cancer. Susan prefers smoking without cancer to not smoking without cancer, and prefers smoking with cancer to not smoking with cancer. Should Susan smoke?

It seems clear that she should. (Set aside your theoretical commitments and put yourself in Susan's situation. Would you smoke? Would you take yourself to be irrational for doing so?)[3]

Causal decision theory distinguishes itself from evidential decision theory by delivering the right result for The Smoking Lesion, where its competition – evidential decision theory – does not. The difference between the two theories is in how they compute the relative value of actions. Roughly: evidential decision theory says to do the thing you'd be happiest to learn that you'd done, and causal decision theory tells you to do the thing most likely to bring about good results.

[3] This example is a standard medical Newcomb problem, representative of the many to be found in the literature. The original Newcomb's problem is from Nozick 1969. For some excellent discussions of medical (and other) Newcomb problems, see (among many others) Gibbard and Harper 1976, Eells 1982, Lewis 1979, and Lewis 1981.
Evidential decision theory tells Susan not to smoke, roughly because it treats the fact that her smoking is evidence that she has the lesion, and therefore is evidence that she is likely to get cancer, as a reason not to smoke. Causal decision theory tells her to smoke, roughly because it does not treat this sort of common-cause-based evidential connection between an action and a bad outcome as a reason not to perform the action.

Let's look at how the differences between the formal theories deliver these results: Following Lewis, let a dependency hypothesis be a proposition which is maximally specific about how things that the agent cares about depend causally on what the agent does. Also following Lewis, let us think of such propositions as long conjunctions of subjunctive conditionals (of the appropriate, non-backtracking kind) of the form: if I were to do A, then P. (Written, from now on, "A → P".) The difference between causal and evidential decision theory is that causal decision theory privileges the agent's unconditional assignment of credences to dependency hypotheses in determining the relative values of actions.

If the H's form a partition of the worlds to which the agent assigns non-zero credence, the value assigned to an action A by evidential decision theory (henceforth EDT) is given by:

    VAL_EDT(A) = Σ_H c(H|A) · v(H & A)

(Note a harmless ambiguity: I'm using 'A' to name both an action and the proposition that the agent performs that action.)

In particular, in the case of the partition of dependency hypotheses (let these be the K's), the value assigned by EDT is given by:

    VAL_EDT(A) = Σ_K c(K|A) · v(K & A)

The important thing to notice about this formula is that it's the agent's conditional credences in dependency hypotheses that figure in it.
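In code, EDT's value formula is just a credence-weighted sum over a partition. The following minimal sketch is mine, not the paper's; the two-cell partition and the particular numbers are illustrative assumptions.

```python
# EDT's value formula: VAL_EDT(A) = sum over partition cells H of c(H|A) * v(H & A).
# The partition labels, conditional credences, and payoffs are made-up examples.

def edt_value(cond_credences, values):
    """cond_credences maps each cell H to c(H|A); values maps each H to v(H & A)."""
    return sum(cond_credences[h] * values[h] for h in cond_credences)

# A toy two-cell partition: 0.8 * 10 + 0.2 * (-10) = 6.0
print(edt_value({"H1": 0.8, "H2": 0.2}, {"H1": 10, "H2": -10}))  # 6.0
```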
The value assigned by causal decision theory (henceforth CDT) is given by:

    VAL_CDT(A) = Σ_K c(K) · v(K & A)

The crucial difference is that now the assignments of values to actions are sensitive only to the agent's unconditional credences in dependency hypotheses, not her credences conditional on her performing A. The effect of this is to hold fixed the agent's beliefs about the causal structure of the world, and force us to use the same beliefs about the causal order of things in determining the choiceworthiness of each candidate action. Rather than the expected payoffs of smoking being determined by reference to how Susan thinks the causal structure of the world is likely to be, conditional on her smoking, and the expected payoffs of not smoking determined by reference to how she thinks the causal structure of the world is likely to be, conditional on her not smoking, the expected payoffs of both smoking and not smoking are determined by reference to Susan's unconditional beliefs about how the causal structure of the world is likely to be.

Cases like The Smoking Lesion motivate the move from EDT to CDT. In The Smoking Lesion, there is a strong correlation between smoking and getting cancer, despite the fact that smoking has no tendency to cause cancer, due to the fact that smoking and cancer have a common cause. Still, since Susan's c(CANCER|SMOKE) is much higher than her c(CANCER|NOT SMOKE), EDT assigns not smoking a higher value than smoking. And this seems wrong.

So we have an argument against EDT: The correct theory of rational decision won't endorse any irrational actions or policies. In The Smoking Lesion, EDT endorses an irrational course of action: it's irrational for Susan not to smoke, and EDT endorses not smoking. EDT also endorses an irrational policy: it endorses a policy of performing the action with the greatest evidential value, rather than the action with the best expected causal upshot. So EDT isn't the correct theory of rational decision.
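To see the two formulas pull apart on The Smoking Lesion numerically, here is a sketch; the specific credences and payoffs are illustrative assumptions of mine, chosen only so that smoking is strong evidence for the lesion while having no causal effect on cancer.

```python
# The Smoking Lesion with made-up numbers. Once the lesion (L) is fixed, smoking
# makes no causal difference to cancer, so the dependency hypotheses effectively
# track L vs. not-L.

c_L = 0.5                 # unconditional credence that Susan has the lesion (assumed)
c_L_given_smoke = 0.9     # smoking is strong evidence for the lesion (assumed)
c_L_given_abstain = 0.1

# Assumed payoffs: smoking itself is worth +1; lesion-caused cancer costs -10.
v = {("smoke", True): -9, ("smoke", False): 1,
     ("abstain", True): -10, ("abstain", False): 0}

def edt(act, c_lesion_given_act):
    """EDT: weight outcomes by credences conditional on the act."""
    return c_lesion_given_act * v[(act, True)] + (1 - c_lesion_given_act) * v[(act, False)]

def cdt(act):
    """CDT: weight outcomes by the unconditional credence in the lesion."""
    return c_L * v[(act, True)] + (1 - c_L) * v[(act, False)]

print(edt("smoke", c_L_given_smoke), edt("abstain", c_L_given_abstain))  # -8.0 -1.0: EDT says don't smoke
print(cdt("smoke"), cdt("abstain"))                                      # -4.0 -5.0: CDT says smoke
```

With these numbers EDT ranks abstaining above smoking purely because smoking is bad news about the lesion, while CDT, holding c(L) fixed, ranks smoking higher.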
CDT, on the other hand, uses the agent's unconditional credences in dependency hypotheses to assign values to actions. The effect of this is to make our assignments of values to actions blind to the sort of common-cause correlations that make EDT's value assignments in The Smoking Lesion go bad.

Causal decision theory now looks very attractive. It gets the cases that made trouble for EDT right, and it seems to get them right for the right reasons – by assigning the agent's causal beliefs a special role.

3. The Case against Causal Decision Theory

Causal decision theory is supposed to be a formal way of cashing out the slogan, "do what you expect will bring about the best results". The way of implementing this sound advice is to hold fixed the agent's unconditional credences in dependency hypotheses. The resulting theory enjoins us to do whatever has the best expected outcome, holding fixed our initial views about the likely causal structure of the world. The following examples show that these two principles come apart, and that where they do, causal decision theory endorses irrational courses of action. (Obviously I think that each of the cases succeeds in showing this. But it's not important that you agree with me about both cases. For my purposes, all I need is one successful case.)

The Murder Lesion
Mary is debating whether to shoot Alfred. If she shoots and hits, things will be very good for her. If she shoots and misses, things will be very bad. (Alfred always finds out about unsuccessful assassination attempts, and he is sensitive about such things.) If she doesn't shoot, things will go on in the usual, okay-but-not-great kind of way. She thinks that it is very likely that, if she were to shoot, then she would hit. So far, so good. But Mary also knows that there is a certain sort of brain lesion that tends to cause both murder attempts and bad aim at the critical moment.
Happily for most of us (but not so happily for Mary) most shooters have this lesion, and so most shooters miss. Should Mary shoot? (Set aside your theoretical commitments and put yourself in Mary's situation. Would you shoot? Would you take yourself to be irrational for not doing so?)

The Psychopath Button[4]
Paul is debating whether to press the 'kill all psychopaths' button. It would, he thinks, be much better to live in a world with no psychopaths. Unfortunately, Paul is quite confident that only a psychopath would press such a button. Paul very strongly prefers living in a world with psychopaths to dying. Should Paul press the button? (Set aside your theoretical commitments and put yourself in Paul's situation. Would you press the button? Would you take yourself to be irrational for not doing so?)

It's irrational for Mary to shoot. It's irrational for Paul to press.[5] In general, when you are faced with a choice of two options, it's irrational to choose the one that you confidently expect will cause the worse outcome.[6] Causal decision theory endorses shooting and pressing. In general, causal decision theory endorses, in these kinds of cases, an irrational policy of performing the action which one confidently expects will cause the worse outcome. The correct theory of rational decision will not endorse irrational actions or policies. So causal decision theory is not the correct theory of rational decision.

What's generating the problem is that the very same mechanism that allows causal decision theory to deliver the right results in cases like The Smoking Lesion leads it to deliver the wrong results for cases like The Murder Lesion and The Psychopath Button. Let's look at what happens in The Murder Lesion. (The analysis of The Psychopath Button will be relevantly similar.)

[4] This case was suggested to me by David Braddon-Mitchell.
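A numerical sketch shows how holding the dependency-hypothesis credences fixed flips the verdict in The Murder Lesion. The payoffs (10, -10, 0) are the ones used in the formal analysis; the particular credence numbers are illustrative assumptions of mine.

```python
# The Murder Lesion. Dependency hypotheses: "if Mary shot, she would hit" (SH)
# vs. "if Mary shot, she would miss". Payoffs follow the paper's analysis;
# the credences are assumed for illustration.

c_SH = 0.9          # unconditionally, Mary is confident she'd hit (> .5)
c_SH_given_S = 0.1  # conditional on shooting, she probably has the lesion, so probably misses (< .5)

v_hit, v_miss, v_not_shoot = 10, -10, 0

# CDT holds the unconditional credences in dependency hypotheses fixed:
val_cdt_shoot = c_SH * v_hit + (1 - c_SH) * v_miss
print(val_cdt_shoot)   # 8.0 > 0 = value of not shooting, so CDT endorses shooting

# The evidential expectation, conditional on shooting, points the other way:
val_edt_shoot = c_SH_given_S * v_hit + (1 - c_SH_given_S) * v_miss
print(val_edt_shoot)   # -8.0 < 0: given that she shoots, she expects things to go badly
```

The gap between the two sums is exactly the gap between c(SH) and c(SH|S) that the argument below turns on.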
Let S be the proposition that Mary shoots, and H the proposition that Mary hits. The relevant partition of dependency hypotheses is {S → H, S → ¬H}. Some constraints on Mary's credences:

c(S → H) > .5. (Because she's been going to the shooting range, the gun is well-maintained, accurate and reliable, Alfred is a large, slow-moving target, etc.)

c(S → H | S) < .5. (Because if she shoots, it's very likely because she has the lesion, and if she has the lesion, she's very likely to have bad aim when push comes to shove.)[7][8]

Mary's value assignments:

    v(S & H) = 10
    v(S & ¬H) = -10
    v(¬S) = 0

If Mary is a causal decision theorist, she must use c(S → H), not c(S → H | S), when she's determining the relative values of shooting or not. (Since it's unconditional credences in dependency hypotheses that feature in CDT's formula for determining the choiceworthiness of actions.) So shooting is going to come out better than not shooting.[9] But that's the wrong result. It's irrational for Mary to shoot. Unfortunately, if that's right, then causal decision theory is wrong.

[5] Some people report that they lack the clear intuition of irrationality for the Murder Lesion case. Pretty much everyone seems to have the requisite intuition for The Psychopath Button, though. That's enough for my purposes. Personally, I think both cases work as counterexamples to causal decision theory. But all I need is that at least one of them does.
[6] Whether it's irrational in a particular case depends, of course, on just what the payoffs are. It can be worth doing something that's more likely than not to cause a bad outcome if the low-probability good outcome is good enough. But in the cases above (and as spelled out below), it's better not to do the thing that you expect will cause the worse outcome. See below for some sample numbers.
[7] Another reason: We know that Mary's c(H|S) < .5, since shooting is such good evidence for having the lesion, and her credence that she hits conditional on both shooting and having the lesion is very low. Given that, we can prove that c(S → H | S) < .5: By the definition of conditional probability, c(S → H | S) = c(S & (S → H))/c(S). Since every world in which both S and S → H are true is a world in which H is true as well, c(S & (S → H)) ≤ c(S & H). So we know that: c(S → H | S) ≤ c(S & H)/c(S). Again by the definition of conditional probability, c(S & H)/c(S) = c(H|S). So c(S → H | S) ≤ c(H|S) < .5.
[8] Note, for future reference, that c(S) must be < .5 for these credences to be coherent.
[9] Because CDT says that Mary should determine the value of shooting by computing Σ_K c(K) · v(K & A), which in this case gives us:

    VAL_CDT(S) = c(S → H) · v((S → H) & S) + c(S → ¬H) · v((S → ¬H) & S)

Assuming that Mary doesn't care about dependency hypotheses for their own sakes, v((S → H) & S) = v(S & H), and v((S → ¬H) & S) = v(S & ¬H). (The value of shooting while in a Shoot → Hit world is the value of shooting and hitting; the value of shooting while in a Shoot → Miss world is the value of shooting and missing.) So we get:

    VAL_CDT(S) = c(S → H) · v(S & H) + c(S → ¬H) · v(S & ¬H)

And since c(S → H) > c(S → ¬H), it will turn out that VAL_CDT(S) > 0, and so VAL_CDT(S) > VAL_CDT(¬S). As Lewis (1981) points out, other formulations of causal versions of decision theory deliver the same results.

The same phenomenon occurs in a particularly striking way in time travel cases. Suppose that you have a time machine, and you are convinced that time travel works in the single-timeline, no-branching way outlined by Lewis (1976). You want to use your time machine to preserve some document, thought to be lost in the fire at the library of Alexandria. One option is to attempt to surreptitiously spirit the document out of the library before the fire. Another is to attempt to prevent the fire from ever happening. If you don't have a firm opinion about which course you'll actually pursue, you're likely to be confident that, if you were to attempt to prevent the fire, you would succeed. (After all, you're competent and knowledgeable, you have many willing and able accomplices, access to excellent equipment, plenty of time to plan and train, etc.) But you know that the fire really did happen. So you know that any attempt you make to go back and prevent it will fail.[10] It's irrational to pursue this sort of doomed plan – a plan that you already know will fail, and the failure of which you take to be worse than the expected result of some alternative plan – and so it's irrational to try to prevent the fire. (Similarly, when you go back in time to set up a holding company that will, when the investments mature, pay a large lump sum into your bank account, you should arrange for the cash to be deposited in your account after the last time you checked your balance and saw that there hadn't been any large deposits.)

[10] There are complications. Some of these are discussed in Braddon-Mitchell and Egan (MS).

But CDT doesn't deliver these results. Determining the relative choiceworthiness of actions using only your unconditional credences in dependency hypotheses makes your ranking of actions insensitive to your knowledge – knowledge to which your decision-making should be sensitive – that the past-changing plans are doomed.

Oracle cases are relevantly similar. It's irrational to try to avoid the fate that the (infallible) oracle predicts for you. The thing to do, faced with an unpleasant oracular prediction, is to try to ensure that the predicted fate comes about in the best possible way. If the oracle predicts that you'll be bitten by a rabid dog, the thing to do is to get vaccinated and wear thick clothes so that the bite won't do much harm, not to poison your neighbors' dogs in hopes of avoiding the bite. (It's worth pointing out that neither the oracle nor the time-travel cases rely on absolute certainty.
What's really going on is that the more reliable you take the oracle, or your information about the past, to be, the worse an idea it is to try to avert the predicted fate, or change the apparent past.)

I include the time travel and oracle cases because (a) they provide particularly stark examples of cases where CDT endorses performing an action that one confidently expects will bring about a worse outcome than some alternative, and (b) they may serve to make clearer just what's gone wrong in the other cases. In these cases, just as in cases
