In this post I respond to some of the common objections to Pascal’s wager, keeping each response to under 100 words!

I am interested in Pascal’s wager, fanaticism problems, and infinite decision theory. In fact, sometimes I’m even foolish enough to mention these topics over dinner. And when I do, I get a series of common objections to Pascal’s wager in particular. I think that Pascal’s wager is in fact a very interesting and difficult problem to which there is currently no completely satisfactory solution. In fact, I think that many of the methods used to get around the wager are worse than simply accepting that the argument is, perhaps surprisingly, valid and sound. But people are nonetheless often very confident that the argument is not a good one. So in this post I’m going to quickly run through each of the most common objections to the wager that I’ve been presented with thus far, and explain why (in under 100 words!) I think that none of them is successful.

Okay, so let’s start things off by giving a simple formulation of Pascal’s wager. There are two possible states of the world: (G) God exists, and (~G) God doesn’t exist. Now there are two actions available to you: (B) Believe in God, and (~B) Don’t Believe in God. What should you do? Well, there are four possible outcomes that will occur when you die, so let’s list them and note down the utility of each:

B&G: heaven (infinite utility)
B&~G: annihilation (0 utility)
~B&G: either hell (infinite suffering) or annihilation (0 utility)
~B&~G: annihilation (0 utility)

Treating ~B&G as if it produces 0 utility lets us avoid some nasty features of infinities for now, so I’ll assume it and then mention those below. It should be obvious that the only way to ‘win’ in such a scenario is to believe in God, even if you think it’s very unlikely (but not impossible) that God exists. So, to lay out the argument behind Pascal’s wager explicitly:

(1) You shouldn’t perform actions with lower expected utility over those with greater expected utility.
(2) The expected utility of wagering for God is greater than the expected utility of wagering against God.
(3) Conclusion: you shouldn’t wager against God.
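To see how even a tiny credence gets swamped in the expected utility calculation, here’s a minimal sketch in Python. The 0.001 credence is purely an illustrative assumption, and I’m using floating-point infinity as a stand-in for infinite utility:

```python
# Sketch of the wager's payoff matrix and expected utilities.
# The 0.001 credence in God is an arbitrary illustrative number.
INF = float("inf")

p_god = 0.001  # small but non-zero credence that God exists

# Utilities of the four outcomes, treating ~B&G as annihilation (0):
u = {
    ("B", "G"): INF,   # believe & God exists: heaven
    ("B", "~G"): 0,    # believe & no God: annihilation
    ("~B", "G"): 0,    # disbelieve & God exists: annihilation (charitably)
    ("~B", "~G"): 0,   # disbelieve & no God: annihilation
}

def expected_utility(action):
    return p_god * u[(action, "G")] + (1 - p_god) * u[(action, "~G")]

print(expected_utility("B"))   # inf
print(expected_utility("~B"))  # 0.0
```

However small you make `p_god`, as long as it is strictly positive the first line prints `inf`, which is premise (2) in miniature.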

That’s the basic argument. And boy does it annoy people. Here I’m going to respond to the common objections to the wager (some more sophisticated, some less). BUT so that this post doesn’t take me forever, I’m restricting myself to 100 words (that’s right, 100 words!) per response. I’m happy to go into further details or discuss objections that I haven’t included here in the comments if anyone wants me to. Sometimes it’s easier to understand the reply to one objection if you already know the reply to another, so I’ve tried to put them in an order that takes that into account. I’ve also put [IC] next to the objections currently mentioned on the Iron Chariots blog entry on Pascal’s wager (which you can see here) since it’s a source a few people have mentioned to me when discussing the problem. 

1. There are many gods you could wager for, not just one! [IC]

The basic idea: there are n-many gods that reward belief. If there’s a non-zero chance of getting infinite utility if you wager for a different God, then wagering for any of these gods has infinite expected utility (EU). So the wager doesn’t give you any more reason to believe in God X over any of the alternative gods.

Simple Answer: Suppose you find yourself standing at the gates of heaven. St Peter offers you one of two options: you can walk through door A and go into heaven, or you can walk through door B and have a 1 in 1,000,000,000 chance of getting into heaven and a 999,999,999 in 1,000,000,000 chance of being annihilated. Now, you want to get into heaven – it’s not some crummy heaven that you won’t enjoy. Do you really think it’s rational to be indifferent between these two options? If you think you should have even the slightest preference for door A, then this objection doesn’t work.

2. Almost all actions have infinite expected utility if wagering for God has infinite expected utility. So if Pascal’s wager is true then I can do almost anything I want to.

The basic idea: If the expected value of believing in God is infinite, then any action that has a non-zero chance of ending with me wagering for God is also infinitely valuable. So the wager doesn’t give you any more reason to believe in God than it does to roll a die and believe in God if 4 comes up, or pick up a beer knowing that it might end with you getting drunk and believing in God.

Answer: The same probability dominance argument applies to the mixed strategies objection. Insofar as you think that rolling a die or drinking a beer has a lower probability of producing the infinite utility outcome (heaven) than some other action does – such as simply wagering for God now – you ought, all things considered, to perform the action with the higher probability of producing the infinite utility outcome. In this case, that means wagering for God rather than employing a mixed strategy.
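The probability dominance point can be sketched quickly: plain expected utility can’t distinguish the direct wager from the mixed strategy (both come out infinite), but the probability of actually reaching heaven can. The credence here is an illustrative assumption:

```python
# Plain expected utility is blind to the difference between the direct
# wager and the die-roll strategy, but probability dominance is not.
# The 0.001 credence is an illustrative assumption.
INF = float("inf")
p_god = 0.001  # credence that a belief-rewarding God exists

p_heaven_direct = p_god            # wager for God now
p_heaven_mixed = (1 / 6) * p_god   # roll a die, wager only on a 4

# Expected utility can't rank the two strategies...
assert p_heaven_direct * INF == p_heaven_mixed * INF
# ...but the probability of reaching the infinite outcome can.
assert p_heaven_direct > p_heaven_mixed
print("prefer the direct wager")
```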

3. Doesn’t the wager beg the question? [IC]

The basic idea: Pascal’s wager assumes key features of the god it seeks to prove the existence of. For example, that god rewards belief and not non-belief.

Answer: Firstly, the aim of the wager isn’t to prove the existence of god: it’s to establish that belief in god is prudentially/morally rational. Now consider the following argument: ‘If you think there’s a >10% chance that there’s a dish in the dishwasher that’s made of china, then you ought to check that the dishwasher is off. You think there’s a >10% chance that there’s a dish in the dishwasher that’s made of china. So you ought to check that the dishwasher is off.’ Pascal’s wager doesn’t fallaciously assume characteristics of god any more than this argument fallaciously assumes characteristics of dishes.

4. What about the atheist-loving god? [IC]

The basic idea: Suppose there’s a god that sends all non-believers to heaven and all believers to hell. Given the logic of Pascal’s wager, I ought not to believe in God.

Answer: If it’s rational for you to think that disbelief in God (or cars, or hands) will maximize your chance of getting into heaven, then that’s what you ought to do under PW. What’s the evidence for the belief-shunning God? Possibly: ‘divine hiddenness’ plus God making us capable of evidentialism. The evidence against? God making us capable of performing expected utility calculations, plus all the historical testimonial evidence for belief-loving Gods. I suspect the latter will outweigh the former. But if you’re making this objection you’re already on my side really: we’re now just quibbling about what God wants us to do.

5. What about infinite utility producing scientific hypotheses?

The basic idea: Okay, so Pascal’s wager doesn’t tell us which God to believe in, just to maximize the probability of gaining infinite utility. But what about the possibility of more naturalistic infinite utility hypotheses (singularity, lab universes, etc.)?

Answer: Given the response to 1, you ought to perform whatever set of actions has the highest probability of getting you into heaven. Given this, insofar as belief in a supernatural being or God is consistent with actions that maximize the chance of a scientific means of gaining infinite utility, you ought to do both, regardless of which is more plausible. Also, higher cardinalities of infinite utility will dominate lower cardinalities of infinite utility in EU calculations. And supernatural hypotheses may be more likely to produce higher cardinalities of utility than their empirically-grounded cousins.

6. You can’t quantify the utility of heaven. Answer: The wager doesn’t start by looking at a religious text and trying to work out how good its heaven is. The argument is premised on some infinite-utility outcome being possible, such that you ought to have a non-zero credence in infinite-utility outcomes. It doesn’t matter how inconsistent that outcome is with common conceptions of heaven; as long as it’s in principle possible, the argument will go through. You might want to declare that such heavens are absolutely (and not just nomologically) impossible, but it’s hard enough to defend logical omniscience, let alone no-such-thing-as-heaven omniscience.

7. God wouldn’t reward prudentially-grounded belief. [IC] Answer: You have to take your credence that a given God would reward belief into account when calculating what to do. Suppose you are certain that only two gods are possible: A and B. Each of their heavens produces infinite utility, and they’re equally likely to exist. The only way to get into heaven is through belief, but god A might reward prudentially grounded belief while god B doesn’t (all with certainty). Clearly you ought to wager for A. Suppose god B becomes sufficiently more probable. Then perhaps you ought to try to inculcate non-prudentially-grounded belief in yourself and others!

8. I think God’s just as likely to reward belief as to reward non-belief. Answer: Suppose that, for action A that has the potential to produce infinite utility (given all of the possible states of the world), A and ~A are just as likely to produce infinite utility. Then you would need to find a tie-breaker between the two, or flip a coin. This doesn’t undermine the argument of the wager. But it seems highly unlikely that belief and non-belief would have exactly the same rational subjective likelihood of getting you into heaven. What could be the evidential basis for this perfect symmetry?

9. What about the problem of evil, etc.? Answer: Evidential considerations for or against a certain god are obviously relevant to what you ought to do or believe, since they are relevant to the likelihood that given actions will produce infinite utility. But Pascal’s wager doesn’t solve (or aim to solve) theological problems like the problem of evil. Its conclusion still holds as long as those problems don’t warrant adopting credence 0 in there being any infinite utility outcome that’s consistent with any action we can perform. It seems unlikely that the standard objections to God’s existence are as devastating as this requires!

10. I have credence 0 (or near enough) in God’s existence. Answer: The near enough strategy isn’t going to work, unless you add the premise that you ought to treat even extreme-utility outcomes that you have a sufficiently low credence in as though you had credence 0 in them. That seems like a bad principle. If you genuinely have credence 0 in all potentially infinite-utility producing states of the world, credence 1 that you have these credences etc. then you are indeed immune to Pascal’s wager. Would it be reasonable to have such credences? This seems implausible under a standard account of credences, since these states of the world appear to be far from impossible.

11. But we don’t have voluntary control over our beliefs! Answer: Are you certain that doxastic voluntarism is false? If not, the chance that your voluntary belief could occur and would result in your getting into heaven ought to be taken into account when you’re trying to determine what you ought to do and believe (constructing the full decision procedure for maximizing your chance of gaining infinite utility is an interesting task!). But suppose you’re certain that doxastic voluntarism is false: you still ought to try to convince others of God’s existence, give money to organizations that try to do this, etc. The argument would simply support a different set of actions.

12. The wager ignores the disutility of believing in God and the utility of not believing in God. [IC] Answer: The wager doesn’t ignore either of these: they simply don’t affect the act or belief that it is rational for you to perform or adopt. Suppose that the annoyance of wagering for god is like continuous torture for you. And suppose the utility of not believing in god is extremely pleasurable for you. You still ought to wager for god, since infinite expected utility swamps any finite (dis)utility. Even if the utility of both is infinite (see 2), it’s still probability and not finite utility considerations that determine whether or not you ought to wager.

13. Dammit Jim, it’s just not scientific! Answer: The wager doesn’t give evidence for god: it’s a moral/prudential argument for belief. The view that your beliefs always ought to be in accordance with your evidence is powerful and useful, but should we be certain that it’s true, and that there are never prudential reasons to hold a belief? If not, then the full force of Pascal’s wager returns, since any non-zero credence that there are prudential reasons for belief is enough to let infinite utility back in. Even if you could be rationally certain in this norm, however, it just changes the actions Pascal’s wager warrants (see 11).

14. That’s not how the maths works. Answer: Pascal’s wager appeals to the claim that a finite, nonzero chance of getting an infinitely good outcome is better than any probability of a finitely good outcome. We can appeal to something like Bartha’s relative utility theory to get both this result and the result that a greater chance of an infinite outcome is better than a lower chance of the same outcome. It would be somewhat surprising if our accounts of infinity (e.g. hyperreals, surreals) were in conflict with either of these claims. In theories where infinities can be multiplied by finite, nonzero numbers, they tend to produce infinities.
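As a small illustration of that last claim, floating-point arithmetic behaves the same way: multiplying infinity by any positive finite number gives infinity back, so no small-but-nonzero probability can discount an infinite payoff down to something finite:

```python
# Any positive finite probability times infinity is still infinity,
# so probabilities can't discount an infinite payoff to a finite one.
INF = float("inf")

for p in (0.5, 1e-9, 1e-300):
    assert p * INF == INF

# Hence a tiny chance of an infinite reward beats certainty of any
# finite reward in a straight expected utility comparison.
assert 1e-300 * INF > 1.0 * 10**100
print("infinite payoff dominates")
```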

15. The only reason you’d believe this is because you want to believe in God anyway. Answer: There’s a class of responses to the wager that bring into question your motives for defending the argument in the first place. I don’t really think that motives like wanting to believe in god have much bearing on the efficacy of the argument, but they should probably give you reason to doubt my weighing of the arguments and evidence, etc. But I don’t have such motives. I came to this through intellectual curiosity, though I don’t think that means that I’ll end up finding the conclusions unmotivating.

16. Doesn’t the wager promote an unethical life of belief over an ethical life of non-belief? Answer: In principle the wager could promote this. But I don’t see any reason to think that this is overwhelmingly likely. It doesn’t necessarily favor adopting a given religion unflinchingly. And if we are more confident than not that god is non-malevolent, and that we haven’t been grossly misled about the nature of moral truth, then we have strong reasons to act morally. The wager applies to actions as well as beliefs, so even if you think you’ll be ‘forgiven’ for a certain action, it’s unlikely that under PW it’ll be worth performing an action you are confident is wrong.

17. The wager is only valid because there are problematic features of infinities. Answer: The infinite version of Pascal’s wager relies on features of infinities: e.g. that the expected value of an infinitely good outcome will not be finite. But the uncertainty argument will apply even if you’re pretty certain these features won’t be present in the correct account of infinities. I don’t think that our worries about the wager give us sufficient reason to reject these principles of infinities. In principle we could reformulate much of the wager by simply appealing to sufficiently large finite amounts of utility, but Pascal’s wager seems to be consistent with features of infinities that we are happy with in other domains.

18. What if we have bounded utility functions? Aren’t unbounded utility functions problematic? Answer: Utility functions that are bounded above and below can prevent both positive and negative infinite forms of Pascal’s wager. But there are some obvious drawbacks to this response: (1) It’s ad hoc: what other reason do we have to think we don’t have an unbounded concave utility function over happiness (that isn’t just a result of our inability to adequately handle large numbers)? (2) It has counterintuitive results at the point where a unit of happiness has no extra value for us. (3) It might not work for non-preference forms of utilitarianism (the moral PW argument). And (4) we shouldn’t be certain that our utility function is bounded.
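To see how bounding blocks the wager, here’s a toy sketch; the functional form and the bound of 100 are arbitrary assumptions of mine, not anything from the literature. However large the promised reward, the wager’s expected utility can never exceed the credence times the bound:

```python
import math

# Toy bounded utility function: concave, increasing, bounded above.
# The bound (100) and the curvature (50) are arbitrary assumptions.
BOUND = 100.0

def bounded_utility(happiness):
    return BOUND * (1 - math.exp(-happiness / 50.0))

p_god = 0.001  # illustrative credence in a belief-rewarding God

# However large the promised reward, the wager's expected utility
# can never exceed p_god * BOUND = 0.1 here.
for reward in (10.0, 1e6, 1e12):
    assert p_god * bounded_utility(reward) <= p_god * BOUND

# So a sure, merely finite good can beat the wager outright.
assert bounded_utility(200.0) > p_god * BOUND
print("bounded utility blocks the wager")
```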

19. If we allow ourselves to be skeptical about mathematical and normative principles, we’ll end up skeptical about everything! Answer: I don’t think this is the case, for a couple of reasons. Firstly, the question is a bit misleading. The uncertainty I’ve appealed to here isn’t mathematical uncertainty (though I think we can appeal to that as well in some cases); it’s normative uncertainty. And it’s not really skepticism; it’s just taking into account that we shouldn’t be certain that a given normative principle (mentioned in 1) is true. If we end up uncertain about everything like this, I don’t think that would be a bad thing. However, I’ll try to discuss objections to this view in another post.

20. Isn’t this just a reductio of expected utility theory? Answer: I think that the existence of fanaticism problems presents a huge worry for expected utility theorists who allow unbounded utility functions. In fact, I’m surprised people haven’t written this up as an impossibility theorem with an anti-fanaticism axiom, since it seems you have to either accept the wager, accept other problematic conclusions, or give up on some plausible aspect of unbounded expected utility theory. I don’t think the reductio worry helps people who don’t want to buy Pascal’s wager though, since it doesn’t warrant acting as if some fanaticism-avoiding decision theory were true.

So there you have it: the reasons why – in under 100 words – I’m not satisfied by any of the common responses to Pascal’s wager. If you’re reading this and have any comments/objections or spot any errors (I was quite tired when I wrote this!) please do let me know.

15 thoughts on “Common Objections to Pascal’s Wager”

  1. Good article on objections to the Wager. It gives some good bases for addressing people who try to resist the forcefulness of its implications and reasonableness. Why no identification of the author though?

  2. Great post! I for one am partial to the “bounded utility function” and “this is a reductio of EU-theory” camps.

    I particularly love that you came to this from a non-religious background. It proves that you have the integrity to go where the ideas take you rather than the other way round. The world needs more of that, on both sides of every debate.

  3. Hi Amanda,

    I’d like to raise a few difficulties for the wager. To simplify a bit, I will stipulate that “God exists” is equivalent to “there is an omnipotent, omniscient, morally perfect person”, and “there is at least a god” is equivalent to “there is a person that has the power to give you infinite rewards or infinite harms” (so in particular, if God exists, then she would be a god).
    I’ll limit my objections to 100 words. Each of them but the first assumes there is a way around the previous ones; objections 4. and 5. might be taken as quibbling about what a god wants us to do, but they’re more about which god to believe in, and they’re only raised assuming 1., 2., and 3. (and a number of other potential objections) fail.

    1. Infinite negative utility: How do we go about making assessments in that context?
    By the way, let’s say that if I pick door A, there is a 2/3 chance that I will get an infinite positive utility of some kind, and a 1/3 chance that I’ll be tortured for eternity. At least for some tortures, it seems B is the rational choice for me, regardless of what the reward is. But why should we stop considering increasingly bad potential tortures?

    2. Let’s say we accept the wager and believe in god G1, and we believe that if we keep our belief, we’ll get an infinite positive utility of cardinality C1.
    Shouldn’t we reckon that there is a small nonzero chance that we picked the wrong god, and that by dedicating all of our time to improving our choice, we would increase our odds at a positive infinite utility and/or avoiding a negative one, even if it would make our Earthly life miserable since we would have nothing beyond our dedication to this matter?

    3. There seems to be no good reason to limit the wager to beliefs. We may consider all sorts of actions. Suppose we believe that God exists. Then, isn’t she more likely to reward us if we dedicate all of our lives to the fight for the greater good, doing supererogatory things all the time, and nothing else (e.g., no movies, sports, TV, etc.)?
    Maybe it doesn’t work for other gods. But for each god (even pretty generic ones) one might pick, I think we can (probably) come up with a similar argument.

    4. You say that “God making us capable of performing expected utility calculations, all the historical testimonial evidence for belief-loving Gods” counts for a belief-loving god, but I think there are difficulties:
    a. Even if a god made us capable of that, without betting also on a motivation on her part, that doesn’t look like evidence that said god wants us to bet on her existence.
    b. When many religions are pairwise mutually incompatible, at least all but one are false, and so the historical testimonial from each of them does not seem to matter…except for at most one or some mutually compatible ones, but how do we know there is at least one correct one at this point in history, and in that case, how do we know which one?

    5. For most of the time humans have existed, Christianity, Islam, etc., did not exist. Yet, there are versions of those religions in which unbelievers get punished forever. What about the chance that the true religion will exist in the future but doesn’t yet exist, and then the true god will reward believers, annihilate atheists, agnostics, etc., and torture believers in false religions forever? How do we assess the chances of that?

  4. Hi Amanda,

    I’d like to elaborate on a somewhat more sophisticated version of the “atheist god” sort of reply, but I’m afraid it’s longer. If that’s okay with you, here’s the reply:
    Let’s say Alice is credibly threatened and she’s told that she and her family will all be tortured horribly and killed unless she assigns a high probability to the claim that there is a dragon in the garage. It seems to me (given more details if needed) that it is means-to-end rational (i.e., given what she values, her goals, etc.) for Alice to believe that there is a dragon (at least, assuming she can bring herself to believe it), but it would still be epistemically irrational for her to believe that; in particular, the probability that she epistemically should assign to the dragon hypothesis remains extremely low (yes, there are some who would object to this; we can discuss it if you like, but I don’t want to make this too long).
    Here’s an atheist reply that is consistent and is not nitpicking about what a god wants us to do (more on that below):
    Atheist (say, Bob): “I reckon that based on the information available to me (including empirical data, a priori arguments, etc.), I epistemically ought to assign a low probability to the hypothesis that a god exists, and almost zero to the hypothesis that God exists.
    While I assign nonzero chances to the hypothesis that some god exists and will reward or punish me forever for my beliefs, I don’t see any good reason to think that gods that demand epistemically irrational belief/credence in their existence are collectively more likely than others, or more likely to inflict worse punishments, etc.
    Religions that demand belief under threat of infinite punishment (or to get infinite reward, etc.) claim or imply that said belief would be epistemically rational. They’re all mistaken about that point, at least when applied to my epistemic situation. Additionally, they’re mutually incompatible, and/or they are extinct, etc.
    On the basis of that, the fact that no gods make themselves known, etc. (and several other reasons), I reckon that (independently of other considerations) epistemically irrational belief in the existence of gods is not more likely to improve the expected utility (given infinite punishments, etc.) for me than remaining epistemically rational.
    So, I remain an atheist, and I’m not quibbling about what a god wants us to do, though I do think about what’s more likely that she’d want us to do if there were one – what I consider improbable. I remain an atheist.”

    Granted, a theist might argue (on other grounds; e.g., a cosmological argument) that it’s epistemically rational to believe, etc., but then, that’s no longer Pascal’s Wager – the debate has moved to other arguments.
    I don’t think Bob, who remains an atheist, is on your side only for discussing the evidence for or against infinite punishments or rewards depending on his beliefs, unless you construe “on my side” in a way that doesn’t require him to change his beliefs – but in that case, it’s not a side that troubles Bob.
    Alternatively, an atheist might endorse Bob’s reply as a backup reply under the assumption that other worries about the Wager can be defeated, but without taking a stance on whether they can in fact be defeated.

  5. I think if I were an atheist, I would be repelled by the idea of Pascal’s Wager on something like these grounds, something kind of Nietzschean:

    I believe in intellectual integrity, and for me, even if the utility of intellectual integrity in usual terms (“happiness”? “flourishing”?) were some finite thing, it would still be an absolute, of infinite value – because it is who I am, or simply is me being alive as a thinker, or because it could someday connect me with Being or “what is”, or something like that.

    Each of the states of belief (the Bs or ~Bs) contains a risk of intellectual “dis-integrity”. It depends on how I acquired the belief. Now, I think I would find something untrustworthy about a process of acquiring belief that was motivated by personal gain. Perhaps I wouldn’t trust myself. A process of acquiring belief that has any kind of agenda other than acquiring true belief seems either to lack intellectual integrity or to be at great risk of intellectual dis-integrity.

    So, B, if it is acquired without intellectual integrity, produces a “negative infinity” of utility both under G and ~G. If the risk of dis-integrity is high enough (perhaps it is determined by personal experiment to be 1), then it is definitely not worth it to adopt B, even at the risk of the torments of hell or the non-enjoyment of heaven.

    Does it make sense to assign infinite value to intellectual integrity? Why not? Nietzsche seems to think you should do that or something like it. Why not assign infinite value to anything whatever? Intellectual integrity might be your best bet for properly assigning infinite value to things.

    As a theist, I am repelled on a (somewhat? rather?) similar ground, but I’ll associate it with Simone Weil rather than Nietzsche.

    If I want to know God, how can I do that? I don’t care about heaven and hell too much, what’s important to me is that I know God. What I really want out of the search for God is communication, not some kind of abstract good. I can’t get communication on my own, it requires some other being to take some kind of initiative in contacting or responding to me. All I can really do is wait. Pragmatism is self-defeating.

    (Pragmatism to a point may not be self-defeating. Perhaps you can get rid of impediments to seeing the truth. But Pascal’s wager seems a bit more absolute than that. Infinite utility seems to merit all-out effort. It could motivate any behavior, including self-deception, or filling my own communicative channels with fake, thus non-, communication.)

    This is an argument against any proof of God, I suppose, or anything else that “I work” in order to “make God exist”, not in the sense of disproving the proofs, but in the sense of showing them to be “unpragmatic”. Pascal’s wager is pragmatic, but, according to what kind of aims? According to heaven as something that doesn’t require communication with God. But what if we require the heaven of communication with God? Or we don’t even care about heaven?

    (Can we say one goal is better than another?)

    (I’m ignorant of the specifics of expected utility theory and can imagine that what I’ve written actually falls under #20 in some way.)

  6. Hi, Amanda
    In point 10 you state “The near enough strategy isn’t going to work, unless you add the premise that you ought to treat even extreme-utility outcomes that you have a sufficiently low credence in as though you had credence 0 in them. That seems like a bad principle.”
    Well, I think it’s fair to assume that, if our credence in the existence of God is not zero, even though it may be near zero, then our credence in the existence of Darth Vader or of Santa Claus is also near zero (but not zero). All evidence suggests that all of them are characters created by men in specific narratives that can be traced back to specific times and/or specific texts, therefore credence in them can reasonably be treated as being equivalent.
    In that case, if you’re not willing to “treat even extreme-utility outcomes that you have a sufficiently low credence in as though you had credence 0 in them”, then you have as much reason to believe in God as you have to believe in “the force” and train with light sabers to become a powerful Jedi and save the universe from absolute evil.

    1. I agree! For what it’s worth, I don’t think Pascal’s wager actually tells you what to do. I suspect that the correct decision theory for dealing with these cases will say things like ‘try to perform actions with a greater probability of generating infinite value than those with a lower probability of generating infinite value’, but this doesn’t mean that religious hypotheses will win out. Pascal’s wager has been traditionally used to support religious hypotheses, but I think it’s worth noting that the argument itself is neutral about what kind of action is best.

  7. “Answer: Pascal’s wager as I’ve described it employs the standard mathematics of infinite cardinals plus standard expected utility theory and uncertainty across normative principles.”

    Nope. The theory of cardinals can not apply to probabilities other than 0 and 1 because non-natural real numbers have no corresponding cardinal.

    “Utility functions that are bounded above and below can prevent both positive and negative infinite forms of Pascal’s wager. But there are some obvious drawbacks to this response: 1. It’s ad hoc. What other reason do we have to think we don’t have an unbounded concave utility function over happiness (that isn’t’ just a result of our inability to adequately handle large numbers)?”

    You can have an unbounded utility function that still doesn’t admit infinite utility. Allowing arbitrary real amounts of utility is different from allowing infinite utility. This passage reinforces my suspicion (triggered by the one above) that the author doesn’t actually understand much about infinity.

    1. Yeah, I probably shouldn’t have mentioned Cantorian cardinalities here because they extend the naturals rather than the reals. Really we would use something like the surreals – I’ve edited it to reflect that. All we need is that if value of the action is the value of the outcomes multiplied by the probability of the outcomes, then anything that gives you an output when your outcome space includes non-zero, non-infinitesimal probabilities of infinitely valued outcomes won’t assign finite value to the action. On your second point, my argument wasn’t that unbounded utility functions will necessarily result in infinite utility being assigned to actions, but rather that you can prevent infinite utility being assigned to actions by appealing to bounded utility functions. (For what it’s worth, I wrote this post very quickly a few years ago. There are definitely errors in it, but I’ve been reticent to change it too much because it feels a bit dishonest. I think the core ideas are probably fine, but I wouldn’t necessarily stand by the details.)

  8. There does not appear to be any compelling evidence to believe Infinite Utility (or God) can/does exist. Unfortunately, I am already stuck with this brain that only believes things it actually thinks are true or likely to be true. Shall I use my rationality to convince myself into brainwashing myself into believing in God? Scandalous.

    But if you must admit the (infinitesimal) possibility of infinite utility, then I suppose one should wager in favor of it. However, which wager you make is a whole different problem. I don’t feel like you properly addressed the many gods issue. Which one should I pick? Any one? Can I make one up? Infinite utility, if I understand you correctly, would more than make up for the unlikeliness of my invented god, right? How can I make a rational choice from infinite, equally valuable/likely choices?

  9. I think before you can even get started here you’ll have to say how we’re going to do decision theory with infinities. Each action available to me has some expected utility which is a surreal number, right? Now, am I assuming there are only finitely many actions available to me?

  10. Bounded utility functions is my preferred answer to Pascal’s Wager, and I did not find your response to it convincing.

    1. Calling bounded utility functions “ad hoc” implies that unbounded utility functions are in some sense more natural than bounded utility functions, and that an agent’s behavior in Pascal’s Wager-type situations is not a good test of whether its utility function is bounded.

    It seems to me that bounded utility functions are more natural, since unbounded utility functions can’t converge everywhere, and thus an agent maximizing an unbounded utility function can’t be VNM-rational unless the probability distributions on which the utility function does not converge are arbitrarily excluded from consideration (also, a utility function is continuous in the strong topology if and only if it is bounded).

    And I also think that Pascal’s Wager-type situations are good tests of the boundedness of utility functions. If you wanted to know whether an agent’s utility function was bounded, how would you do it? The natural thing to do would be to take some pair of outcomes where it is known that the agent prefers one over the other (e.g. status quo versus losing $5), and then look for extreme outcomes such that the agent will sacrifice the difference between the original pair for arbitrarily low probabilities of those extreme outcomes happening or not happening. If there are such extreme outcomes at arbitrarily low probabilities, then the agent seems to have an unbounded utility function; and if you find a probability low enough that there is no sufficiently extreme outcome, then you’ve found a bound on the utility function. This is pretty much what Pascal’s Wager-type thought experiments are doing, and people reliably reject Pascal’s Wager-type arguments, so it seems very natural to conclude from this that their utility functions are bounded.

    2. Just because there is a bound does not mean that the bound is achievable. It’s true that bounded utility functions imply that extra units of happiness can have arbitrarily small values compared to avoiding bad outcomes as you approach the bound, but this does not seem counterintuitive to me.

    3. I don’t see a problem with utility functions being both bounded and altruistic (in the sense that the well-being of others can account for a large amount of utility). In fact, I don’t think altruism should have much bearing on whether a utility function is bounded.

    4. Our preferences are whatever they are. It’s true that using explicit reasoning about utility to help ourselves get good outcomes faces the obstacle that we don’t have an explicit model of our own preferences with 100% confidence (or even close to it), but what to do about this is a tricky question. You can’t take a weighted average over possible utility functions you might have (weighted by probability of being your true utility function), because utility functions are defined only up to positive affine transformation, so that’s not a well-defined operation; and I don’t think it’s all that relevant that performing such an operation would probably give you an unbounded average utility function if you assign nonzero probability to having an unbounded utility function.
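    The convergence claim in point 1 – that an unbounded utility function fails to converge on some lotteries while a bounded one cannot – can be sketched with a St. Petersburg-style lottery. This is my own illustration; the particular payoff and probability schedules are assumptions:

```python
# A lottery where outcome n occurs with probability 2**-n, for n = 1, 2, ...
# Compare partial sums of expected utility under two utility functions.
def partial_expected_utility(u, terms):
    """Sum of probability * utility over the first `terms` outcomes."""
    return sum(2**-n * u(n) for n in range(1, terms + 1))

unbounded = lambda n: 2**n        # each term contributes exactly 1: diverges
bounded = lambda n: 1 - 2**-n     # utility bounded above by 1: converges

for terms in (10, 20, 40):
    print(terms,
          partial_expected_utility(unbounded, terms),   # grows without limit
          partial_expected_utility(bounded, terms))     # settles near 2/3
```

    The unbounded partial sums grow linearly in the number of terms (this is just the St. Petersburg divergence), while the bounded ones converge, which is why an expected-utility maximizer with an unbounded utility function has to exclude such lotteries from consideration.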

  11. I enjoyed this piece. Thanks for sharing your thoughts.

    “‘If you think there’s a >10% chance that there’s a dish in the dishwasher that’s made of china, then you ought to check that the dishwasher is off. You think there’s >10% chance that there’s a dish in the dishwasher that’s made of china. So you ought to check that the dishwasher is off.’ Pascal’s wager doesn’t fallaciously assume characteristics of god any more than this argument fallaciously assumes characteristics of dishes.”

    This example does assume that china is too fragile to be run through a dishwasher. If we are mistaken, and this particular set of dishes is actually capable of being run through a dishwasher, then that undermines the premise. Similarly, Pascal’s wager assumes there is a significant chance that a God exists who can provide an infinite (or functionally infinite) reward in return for belief.

  12. I like your replies to the objections to your model of Pascal’s wager, but:

    I think the model of the wager is missing something: there is a real problem with the infinity part. We do not know the exact quality of heaven or hell. Suppose our priors on the quality of heaven and the badness of hell are, say, exponentially distributed – or follow any other distribution with a finite mean (maybe we think there’s a good chance heaven is just earth 2, and hell is not so great but falls short of eternal burning in a lake of fire, conditional on God turning out to exist). Now we have finite expected utilities for each option conditional on the state of the world where God turns out to exist.

    There is also certainly some extra lifetime utility from being a heathen, unless we are considering a God that requires nothing more than stated belief (I don’t think any religion has such a God), or unless we consider “lying” about the fact that we believe in God (when our true prior on his existence is very low) to be costless (we’d likely be ashamed to say we believe in something we think is low probability). So now the 2×2 matrix is (where B = expected utility of heaven, L = lifetime utility from being a heathen, and H = expected cost of punishment):

    God exists: Believe = B, Not Believe = L - H
    God does not exist: Believe = 0, Not Believe = L

    So an expected utility maximizer chooses to believe in God if and only if for a given probability God exists (P):

    P*B >= P*(L-H) + (1-P)*L = L - P*H

    Or:

    P*(B+H) >= L

    So if P is small and L is not zero, then even for decently large values of B and H, disbelief in God makes sense for an expected utility maximizer – even given the argument in Pascal’s Wager. Of course, my argument fails if B or H is infinite, or if L = 0, but I think that is unlikely.
