I think the model of the wager is missing something; in particular, there is a real problem with the infinity part of the model. We do not know the exact quality of heaven or the badness of hell. Suppose our priors on the quality of heaven and the badness of hell are, say, exponentially distributed – or follow any other distribution with a finite mean (maybe we think there’s a good chance heaven is just Earth 2 and hell is not so great, but not outright eternal burning in a lake of fire, conditional on God turning out to exist). Then we have finite expected utilities for each option conditional on the state of the world where God turns out to exist.

There is certainly some extra lifetime utility from being a heathen, unless we are considering a God that requires nothing more than stated belief (I don’t think any religion has such a God), or unless we consider “lying” about believing in God (when our true prior on his existence is very low) to be costless (we are likely ashamed to say we believe in something we think is low probability). So now the 2×2 matrix is (where B = expected utility of heaven, L = lifetime utility from being a heathen, and H = expected cost of punishment):

God exists: Believe = B, Not Believe = L – H

God does not exist: Believe = 0, Not Believe = L

So an expected utility maximizer chooses to believe in God if and only if, for a given probability P that God exists:

P*B >= P*(L-H) + (1-P)*L = L – P*H

Or:

P*(B+H) >= L

So if P is small and L is not zero, then even for decently large values of B and H, disbelief in God makes sense for an expected utility maximizer – even given the argument in Pascal’s Wager. Of course, my argument fails if B or H is infinite, or if L = 0, but I think that is unlikely.
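The decision rule above can be sketched numerically. The probabilities and utilities below are made-up, purely illustrative values:

```python
# Sketch of the rule derived above: believe iff P*(B + H) >= L.
def believe_is_optimal(p, b, l, h):
    """Return True iff belief maximizes expected utility, i.e. P*(B+H) >= L."""
    return p * (b + h) >= l

# Even with large finite B and H, a small enough P favors disbelief:
print(believe_is_optimal(p=1e-9, b=1e6, l=50.0, h=1e6))  # False: P too small
print(believe_is_optimal(p=0.1, b=1e6, l=50.0, h=1e6))   # True: P large enough
```

The point is just that once B and H are finite, there is always some P small enough that the L term dominates.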

“‘If you think there’s a >10% chance that there’s a dish in the dishwasher that’s made of china, then you ought to check that the dishwasher is off. You think there’s >10% chance that there’s a dish in the dishwasher that’s made of china. So you ought to check that the dishwasher is off.’ Pascal’s wager doesn’t fallaciously assume characteristics of god any more than this argument fallaciously assumes characteristics of dishes.”

This example does assume that china is too fragile to be run through a dishwasher. If we are mistaken, and this particular set of dishes is actually capable of being put through a dishwasher, then that undermines the premise. Similarly, Pascal’s wager assumes there is a significant chance that a God exists who can provide an infinite (or functionally infinite) reward in return for belief.

1. Calling bounded utility functions “ad hoc” implies that unbounded utility functions are in some sense more natural than bounded ones, and that an agent’s behavior in Pascal’s Wager-type situations is not a good test of whether its utility function is bounded. It seems to me that bounded utility functions are more natural, since expected utility can’t converge everywhere for an unbounded utility function, and thus an agent maximizing an unbounded utility function can’t be VNM-rational unless the probability distributions on which expected utility does not converge are arbitrarily excluded from consideration (also, a utility function is continuous in the strong topology if and only if it is bounded). And I also think that Pascal’s Wager-type situations are good tests of the boundedness of utility functions. If you wanted to know whether an agent’s utility function was bounded, how would you do it? It seems to me the natural thing to do would be to take some pair of outcomes where it is known that the agent prefers one over the other (e.g. status quo versus losing $5), and look for other, more extreme outcomes such that the agent will sacrifice the difference between the original two outcomes for arbitrarily low probabilities of those extreme outcomes happening or not happening. If there are such extreme outcomes at arbitrarily low probabilities, then the agent seems to have an unbounded utility function; and if you find a low enough probability at which there is no sufficiently extreme outcome, then you’ve found a bound on the utility function. This is pretty much what Pascal’s Wager-type thought experiments are doing, and people reliably reject Pascal’s Wager-type arguments, so it seems very natural to conclude from this that their utility functions are bounded.

2. Just because there is a bound does not mean that the bound is achievable. It’s true that bounded utility functions imply that extra units of happiness can have arbitrarily small values compared to avoiding bad outcomes as you approach the bound, but this does not seem counterintuitive to me.

3. I don’t see a problem with utility functions being both bounded and altruistic (in the sense that the well-being of others can account for a large amount of utility). In fact, I don’t think altruism should have much bearing on whether a utility function is bounded.

4. Our preferences are whatever they are. It’s true that using explicit reasoning about utility to help ourselves get good outcomes faces the obstacle that we don’t have an explicit model of our own preferences with 100% confidence (or even close to it), but what to do about this is a tricky question. You can’t take a weighted average over the possible utility functions you might have (weighted by the probability of each being your true utility function), because utility functions are only defined up to positive affine transformation, so that’s not a well-defined operation; and I don’t think it’s all that relevant that performing such an operation would probably give you an unbounded average utility function if you assign nonzero probability to having an unbounded utility function.
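The boundedness test described in point 1 can be sketched in code. The utility functions, probabilities, and candidate outcomes below are hypothetical stand-ins chosen for illustration, not anything from the comment itself:

```python
import math

# Two illustrative utility functions over some outcome scale:
def u_bounded(x):
    return 1.0 - math.exp(-x)   # bounded above by 1

def u_unbounded(x):
    return float(x)             # no upper bound

def compensating_outcome_exists(u, p, gap, candidates):
    """Would a probability-p shot at some candidate outcome outweigh a sure
    utility loss of `gap` (the difference between the two reference outcomes,
    e.g. status quo versus losing $5, with the status quo normalized to 0)?"""
    return any(p * u(x) >= gap for x in candidates)

candidates = [10, 1e3, 1e6, 1e9, 1e12]
gap, p = 0.01, 0.001  # p below gap: no bounded-by-1 utility can compensate
print(compensating_outcome_exists(u_bounded, p, gap, candidates))    # False
print(compensating_outcome_exists(u_unbounded, p, gap, candidates))  # True
```

For the bounded function, once p falls below the gap divided by the bound, no outcome is extreme enough; for the unbounded one, some outcome always compensates, no matter how small p is.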

But if you must admit the (infinitesimal) possibility of infinite utility, then I suppose one should wager in favor of it. However, which wager you make is a whole different problem. I don’t feel like you properly addressed the many-gods issue. Which one should I pick? Any one? Can I make one up? Infinite utility, if I understand you correctly, would more than make up for the unlikeliness of my invented god, right? How can I make a rational choice from infinite, equally valuable/likely choices?

Nope. The theory of cardinals cannot apply to probabilities other than 0 and 1, because non-natural real numbers have no corresponding cardinal.

“Utility functions that are bounded above and below can prevent both positive and negative infinite forms of Pascal’s wager. But there are some obvious drawbacks to this response: 1. It’s ad hoc. What other reason do we have to think we don’t have an unbounded concave utility function over happiness (that isn’t just a result of our inability to adequately handle large numbers)?”

You can have an unbounded utility function that still doesn’t admit infinite utility. Allowing arbitrary real amounts of utility is different from allowing infinite utility. This passage reinforces my suspicion (triggered by the one above) that the author doesn’t actually understand much about infinity.
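A quick numerical sketch of that distinction, using a made-up finite-mean prior: with the unbounded (but everywhere finite) utility u(x) = x and an exponential distribution over outcomes, the expected utility is finite:

```python
import random

# Monte Carlo sketch: an unbounded utility u(x) = x still yields a finite
# *expected* utility under a finite-mean prior, here Exp(lam) with mean 1/lam.
random.seed(0)
lam = 0.1  # illustrative rate parameter; the true mean is 1/lam = 10
samples = [random.expovariate(lam) for _ in range(100_000)]
mean_utility = sum(samples) / len(samples)
print(mean_utility)  # close to 10 -- finite, despite the unbounded utility
```

Every individual utility value is an ordinary real number, and the expectation converges; nothing infinite enters the picture.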

In point 10 you state “The near enough strategy isn’t going to work, unless you add the premise that you ought to treat even extreme-utility outcomes that you have a sufficiently low credence in as though you had credence 0 in them. That seems like a bad principle.”

Well, I think it’s fair to assume that, if our credence in the existence of God is not zero, even though it may be near zero, then our credence in the existence of Darth Vader or of Santa Claus is also near zero (but not zero). All evidence suggests that all of them are characters created by men in specific narratives that can be traced back to specific times and/or specific texts, therefore credence in them can reasonably be treated as being equivalent.

In that case, if you’re not willing to “treat even extreme-utility outcomes that you have a sufficiently low credence in as though you had credence 0 in them”, then you have as much reason to believe in God as you have to believe in “the force” and train with light sabers to become a powerful Jedi and save the universe from absolute evil.