Your credences are how likely you think something is given your evidence and your priors, and reporting them can be much more useful than reporting beliefs. Telling you that I believe it’s not going to rain is good if I want you to know that an umbrella is not necessary, but it’s bad if you need to know specifically how likely it is that I think it will rain. If, unbeknownst to me, you have left some electronics outside that will be destroyed if it rains, then it’s important for you to know whether there’s a 1% chance of rain or a 20% chance of rain, but my belief report doesn’t tell you this.

In some circumstances it can also be useful to report on more than just your credences. For example, suppose that as I’m walking down the street I meet six people in a row who all tell me that a building four blocks away is on fire. I reasonably assume that some of these six people have seen the fire themselves, or that they’ve heard that there’s a fire from different people who have seen it. I conclude that I’ve got good testimonial evidence that there’s a fire four blocks away. But suppose that none of them have seen the fire: they’ve all just left a meeting in which a charismatic person, Bob, told them that there is a fire four blocks away. If I knew that there wasn’t actually any more evidence for the fire claim than Bob’s testimony, I would not have been so confident that there’s a fire four blocks away.

In this case, the credence that I ended up with was based on the testimony of those six people, which I reasonably assumed represented a diverse body of evidence. This means that anyone asking me what makes me confident that there’s a fire will also receive misleading evidence that there’s a diverse body of evidence for the fire claim. This is a problem of evidential overlap: when several people independently tell me that they have some credence in P, I have a reasonable prior about how much overlap there is in their evidence. But in cases like the one above, that prior is incorrect. (The same issue arises when I have just one person telling me that they have some credence in P. If it turns out that we both have a high credence in P on the basis of completely different evidence, then I should update more towards P than I would if we had identical evidence for P.)
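
To make the overlap point concrete, here’s a toy Bayesian sketch. All the numbers — the prior on a fire, the likelihood ratio of a report, and the function itself — are illustrative assumptions of mine, not anything drawn from the example above:

```python
# Toy model: how much six fire reports should move my credence depends on
# how many of them are evidentially independent. Assumed numbers throughout.

def posterior(prior, likelihood_ratio, n_independent_reports):
    """Multiply prior odds by the likelihood ratio once per independent report."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio ** n_independent_reports
    return odds / (1 + odds)

PRIOR = 0.01   # assumed base rate: a nearby building being on fire
LR = 4.0       # assumed: a report is 4x likelier if there really is a fire

# Six genuinely independent witnesses:
independent = posterior(PRIOR, LR, 6)

# Six people all repeating Bob: evidentially, just one underlying report:
overlapping = posterior(PRIOR, LR, 1)

print(f"six independent reports: {independent:.3f}")
print(f"one underlying report:   {overlapping:.3f}")
```

On these made-up numbers the two situations come apart dramatically: six independent reports push the posterior above 0.9, while six echoes of Bob leave it below 0.05 — which is exactly the gap my mistaken prior about overlap papers over.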

So it’s sometimes useful to transmit not only your credence, but also the evidence on which that credence is based. When we update on the credences that other people assert, we are updating both on their reading as a thermometer of the evidence and on what we estimate the nature of that evidence to be. One way to avoid mistaken beliefs about the nature of that evidence is to transmit it directly: i.e. for each person to tell me that they are confident that there’s a fire four blocks away because Bob said so.

This isn’t just useful in cases of evidential overlap. To take a different kind of example, suppose that you want to know how likely it is that Alice has asthma. I might think that it’s quite unlikely that Alice has asthma because the base rate is quite low, and tell you that I have a low credence that Alice has asthma. A nurse might also think that it’s unlikely that Alice has asthma because they have tested her for asthma and established that she doesn’t have it, and so they too tell you that they have a low credence that Alice has asthma. Even though our credences might be pretty similar, my credence is not very resilient (resilience is, roughly, how likely you think your credence is to remain the same upon getting more evidence), while the nurse’s credence is very resilient. And in order to know how resilient your own credence should be, you need to know how resilient the credences of those whose testimony you rely on are. In other words, you need to know whether their credence is based on a lot of evidence (like the nurse’s) or on very little evidence (like mine).
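
One crude way to picture resilience is to model each credence as a Beta distribution whose pseudo-counts stand in for how much evidence the credence rests on. This is a hypothetical sketch with made-up numbers, not a claim about how the nurse actually reasons:

```python
# Two agents hold the same credence (0.10 that Alice has asthma) but with
# different amounts of evidence behind it, modelled as Beta pseudo-counts.

def beta_mean(successes, failures):
    """Mean of a Beta(successes, failures) distribution."""
    return successes / (successes + failures)

mine = (1, 9)      # thin evidence: just the base rate
nurse = (10, 90)   # rich evidence: a test result, modelled as many counts

# One new asthma-suggestive observation (a "success") arrives for each:
mine_updated = beta_mean(mine[0] + 1, mine[1])
nurse_updated = beta_mean(nurse[0] + 1, nurse[1])

print(f"mine:  {beta_mean(*mine):.2f} -> {mine_updated:.2f}")   # moves a lot
print(f"nurse: {beta_mean(*nurse):.2f} -> {nurse_updated:.2f}") # barely moves
```

Both credences start at 0.10, but the same new observation drags mine to about 0.18 while the nurse’s barely shifts — the low-resilience credence is the one that swings.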

A final reason to transmit your evidence in addition to your credence is that it lets you calibrate people if you think that they have updated incorrectly, and be calibrated yourself in turn if you have made a mistake. This also lets people see how you update on evidence, and use this information when they weigh your testimony in the future. For example, if you discover that I have a high prior in cultural relativism or that I update too strongly in response to new evidence, you can use this to calibrate how much weight to give to my testimony going forward.

But transmitting your evidence can be costly. For one, we don’t always have a good sense of what our evidence is, and so we may end up just transmitting things like “I think I read this in a journal article once, but it could have been a newspaper column. Really I just have a general hunch that I read it somewhere.” Or even “I have no idea how I know that there’s danger over there, I just sense it.” I actually think it’s useful to identify cases where we don’t know what our evidence is, as long as people don’t mistake this for “I have no evidence”. A larger downside is that transmitting your evidence takes a lot longer than transmitting your credence does. It requires more reflection and simply takes longer to state, especially if it requires additional hedging (“I’m not completely confident about what my evidence is, but I think it’s x to degree n, y to degree m…”).

So when is it a good idea to transmit your evidence rather than just your credence? The value of transmitting your evidence scales with how important it is for your interlocutor to have an accurate credence about the proposition in question, and also with how atypical your evidence is: how much the evidence that you have for the proposition differs from what someone would reasonably expect if they were to hear your credence. If this is right, then when we ask people to do things like forecast important events or estimate the probability of important claims, we should probably ask not only for their credence but also how resilient their credence is and what evidence it is based on.


Thanks to Max Dalton for making substantive contributions to this post.
