Sometimes when people publicly give to charity or adopt a vegan diet or support a cause like Black Lives Matter, they get accused of ‘virtue signaling’. This is a criticism that’s always bothered me for reasons that I couldn’t quite articulate. I think I’ve now identified why it bothers me and why I think that we should avoid blanket claims that someone is ‘signaling’ or ‘virtue signaling’. In order to make things clear, I’m going to give a broad definition of signaling and note the various ways that one could adjust this definition. I’m then going to explain what I think the conditions are for signaling in a way that is morally blameworthy and the difficulties involved in distinguishing blameworthy signaling from blameless behavior that is superficially similar.

In order to discuss signaling with any clarity, we need to try to give some account of what a signal is. The term was originally introduced by Spence (1973) who relied on an implicit definition. I’m not a fan of implicit definitions, so I’m going to attempt to give an explicit definition that is as broad and clear as I can muster, given that ‘signal’ and ‘signaling’ are now terms in ordinary parlance as well as in different academic fields.

A signal is, at base, a piece of evidence. But we don’t want to call any piece of evidence a signal. For one thing, a signal is usually sent by one party (the sender) and received by another (the recipient), so it’s communicated evidence. Moreover, the evidence is usually about a property of the sender: I can signal that I’m hungry or that I’m a good piano player or that I like you, but not that the sky is blue or that it will rain tomorrow (we can imagine an even broader definition of a signal that includes these, but let’s grant this restriction). And we communicate this evidence in various ways: by, for example, undertaking certain actions, saying certain things, or having certain properties. Putting all this together, let’s give the following definition of a signal:

A signal is an action, statement, or property of the sender that communicates to the receiver some evidence that the sender has some property p

Note that, under this broad definition, ‘trivial signals’ and ‘costless signals’ are possible: we can signal that we have a property by simply having it. We can also signal things at no cost to ourselves. I don’t think this is a problem: most of the signals we’re interested in just happen to be non-trivial signals or costly signals (e.g. incurring a cost to convey private information).

Of course, one way we give information about ourselves is by simply telling people. If I’m hungry, I can turn to you and say “I’m hungry”. In doing so, I give you testimonial evidence that I’m hungry. Because you’re a good Bayesian, how much you will increase your credence that I’m hungry given this evidence depends on (a) how likely you think it is that I’m hungry (you’re less likely to believe me if I just ate a large meal) and (b) how likely you think it is that I’d say I’m hungry if I wasn’t (you’re less likely to believe me if I have a habit of lying about how hungry I am, or if I have an incentive to lie to you in this case). And sometimes my testimony that I have a given property just won’t be sufficient to convince you to a high enough degree that I have the property in question. For example, if I’m interviewing for a job, it’s probably not sufficient for me to say to you “trust me, I know python inside and out” because it’s not that common to know python inside and out, and I have a strong incentive to deceive you into thinking I know more about python than I actually do. (As a side note: this gives us a strong incentive to adopt fairly strong honesty norms: if you’re known to be honest and accurate about your abilities and properties even when you have an incentive to lie, you’ll have to rely less on non-testimonial signals of those abilities and properties.)
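The Bayesian update behind (a) and (b) can be made concrete. The sketch below is purely illustrative: the probabilities are made up, and the helper function is my own, not anything from the text.

```python
# Illustrative Bayesian update on the testimony "I'm hungry".
# All probabilities are made-up numbers for the example.

def posterior(prior, p_say_if_true, p_say_if_false):
    """P(hungry | says "I'm hungry") via Bayes' theorem."""
    numerator = p_say_if_true * prior
    evidence = numerator + p_say_if_false * (1 - prior)
    return numerator / evidence

# (a) Your prior that I'm hungry is 0.3 (say, I ate a while ago).
# (b) I almost always report hunger when hungry (0.9) and rarely
#     claim it when I'm not (0.1), i.e., I'm a fairly honest reporter.
honest = posterior(0.3, 0.9, 0.1)

# If I habitually exaggerate (I'd claim hunger half the time even
# when I'm not hungry), the very same testimony moves you far less.
liar = posterior(0.3, 0.9, 0.5)

print(round(honest, 3), round(liar, 3))  # → 0.794 0.435
```

The honest reporter's testimony lifts your credence from 0.3 to about 0.79; the exaggerator's identical words only get you to about 0.44, which is the incentive for honesty norms noted above.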

I know that others (like Robin Hanson, in this post) want to exclude direct testimony of the form “I have property p” as a signal. We could exclude this by adding to the definition the condition that “I have property p” isn’t the content of the agent’s assertion, but I think this is unnecessarily messy: it’s just that we’re less interested in signals that are given via direct testimony. Also, some cases of signaling do seem to involve assertions of this sort. If I find it very difficult to tell people I love them, then the act of saying “I love you” may be a very credible signal that I love you. It also happens to be the primary content of my assertion.

In cases where we can’t sufficiently raise someone’s credence that we have some property p with our testimony alone, they require additional, non-testimonial evidence to become sufficiently confident that we have it (where the property itself may be a gradational one: e.g., that I’m competent in python to degree n, and not just that I am competent in python simpliciter). For example, I can give you evidence of my competence in python by showing you my university transcripts, or simply by demonstrating my abilities. When I do so, I raise your credence that I am competent in python to the degree that you would require to give me a job, which I wasn’t able to do with testimony alone.

In scenarios like this, there’s an optimal credence for you to have in “Amanda has property p” from my perspective, and there’s an optimal credence for you to have in “Amanda has property p” from your perspective. You — the receiver — probably just want to have the most accurate credence that I have property p. Sometimes it’s going to be in my interest to communicate evidence that will give you a more accurate credence (e.g., if I genuinely know python well, I want to communicate evidence that will move you up from your low prior to one that is more accurate), but sometimes I want to make your credence less accurate (e.g., if I don’t know python that well, but I want to convince you to give me the job). Let’s say that the sender value of a signal is how valuable the resultant credence change is to the sender, and the accuracy of a signal is how much closer the signal moves the receiver towards having an accurate credence.
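One way to make the accuracy half of this distinction concrete is to score a signal by how much it shrinks the gap between the receiver’s credence and the truth. The scoring function below is my own illustration with invented numbers, not a standard measure:

```python
def signal_accuracy(prior, posterior, truth):
    """How much closer the signal moves the receiver toward an
    accurate credence (positive = more accurate, negative = less).

    truth is the ideal credence: 1.0 if the sender really has the
    property, 0.0 if she doesn't.
    """
    return abs(truth - prior) - abs(truth - posterior)

# I genuinely know python well: moving you from a low prior (0.2)
# to a higher credence (0.7) makes your credence more accurate.
informative = signal_accuracy(prior=0.2, posterior=0.7, truth=1.0)

# I don't know python well: the same credence shift has high sender
# value (it may get me the job) but negative accuracy.
misleading = signal_accuracy(prior=0.2, posterior=0.7, truth=0.0)

assert informative > 0 > misleading
```

The same credence change can thus score positively or negatively on accuracy while having identical sender value, which is exactly the wedge between the two quantities defined above.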

Hanson argues that we cannot signal that we have properties that are ‘easy to verify’ because if a property is easy to verify, then it is cheap for the receiver to check whether my signal is accurate. I think that it will often be less rational to send costly signals of properties that are easy to verify, but I don’t think we should make this part of the definition of a signal. Suppose that I am in a seminar, and I ask a naive question that any graduate student would be afraid to ask because it might make them look foolish. As a side effect of this, I might signal (or, rather, countersignal) that I am a tenured professor. Such a thing is easy enough to verify: someone could simply look up my name on a faculty list. So if my primary goal were to signal that I am a tenured professor, there would be easier methods available to me than asking naive questions in seminars. But we can signal something even when doing so is not our primary goal. And this seems like a genuine instance of signaling that I am a tenured professor, despite the fact that this information is easily verifiable.

Finally, signals sometimes involve costs to the sender. Hanson argues that costly signals are required in cases where a property is more difficult to verify or cannot be verified soon. I think the details here are actually rather tricky, but one thing we can say is that the costlier it is for any receiver to verify that I have a given property, the higher the minimum absolute cost of sending a true signal will be. It doesn’t follow that sending the signal will be net costly to me, just that the absolute cost will be higher. For example, suppose that to be deemed competent as a pilot you need to do hundreds of hours of supervised flying (i.e., you can’t just take a one-time test to demonstrate that you’re a competent pilot). The property ‘is a competent pilot’ is then quite hard to verify, and so the cost of sending a true signal involves hundreds of hours of supervised flying. But if I love flying and am more than happy to pay the time and money cost to engage in supervised flying, then the net cost to me of sending the signal might be negligible or even zero, even though the absolute costs are quite high.

So far I have argued that a signal can simply be understood as an action, statement, or property of the sender that communicates to the receiver some evidence that the sender has some property p. Such signaling will be rational if the benefit that the sender will acquire by sending the signal is greater than the cost of sending it. But one remaining question is whether signaling must be consciously or unconsciously motivating. By ‘motivating’ I just mean that the benefits of sending the signal are part of the agent’s reasons for undertaking a given action (e.g., doing something, speaking, acquiring a property). We might be unconsciously motivated by the signal value of something: for example, I might think that I’m playing the flute because I love it, even though I am unconsciously motivated by a desire to appear interesting or cultured. We can also be motivated to greater or lesser degrees by something: for example, it might turn out that if I could never actually demonstrate my flute-playing abilities to others, then I’d only reduce my flute-playing by 5%, in which case only 5% of my flute-playing was motivated by the signal value it generated.

I’m going to assume that signaling doesn’t require being motivated by signal value. This means that my signaling something can be a side-effect of something I would do for its own sake. Some people might think that in order for me to be ‘signaling’, sending the signal must be a sufficient part of my conscious or unconscious motivation. For example: it must be the case that I would not undertake the action were it not for the signaling value it afforded. If this is the case, then 5% of my flute playing would be signaling in the case above, while 95% of my playing would not be signaling. I can foresee difficulties for views that have either a counterfactual or threshold motivational requirement for signaling, and so I’m going to assume that I can signal without being motivated by signal value. The reader can decide whether they would want to classify unmotivated signaling as signaling (and economists seem to reserve the term for signals that are both motivated and costly).

I think we can now divide signaling into four important categories that track how accurate the signal is (i.e., whether the sender actually has the property to the relevant degree) and how motivated the agent is by the signal value. I’ll label these as follows:

Innate signaling involves sending an accurate signal without being consciously or unconsciously motivated by sending the signal. If a child is hungry and eats some bread from the floor for this reason alone, then she is innately signaling hunger to anyone who sees her.

Honest signaling involves sending an accurate signal that one is consciously or unconsciously motivated by. If a child is hungry and eats some bread from the floor to show her parents that she is hungry, then she is honestly signaling hunger.

Deceptive signaling involves sending an inaccurate signal that one is consciously or unconsciously motivated by. If a child is not hungry and eats some bread from the floor to get her parents to believe that she is hungry and give her sweets, then she is deceptively signaling hunger.

Mistaken signaling involves sending an inaccurate signal that one is not consciously or unconsciously motivated by. If a child is not hungry and eats some bread from the floor because she is curious about the taste of bread that has fallen on the floor, then she is mistakenly signaling hunger to anyone who sees her.
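The four categories are just the cells of a 2×2 grid over accuracy and motivation; here is a minimal sketch of that grid (the function and flag names are mine, not the author’s):

```python
def classify_signal(accurate: bool, motivated: bool) -> str:
    """Place a signal in the 2x2 taxonomy.

    accurate:  does the sender actually have the property to the
               relevant degree?
    motivated: is the sender consciously or unconsciously motivated
               by the signal value?
    """
    if accurate:
        return "honest" if motivated else "innate"
    return "deceptive" if motivated else "mistaken"

# The four hungry-child examples:
assert classify_signal(accurate=True, motivated=False) == "innate"
assert classify_signal(accurate=True, motivated=True) == "honest"
assert classify_signal(accurate=False, motivated=True) == "deceptive"
assert classify_signal(accurate=False, motivated=False) == "mistaken"
```

Treating the two inputs as booleans is a simplification; as the next paragraph notes, both dimensions really come in degrees.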

Since motivation and accuracy come in degrees, signaling behavior comes on a spectrum from more honest to more innate, and more deceptive to more mistaken, and so on. (If you think that agents must be consciously or unconsciously motivated to send a signal in order for them to be signaling, then innate signaling and mistaken signaling will not be signaling at all. I have shaded these darker in the diagram above to reflect this.)

So when is it unethical or blameworthy for agents to engage in signaling? It seems pretty clear that innate signaling will rarely be unethical or blameworthy. If an agent innately signals that she is selfish, then we might think that she is unethical or blameworthy for being selfish but not that she is unethical or blameworthy for signaling that she is selfish. The same is true of mistaken signaling. If an agent is not negligent, but mistakenly signals something that is not true — for example, she appears more altruistic than she is because someone mistakes a minor act of kindness on her part for a great sacrifice — then we presumably don’t think that she is responsible for accidentally sending inaccurate signals to others. We might think that she can be blamed if she is negligent (e.g., if she had the ability to correct the beliefs). But if her actions were not consciously or unconsciously motivated by their signal value, then we’re unlikely to think that she can be accused of signaling in a way that is unethical.

If this is correct, then most occasions on which we think that agents can be aptly blamed for signaling are when these agents are motivated in whole or in part by the signal value of their actions (in other words, even if we do think that innate signaling and mistaken signaling are possible, we don’t think that they’re particularly blameworthy). But things are tricky even if we focus on motivated signaling, because we have already said that an agent can be consciously or unconsciously motivated by the value of sending a signal. Let’s focus only on motivated signaling and adjust our y-axis to reflect this distinction:


The more that a behavior involves conscious deceptive signaling, the less ethical it is, all else being equal. This is because conscious deceptive signaling involves intentionally trying to get others to believe things that are false, which we generally consider harmful. If I become a vegetarian in order to deceive my boss into thinking that I share her values when I don’t, then the motives behind my action are blameworthy, even if the action itself is morally good.

Unconscious deceptive signaling seems less blameworthy. Suppose that I’m a deeply selfish person but help my elderly aunt once a week. Without realizing it, I’m actually doing this in order to mitigate the evidence others have that I’m selfish. This isn’t as blameworthy as conscious deception, but we might want to encourage people to avoid sending deceptive signals to others. And so here we might be inclined to point out to someone that they are in fact deceiving people, even if they are not doing so consciously.

As I mentioned above, signals can be deceptive to greater or lesser degrees. For example, suppose that I give 10% of my income to charity, but that if I suddenly gained nothing personally from being able to signal my charitable giving, I would only give 8% of my income to charity. Suppose that giving 10% signals “I am altruistic to degree n” and giving 8% signals “I am altruistic to degree m”, where n>m. Let’s call a trait ‘robust’ insofar as one would retain it even after losing the personal gain from signaling that one has it (this is distinct from the counterfactual of not being able to signal at all, since signaling can have moral value). The deceptive signal that people receive is “Amanda is robustly altruistic to degree n” when the truth is that I am only robustly altruistic to degree m. If this is the case, then my signal is much less deceptive than the signal of someone who would give nothing to charity were it not for the self-interested signaling value of their donations.
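The degree of deceptiveness here can be put as simple arithmetic; measuring it as a gap between actual and robust giving rates is my own framing of the point above:

```python
def deceptiveness(actual_giving, robust_giving):
    """Gap between the level of altruism signaled and the 'robust'
    level: what the sender would still do if the personal gain from
    signaling disappeared. A larger gap means a more deceptive signal.
    """
    return actual_giving - robust_giving

# The 10% giver who would still give 8% without any personal
# signaling benefit: the signal overstates by 2 percentage points.
mostly_robust = deceptiveness(0.10, 0.08)

# Someone who would give nothing without the signaling benefit:
# the signal overstates by the full 10 points.
not_robust = deceptiveness(0.10, 0.0)

assert mostly_robust < not_robust
```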

Finally, what about honest signaling? Honest signaling cannot be criticized on the grounds that it is deceptive, but we might still think that honest signaling can sometimes be morally blameworthy. For example, suppose that I were to give 10% of my income to charity and, when asked about it, was explicit that I thought that if I wouldn’t personally benefit from telling people about my giving, I’d only give 8% of my income to charity. I haven’t attempted to deceive you in this case. Nonetheless, we might think that being motivated by self-interested signaling value is morally worse than being motivated by the good that my charitable giving can do because the latter is more robust than the former (the former is sensitive to things like the existence of Twitter or an ability to discuss giving among friends, while the latter is not). I suspect that this is why honest conscious signaling causes us to think that the agent in question has “one thought too many”, while unconscious honest signaling still makes us feel like the person’s motivations could be better, insofar as we don’t think that being motivated by signaling value is particularly laudable.

Note that this criticism only seems apt in domains where we think that self-interest should not be an undue part of one’s motivations: i.e., in the moral domain. We are not likely to chide the trainee pilot if she pays $100 to get a certificate showing that she has completed her training because this is a domain in which self-interest seems permissible. Similarly, the criticism only seems apt if the agent is motivated by the value of the signal for her. If someone advertises their charitable donation to normalize donating and encourage others to donate, then they are motivated by the moral value of their signal and not by its personal value. This motivation does not seem morally blameworthy.

If I am correct here, then critical accusations of signaling can be divided into two distinct accusations: first, that the person is being consciously or unconsciously deceptive, and second, that the person is being motivated by how much sending a signal benefits them personally, when this is worse than an alternative set of motivations: i.e., moral motivations. Since this can be consciously or unconsciously done, the underlying criticisms are as follows:

(1) Conscious deceptive signaling: you are consciously generating evidence that you have property p to degree n, when you actually have property p to degree m, where m ≠ n

(2) Unconscious deceptive signaling: you are unconsciously generating evidence that you have property p to degree n, when you actually have property p to degree m, where m ≠ n

(3) Conscious self-interested motivations: you are being consciously motivated by the personal signal value of your actions rather than by the moral value of your actions

(4) Unconscious self-interested motivations: you are being unconsciously motivated by the personal signal value of your actions rather than by the moral value of your actions

Note that if an agent is signaling honestly then she can only be accused of (3) and (4), but if she is signaling dishonestly then she can be accused of (1), (2), (1 & 3) or (2 & 4).

Claims that one is doing (3) or (4) only arise in the moral domain, and only if the agent is non-morally motivated to send a signal. Even when these conditions are satisfied, the harm of (3) or (4) can be fairly minor and forgivable, especially if the action that the person undertakes is a good one. It’s presumably better to do more good even if we are, to some small degree, motivated by the personal signaling value that doing more good affords. But let’s accept that each of (1) – (4) is, at the very least, morally suboptimal to some degree and that we can be justified in pointing this out when we see it. The question then is: how do we identify instances of (1) to (4), and how do we assess how bad they are?

In order to claim that an agent is engaging in unconscious deceptive signaling, we need to have some evidence that she doesn’t actually have the property to the degree indicated. In order to claim that she is engaging in conscious deceptive signaling, we need to have some evidence that she also knows that this is the case. And in order to claim that an agent has self-interested motives, we have to have some evidence that she is being consciously or unconsciously motivated by the personal signaling value of her actions, and not by their moral consequences (with signal value being mostly a side-effect).

I think that it’s important to note that criticisms of people for signaling must involve one of these components. It’s too easy to claim that someone is “just signaling”, with the implication that they are doing so wrongly, leaving the person in question feeling that they have to defend the claim “I am not signaling” rather than the claim “I am not being deceptive nor being unduly motivated by personal signaling value”.

The key problem we face is that whether or not an agent is signaling inaccurately, and whether or not she is being unduly motivated by self-interest, will often be underdetermined by the evidence. Suppose that you see someone tweet “I hope things get better in Syria.” If you claim that this person is merely ‘virtue signaling’, then you presumably mean that (i) they are consciously or unconsciously trying to make themselves appear more caring than they actually are, or (ii) they consciously or unconsciously sent this message because of the personal value it had for them rather than out of genuine care (or both). But we can’t really infer this from their tweet alone. The person might actually be as caring as this message indicates (i.e., the signal they send is accurate), and they might be motivated by the signal value only insofar as it is impersonally valuable (i.e., because it normalizes caring about Syria and informs people about the situation). Someone might think that if the agent actually cared about people then they would focus on some different situation where more people are in peril, but the person tweeting about Syria might also be focusing on other causes, or they might simply not know how much suffering different situations involve, or they might not believe in that sort of ethical prioritization.

So what counts as evidence that someone is engaged in a morally egregious form of signaling? In support of (1) or (2), we can have independent evidence that the person lacks the property that they profess to have. For example, if someone claims that systemic social change is the most important intervention for the poor and yet does nothing to bring about systemic social change, we can infer that they are not very motivated to help the poor. Insofar as engaging in discussion about the best way to help the poor seems to send the signal that one helps the poor, we can infer that this signal is deceptive. In support of (3) or (4), we can have evidence that the person is unduly motivated by the personal signal value of their action. For example, if someone does the minimum that would be required to make them look good but less than what would be required if they were genuinely motivated to do good, then it seems more likely that they are being motivated by personal signaling value. An example might be a company that makes a token donation to charity in response to a PR disaster. In this kind of case, it seems we have some evidence that the company is trying to appear good, rather than trying to genuinely correct the harm that led to the PR disaster in the first place.

I think we can take a few useful lessons from all this. The first is that it’s a bad idea to simply accuse people of “signaling” because signaling can mean a lot of things, and not all signaling is bad. The second is that if we are going to make such an accusation, then we must be more precise about whether we are objecting because we think they are sending deceptive signals, or because we think they are being unduly motivated by personal signaling value. The third is that we should be able to say why we think they are consciously or unconsciously being deceptive or unduly motivated by personal signaling value, since a lot of behavior that is consistent with blameworthy signaling is not in fact an instance of blameworthy signaling. The fourth is that we should identify how bad a given instance of signaling is and not overstate our case: if someone is only a little motivated by signaling value, whether consciously or unconsciously, then they have hardly committed a grave moral wrong that undermines the goodness of their actions. None of this nuance is captured if the name of the game is simply to see some apparently virtuous behavior and dismiss it as a mere instance of ‘virtue signaling’.

5 thoughts on “Some Noise on Signaling”

  1. Proposed social norm: Where appropriate, explicitly state how much of your motivation comes from signaling value. Signaling value can be selfish (I publicly donate to charity in part so people will like me) or selfless (I publicly donate to charity to normalize the behavior). The former is benign and the latter commendable, provided they’re both acknowledged.

    1. I just discovered that Amanda has a blog! How great is this!

      [As per Michael’s suggestion, I will attempt a postmortem signalling breakdown of the above sentence.
      -Conscious and Honest.
      -Motivated by direct “fairness values” in the form of wanting Amanda to reap some “earned” benefits from creating and disseminating interesting original content: maybe ~30%?
      -Motivated by more direct altruistic considerations, including wanting Amanda to produce more public content for other’s benefit: maybe ~20%?
      -Motivated by selfish, potentially blameworthy values, including “look at me, I read smart people things!” and wanting Amanda and her friends to like me more: maybe ~30%? [There is now a potential for an infinite regress[1] signal here, as the way I phrased that previous sentence was probably partially signalling of this same sort, as is pointing this out, etc.]
      -Motivated by other altruistic type considerations, such as increasing standing/social bonding in the EA community facilitating opportunities for other good work in the future: maybe ~20%?]

      I am not sure how serious Michael was being, so apologies in advance if I am trying to analyze something that was meant flippantly or as a joke. But as the above attempt sort of demonstrates, I think that such a social norm is unlikely to either provide much accurate information or to satisfy the motivations of the (implicit?) accuser. It is also somewhat emotionally difficult, effort-intensive, and definitely has a “one thought too many”[2] feel to it.

      As for accuracy, even while trying sincerely to parse out the above, my motivations felt really opaque to me. I think the thought that sparked the idea of writing the comment was quickly put up for some sort of system one “vote” and got a “yay” response. Trying to figure out why feels a bit like when a court tries to determine “congress’s intentions” when passing a law. There were probably a lot of things that went into it, some contradictory, many based on identity protection, associations, or conditioning, and I am not sure my position “inside my mind” has *especially* privileged insight. Even if I have the transcript of the congressional hearing, I am not really sure I am getting at the underlying causes of the vote going how it did. My error bars around these estimates are huge.

      Additionally, some of the accuracy in reporting my motivations is going to be further obscured by it being difficult to split between instrumental and terminal goals. This makes it hard to figure out the ultimate motivation without fully understanding my full goal portfolio. (And my ultimate goal portfolio is likely undetermined as is.) I want people, and especially certain people, to like me, think I am smart, friendly, honest, etc. but probably partially for instrumental reasons, because it makes it easier for me to pursue various other terminal values. Both directly by having people “help” and indirectly by ensuring I am not too unhappy to function. It is plausibly the case that I want Amanda to think I am friendly so she likes me so that I can benefit by having some of her future decisions directed “towards” me and my goals in some “social capital”[3] sense. I am also plausibly targeting Amanda and her blog in particular because she is brilliant and would be exceptionally well placed to do EA x-risk reduction and crucial consideration-hunting work, which is a terminal value for me, and I think maybe I can influence her a tiny bit on the margin to do more of it, or do more of certain types I suspect are especially valuable. (If I wanted my Civil War reenactment group to be as authentic as possible, I would try to buddy up with the most hardcore guys, so it could be *so awesome*. Selfish? Depends how you mean the word.) Also, because Amanda is so good at this stuff, being “associated” with her provides me further social capital, which I can then use to further push towards doing the most good. The instrumental goal is “gain social capital” but I don’t anticipate spending that on hunting mammoths or raising a barn, but on making the world look more like how I want it to look, and last how long I want it to last. This is why I am not trying to gain social capital among survivalists or Mennonites (whose blogs aren’t as good anyway), but among EAs. I have a sense that EAs are the people who can accomplish what I want accomplished.

      Some amount of wanting to be liked is just a social instinct, and some of it is hedonistically[4] motivated by opportunities to enjoy people’s company, but a lot of it is about building up capital. This is the case with a huge amount of signalling. In some sense then, the question of the altruistic content then bottoms out in what I intend to do with the capital. I think even a hypothetical utility maximizing human saint would do an enormous amount of instrumentally “selfish” actions in order to gain the capacity to do the most good. I am obviously not that, but a portion of my motivation is the same.

      Additionally, I think the issue that the social norm would probably try to address is some sense of phoniness or defensive jealousy. I think that someone giving “one thought too many” does not address phoniness so much as it deepens the sense of unease. Writing this out has left me doubting myself tremendously! I also don’t think it would help with the defensive jealousy, since it is the same as the person making another public sacrifice to do more good still. (“Yep, so humble and honest, what a great guy.”)

      Finally, I think if this became a practice, people would feel averse to doing things that could be perceived as signalling, since it comes with paying this tax, which might not only reduce the amount of good in the world, but also reduce the amount of honestly conveyed information. Writing out this comment started out fun, but by now I am feeling both much more unsure of myself, and increasingly uncomfortable about sharing this. It also took a lot of time and effort to reflect on all of this honestly. More so when you are also trying to question how honest the reflection itself is. Having to try to parse the content of the motivation in order to signal would be like having to define a term every time you use one you are not 100% sure you know especially well or could define.[5] I think you would eventually just stop trying to use “reach” words, which is probably not the desired outcome.

      Okay, no cheating, I will only look these up after:

      1. Infinite regress: This is a hard one. Something that feeds onto itself in some sort of “downward manner” a bit like a fractal that repeats in smaller versions as a pattern. Or when you point a camera at a monitor taking the feed from the camera. I can point out that I am signalling something as a signal, and do that as a signal, and do that as a signal, etc.
      2. “One thought too many”: The idea that identifying another underlying reason for something one level deeper creates some sort of problem, and potentially undermines the original version. This could be a bit like doing something for the “wrong reasons.” For example, if the point of this comment was to signal intelligence or seem likeable, interesting, and like someone worth spending more social time with, I should have avoided sprawling my thoughts, more than one too many, across so many areas I know so little about. [Previous sentence signalling rating: Conscious honest. Mostly signalling genuine humility inspired by a lot of self-doubt the deeper I got into this. Partially “selfishly” trying to preempt looking *too* foolish if I made a mistake by posting this. Some of not looking like too much of an idiot is also altruistically motivated, however, since I intend to do altruistic things with a large portion of my social capital.]
      3. Social capital: The sort-of economic benefits you get from social standing and relationships. This includes things like trust facilitating beneficial interactions, willingness to trade, help with moving house, advice or information transfer, introductions to potential employers, borrowing a tool, etc.
      4. Hedonic: I now regret using this word, especially with this audience. Something like pleasure or enjoyment, but in a sort of biochemical sense. This is often contrasted with a “deeper,” “narrative” type of utility. For example, getting a massage would provide hedonic utility; graduating at the top of your class, narrative utility. I would guess the feeling you get when you reach the top of a mountain is hedonic utility, because of the biochemical rush of joy and accomplishment, but that the moment-to-moment experience of climbing a mountain would be more like the “narrative” type. So if someone says “I like climbing mountains,” they probably mean “like” mostly in the sense of narrative utility. If they say, “I love the rush that comes from reaching the top of a mountain,” they are probably referring to hedonic utility.
      5. This was another exercise that started out fun, but then left me quite unsure of myself. I am unlikely to repeat it. Which I think sort of backs up its strength as an analogy?

  2. Suppose I believe B to be true and I think you believe B+a. I signal B-a with the intention of getting you to arrive at B. Look at this nice Syrian refugee family in our community in which the parents care for their children and have jobs and contribute to the community. Deceptive signaling? Morally blameworthy?

  3. I agree that a lot of behaviour which is consistent with blameworthy signalling is not in fact blameworthy signalling.

    If it’s sufficiently hard to distinguish these, it *might* be good (from a consequentialist perspective) to have social norms which punish such behaviour. This would pay the cost of punishing some unblameworthy behaviour in order to discourage blameworthy behaviour on net. Whether this is in fact a good idea will depend on how much harm the blameworthy behaviour causes, how much it is discouraged, and how much damage is done by punishing unblameworthy behaviour.

    I tend to feel a little uncomfortable about shaming people even for blameworthy actions, and quite uncomfortable shaming them for unblameworthy actions. So I am sympathetic to your conclusion. But I’m not sure it’s such an open-and-shut case.

    [I was going to try to follow the above suggestion of stating how much of my motivation in writing this comment came from signalling value, but I’m finding it very hard to assess. It wasn’t conscious until I examined my motivation. I think there are elements of selfish and selfless signalling, and they are somewhere between 5% and 50% of my total motivation. By the end, my motivation for this comment was about 75% signalling, mostly selfish.]

    1. I think you’re right. My thinking was that we certainly don’t think it’s okay to blame people for blameless actions, and my worry was that a lot of people who claim others are signaling are doing just that: they treat signaling as if it’s something blameworthy, but no one is quite sure what the truth conditions for “blameworthy signaling” are (which I tried to investigate), and so they fall back on the truth conditions for “signaling”, which are also somewhat vague and much broader. But perhaps the fact that I felt the need to write such a long preamble to what signaling might be and what blameworthy signaling might be is some evidence that it’s not that useful to accuse people of signaling in all but the most serious and harmful cases: there are going to be too many false positives otherwise, and it doesn’t seem worth creating a culture of calling out behavior that is in most cases not very blameworthy. [35% signaling]
