I think you’ve given us an excellent place to start. You’ve hit on many of the issues I expect we’ll be discussing over the longer term, and I won’t attempt to cover them all. Indeed, much of my delay came from trying to push too much into a single response, a classic mistake of someone suspicious of the meanings of things, I find.
I’d like to begin with your questions, rephrased here as:
However, it seems to me there is a question -1 and a question 0 that must be addressed before diving straight in.
Question -1 is deceptively simple: What if I had lied about the questions you asked above? Let us close in on the answer from the absurd. If I had suggested that you had asked me “Do colorless green ideas sleep furiously?” you would not even have been able to interpret the sentence and would have rejected the notion that this could possibly be even a bad paraphrase—I must have lied. Yet, if I had instead suggested that you previously asked me “Who does the driver think of as the author of the signal represented by the traffic light?” you might have seen me as taking a large interpretive leap, but would perhaps have kept yourself from berating me in order to move the conversation forward. Part, if not most, of my reply is showing you that I understood what you said. This is of necessity: I must point at the common grounding points so we know on which axes we can rotate our perspectives in order to keep them interlocked. The points I do not ground in response to you, we will do away with or explore as this conversation finds need.
Although much of what is said has more to do with how conversation flows than what it means, I would argue these are entwined far more deeply than we allow ourselves to countenance in most discussions about communication. This is largely due to a lack of vocabulary. We have very few means of saying why certain things were implied, so we simply say we felt them to be, but these (currently) intangible qualities of a conversation lend different epistemic statuses to implications.
Conversation has a life of its own. We expect certain structures out of conversations, even if the number of possible conversations is massive: more than the number of atoms in the universe, as the common benchmark for combinatorial comparisons goes. I am trying very hard to understand your intended meaning, but I would argue that a more primary goal of mine (and likely of yours too) is to create a public structure, one we believe others can use. We wish not to descend into jargon, or to relate personal details we can ground our interpretation in but that will be useless to the “general” reader we are inferring. We stop ourselves from acting as if we understand things too well when they cannot be clearly interpreted, and we try not to produce such utterances ourselves. In doing so, we create a space of inference we would be comfortable making, and if I were suddenly to rely on a key personal detail only you would understand, you would look for the “public meaning” of what I was trying to say first. Thus, we can sculpt the interpretive tools of our conversant through our framings. But of course we know that: Why do I choose to send some messages to co-workers over email rather than Facebook?
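The combinatorial benchmark is easy to check with rough, assumed numbers: even a modest vocabulary and very short conversations produce more possible word sequences than the roughly 10^80 atoms usually cited for the observable universe.

```python
# Rough illustration with assumed numbers (both are my own placeholders):
# a 10,000-word working vocabulary and conversations only 25 words long.
VOCAB_SIZE = 10_000
CONVERSATION_LENGTH = 25

# Every position can hold any vocabulary word, so the count multiplies.
possible_conversations = VOCAB_SIZE ** CONVERSATION_LENGTH  # 10^100

ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80  # common order-of-magnitude estimate

print(possible_conversations > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True
```

Real conversations are far more constrained than this free-for-all counting, which is precisely the point: the structure we expect is what cuts the space down.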
In this conversation it is clear that we are trying to get something done via this public interpretive status. And it is always this “something” that overtakes understanding, because there are inevitably distinctions that will be lost, but which, when pointed out, we would shrug at and say: “Well, that wasn’t really the point, anyway.” Having a point isn’t merely a matter of interpretive magnitude; it is a matter of how the vector of one part of interpretation projects onto the vector of the direction we share in this conversation, which might reduce its magnitude to almost nothing.
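The projection metaphor can be made literal. Treating an interpretation and the conversation’s shared direction as vectors is purely an illustrative assumption, but it shows how a “large” interpretation that sits nearly orthogonal to the shared direction contributes almost nothing to the point:

```python
import math

def projection_magnitude(interpretation, direction):
    """Length of the component of `interpretation` lying along `direction`."""
    dot = sum(a * b for a, b in zip(interpretation, direction))
    norm = math.sqrt(sum(b * b for b in direction))
    return dot / norm

# A large interpretation, mostly off-axis from the conversation's direction.
interpretation = [0.1, 10.0]  # magnitude ~10, but nearly all off-topic detail
direction = [1.0, 0.0]        # the shared point of the conversation

print(projection_magnitude(interpretation, direction))  # 0.1
```

A magnitude of roughly ten collapses to a tenth once projected: the distinction was real, it just wasn’t the point.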
It is clear that people often are not able to explicitly define what they are getting done: What are you “getting done” when you’re having fun with friends? Developing bonds, giving your brain those delicious pro-social chemicals, etc., but those aren’t things we’re very good at reasoning about as goals, in comparison with arguing that you should get a raise. This inability to explicitly define goals has made various people suspicious of whether human behavior is fundamentally goal-driven, but I would argue that’s the wrong thing to become suspicious of here. We should think of things economically: people are always trading resources. Time, affection, and interpretive depth are all resources. We should not ask whether people are optimizing these things according to some strategy. Rather, the only really defensible position is to assume that people are optimizing these things as well as they can, and if we don’t see how, we have either (a) misdefined what that optimization objective is or (b) become blind to some bottleneck that makes an apparent optimization impossible. This is the case against most common economic “irrationalities” that people like to joke about, e.g. the marshmallow test. People are not stupid. But they have not been optimized for lab conditions, and they have no reason or even ability to change their entire optimization landscape to match what you claim is your laboratory setting—and specifying that setting properly is something scientists are notoriously bad at, and disincentivized from doing, the larger the claim they’d like to make.
This is a very general principle. Anything can be viewed as optimizing a mathematically accurate description of itself. If we can’t frame people that way, it is simply for a lack of the right conceptual vocabulary or data. My assumption, going forward, is that people have been trained to act in their best interests, and that communication is really just one facet of that.
This brings us to something you were wondering at the end of your letter: Why is understanding one’s boss so much more similar to Knapp & Michael’s notion of meaning than so many of the examples that we use on a daily basis? It is because, in the act of understanding your boss, describing people as optimizing for their own happiness coincides with their trying to predict another person’s perspective. Whatever someone chooses to do, it is usually useful to know what the boss would like and what they expect to happen. If they are to obey the plan set forth by the boss, they must understand it or else fail through negligence. If they are to disobey the order, it is useful to understand the narrative the boss expects to unfold until the moment of disobeying, in order to prepare correctly for it and to understand what the fallout will look like. In other situations, it is simply not as clear how much one “should” try to get at the intended meaning.
If you’re having a dinner party with Marsha, and she keeps trying to tell you her idea for reforming the education system and you’re just not getting it, wouldn’t it be wise to laugh, smile, and say “Oh, indeed!”? In fact, it might be very useful for you to do so even if you do believe you could understand what is being said.
If you are ordering food at a restaurant and the waiter appears displeased, but is trying to hide it, can you use this information in any meaningfully useful way? You could be a bit nicer, but deeper interpretive depth is unlikely to help you.
When you notice a student in your class always uses examples involving Sonic the Hedgehog, is it worth investigating? If you have time, you might discover the new hit meme all the kids are abusing, but if you don’t, you’re unlikely to reap the benefit.
If you notice a pattern in your stock ticker, is it worth it to research it? You are unlikely to have the competency, information, and time to beat the market, so no.
If your partner keeps sighing whenever you bring up the upcoming trip you’re going on, is learning more a worthwhile investment? Usually yes, because they have influence over your long-term happiness.
We investigate what’s worth investigating, and that has a lot to do with our own power and the power the person communicating holds over us. That’s why, despite Game Theory’s hilariously sparse reductionism, it is a useful backbone that I think we will keep coming back to.
The answer is none, but it’s a matter of framing. The words doing the hard work in the above are “understanding” and “engaging”.
It would be easy to pick apart “intended meaning” and “communication” and say that the edges of these ideas fray, but this would be the classic rationalist mistake: attacking the most clearly delineated ideas that fray at the edges, because it is their partial clarity that makes them open to attack. I think they are useful fictions and I think we can work with them. “Understanding” and “engaging” will not make such pleasant bedfellows.
When do we try to understand someone’s communications? I think a very clear case we can point to is immediately after they have said something in conversation. But surely I’m not the only one who, in their fervor to respond to a previous point, slips into single-mindedly searching for a segue into the thought they would like to share? This presents a case of me limiting the depth of the interpretation, obviously, but it also presents a secondary task I’m pursuing: looking for associations in the other party’s speech that allow me to chain to an idea I already have in mind. This doesn’t seem to be “intended meaning”.
The trick here is the same as ever: I have a goal in my mind, and I’m using the world to achieve the goal, in this case someone else’s words. This is not a special case; this is the norm. The reason it feels like a special case isn’t because I’m not trying with all my might to understand you, but because my goal is so obvious to myself, and I am so single-minded, that I am in danger of exposing my goals. In general, it is not pro-social to be so obvious about goals, nor is it strategic to have only one. We, as a social species with complex communication, have evolved (both genetically and culturally) to support the simultaneous pursuit of a number of goals. Many of these goals are not encoded in our minds as thoughts unless they become deeply problematic: the goal of finding a community we can be happy in is only present in one’s mind when one is truly isolated, like many of us are under lockdown in 2020. But even when one is mildly comfortable in a community, we still seek to find the strongest network of bonds within that community we can.
A number of these goals, in fact, are basically just emergent properties that humans have learned to rely on, and aren’t the same for every generation. Look at what COVID-19 has done to people mentally: complaints of tenseness, mood swings, inability to concentrate, etc. abound. It is rarely my goal to “see someone in real life today”, but it sure as hell is something I would like to optimize for, and I now optimize for audio and video contact as a substitute.
Understanding is muddied by goals, because we want to prioritize, and that prioritization makes us reason about other people’s communications in ways that they didn’t necessarily intend. By itself, this might be complex, but not painfully complex. The pain comes in from the fact that people are aware other people are optimizing for their own goals and can be misled. They use expected behavior both to consider what not to say (to stop you from going down a path they don’t care about, e.g. what don’t you mention when talking to your parents, even as an adult?) and to get you to go down certain routes. For instance, people are drawn to gossip. I see many people use this when they want someone else’s help problem-solving: they present the situation as a juicy piece of gossip they will share, then lead into the question of what they should do. They are not lying, and their intention is to present things as gossip, but I would argue that this significantly confuses the notion of intention that is usually tacitly assumed.
What about “engaging” in communication? Well, I think that’s what we’ve already been talking about, right? It’s my belief that engaging in communication is the definition of what it means to be trying to understand things and that the “understanding” part is just how to talk about the planning stage most directed at considering the other person’s actions.
Consider the example you gave of tracking an animal. As you note, many would suggest this is not a communication at all. But consider a slave cleverly escaping their bondage by faking the signs of where they had been: they are certainly engaging in a transfer of information. In order to do so, they need to think about what the other person is saying through their actions: that the escapee’s personhood will not be respected, and that the signs of passage will be treated like those of a clever animal that has been set loose. This example is perfectly situated in the liminal space between communication and pure game-theoretic planning, because there is mutual understanding to a very limited extent but not a mutually accepted relationship. So information is communicated as if it were planning, but it still requires real understanding of the other party that can go a number of layers into recursive theory of mind.
Take just one step more towards communication, e.g. the escapee defending their right to freedom in court against the slaver, and you will see people purposefully misunderstanding each other as a matter of public interpretation. The “understanding” is just a part of the engaging, and the engaging is just a very vague way of talking about being involved with someone else in almost any kind of relationship.
This isn’t to criticize your phrasing, which I think is basically the way I would have said it too, but to say that if we break it up by the vocabulary we mislead ourselves. A better way to see things, in my view, would be to:
I appear to have only really dug into your first question, though I think it was a necessary detour to get some grounding with questions -1 and 0. The rest of your questions need getting to, but for now I will only burden you with a few final thoughts.
This letter was, essentially, a lot of complaining, and I’m unhappy with that. I like your letter better—you dig into examples we need to think about. Which makes me think: what are examples where we feel people are clearly communicating? I think it’s harder than it seems, at first, to put our fingers on the “communicating part”. We’re sure that two friends gossiping in the hallways of a school are communicating, but when we describe them it’s hard to say where the communication is. They’re making jokes, gossiping, passing little bits of judgement, bragging, comparing scores, looking at other people, making plans, and yet it’s only little snippets of this that can be directly described as “information transfer” the way communication is usually awkwardly framed. We’ve got to dig in, and it might be good to really dig in on narrow instances. Driving is an interesting one, because the capacity of the information channel is so small, but drivers do clearly try (and often fail) to send signals to each other. What about fictional scenarios? It might be helpful to world-build from the bottom up, because we can draw from real rituals without the fear of getting things wrong, only showing examples of how things play out.
The other thing I’d like to do more of, is start playing around with useful tools. You’re right—our vocabulary actively impedes our understanding. Instead of trying to name the ultimate objects of our desire, let’s name the things we think we’re more sure of. I’ll start:
Pigeon Projection (n) — when a person is confronted with something in a space where they already have a strong categorization system, they will tend to sort confusing objects into an old bucket in the categorization system rather than making a new one. Comes from the “pigeonhole principle” in discrete mathematics. Could be useful in understanding the practicalities of how interpretation becomes bounded.
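As a toy sketch of the definition (the categories, items, and threshold below are all invented for illustration), a nearest-bucket assignment will always force a new item into some existing category, however poorly it fits, unless the system explicitly allows opening a new bucket:

```python
def assign(item, prototypes, threshold=None):
    """Place `item` into the nearest existing category; open a new
    bucket only if a threshold is given and every fit is worse."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best = min(prototypes, key=lambda name: dist(item, prototypes[name]))
    if threshold is not None and dist(item, prototypes[best]) > threshold:
        return "NEW CATEGORY"
    return best

# Two old buckets, described by made-up feature prototypes.
prototypes = {"bird": [1.0, 0.0], "fish": [0.0, 1.0]}

# A platypus-like oddity: without a threshold it is forced into an
# old bucket; with one, the poor fit is allowed to open a new bucket.
print(assign([0.6, 0.6], prototypes))                 # "bird" (forced fit)
print(assign([0.6, 0.6], prototypes, threshold=0.5))  # "NEW CATEGORY"
```

The interpretive point is in the default: with no threshold, the system cannot even represent “this fits nothing I know”, which is exactly how interpretation becomes bounded.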
We believe people work a certain way, but we tend to get wrapped up in top-down description. I doubt we have the words for that kind of map yet, but I want to know what kind of symbols to expect on the map, to be able to think about what it should look like. We need nodes and edges to play around with on this still nebulous graph.