
Laying The Foundation for a Theory of Stacked Language Games: Strategic Self-Representation of Futures

i. Intelligence is fundamentally strategic

ii. A difference that makes a difference

iii. “All communication is manipulation,” explained

iv. Communication as bargaining

v. The plausibility structure of self-representations

Intelligence is fundamentally strategic

  1. There are many definitions of intelligence, but most revolve around the ability to adaptively apply knowledge—that is, to learn about one’s environment, and put that learning to use. Intelligence always has an end; for most organisms, this end is bringing about a preferred world—steering out of undesirable futures and into preferred ones. We can reductively call this “futures optimization”; perhaps more completely, it has been called “computationally-frugal cross-domain future-steering.”1
  2. As many thinkers have pointed out, the modern human environment is fundamentally social: many of the moving pieces and relevant factors in a given optimization problem are, or stem from, other people.2 (Other dynamic, future-optimizing agents.) This is also true, to a lesser extent, in the animal kingdom: many of the threats (e.g. predators) and resources (e.g. prey) of the natural environment are dynamic, future-optimizing agents. (Flora, climates, and geology, on the other hand, are all relatively static, allowing one-way, rather than infinitely regressive two-way, modeling.)
  3. In other words, our environment—and therefore our intelligence, which is a model of our environment3—is fundamentally strategic. Schelling supplies our definition: If a strategy game is any situation in which each player’s best choice of action depends on the actions (he expects) the other player will take (and vice-versa, reflexively), then strategy refers to methods of optimizing outcomes within such a game. It is the study of conflicting parties’ behaviors as they are premised on “the interdependence of the adversaries’ decisions and on their expectations about each other’s behaviors.”
  4. Rock, Paper, Scissors is a simple game of strategy. “I played rock last round, so they’re likely to believe I won’t play rock again, and won’t themselves play paper.” Mutual modeling enters a problematic infinite regress in theory; in practice, it gets overly heady once you hit my model of your model of my model. This type of thinking is sometimes called “level-k thinking,”4 and is an integral part of games like poker or cultural practices like fiction.5 It is the premise of the famous Princess Bride poisoned-cup scene, and is implicit in a Keynesian Beauty Contest6 or “Guess ⅔ the average” game. In this frame, a level-0 player acts irrespective of his predictions of others’ actions; a level-1 player models his opponents as (naive) level-0 players; a level-2 player models his opponents as level-1 players, which is to say, as players whose expectations are themselves set by a level-0 model. And so on.
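
As a toy illustration of how level-k reasoning plays out, consider the “Guess ⅔ the average” game. The sketch below is only a sketch, and it assumes the standard convention (not stated above) that level-0 players guess uniformly at random over 0–100, averaging 50, while each higher level best-responds to a population one level below it:

```python
# Level-k reasoning in the "Guess 2/3 of the average" game.
# Assumption: level-0 players guess uniformly over 0-100 (mean 50);
# a level-k player best-responds to opponents it models as level-(k-1).

def level_k_guess(k: int, level0_mean: float = 50.0) -> float:
    """Guess of a level-k player: 2/3 of the average it expects from level-(k-1) opponents."""
    guess = level0_mean
    for _ in range(k):
        guess *= 2 / 3  # best response to the expected average one level down
    return guess

for k in range(6):
    print(f"level-{k} guess: {level_k_guess(k):.1f}")
# 50.0, 33.3, 22.2, 14.8, 9.9, 6.6 -- guesses fall toward 0 (the Nash
# equilibrium) as modeling depth grows, though real players rarely
# reason past level 2 or 3.
```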

A difference that makes a difference

  1. Intriguingly, Chris Fields and Michael Levin in a recent paper (2020) suggest that meaningfulness is less a property of language and more a property of cognition; it helps guide attention, focus, and informational priority. What is meaningful is meaningful in light of the past and in service of the future. It is an organism’s environment filtered through its memory and leveraged toward its goals.
  2. «Bateson famously defined a “unit” of information as a “difference which makes a difference.” As Roederer points out, information so defined is actionable or pragmatic; it “makes a difference” for what an organism can do. It is, therefore, information that is meaningful to the organism in a context that requires or affords an action, consistent with sensory-motor meaning being the most fundamental component of language as broadly construed. It is in this fundamental sense that meaning is “enactive.”»7 What makes a difference toward an organism’s possible and optimal moves—his agency and strategy—is what makes a difference to his future.
  3. Compare Murray Davis’s concept of “interestingness”: the interesting is that which surprises, which denies—contradicts or otherwise updates—the cognitive schema of the perceiving entity. This is more or less a rephrasing of Bateson’s definition of “information”—information which is redundant is not, to the receiving intelligence, information. And it is mirrored by findings in neuroscience that it is specifically the amount of visual surprisal—how unexpected or unpredicted an area of a visual field is, given the perceiver’s existing understanding—which directs our gaze. We are drawn, unconsciously, to that which is interesting—because that which is interesting is that which we can learn from, that which our existing model of the world failed to predict.
  4. Here is Friston describing his predictive cognition hypothesis as it works hierarchically: “One can regard ascending prediction errors as broadcasting newsworthy information that has yet to be explained by descending predictions.” (Emphasis added.)

“All communication is manipulation,” explained

  1. If the environment in which one’s future will occur is, in large part, a product of other actors, then an actor X will naturally gravitate toward strategies which manipulate those actors’ behaviors toward states which are (expected to be) optimal for X. If other actors are, similarly, premising their own behaviors on their anticipations and understandings of X, then a crucial part of strategy becomes self-representation: portraying, to other strategic actors, a future state which is most likely to cause them to act in the way most advantageous to X.
  2. This is wordy and complex-seeming in the abstract, but is revealed as intuitive and obvious when we supply an example: A manager’s decisions strongly affect the future of an employee. The manager makes decisions based (ostensibly) on what is best for himself, which to some extent is proxied via what is best for the company. He makes these decisions by modeling what he believes the employee’s future behavior will be. Thus, if an employee has been habitually late, it is in the employee’s interest to self-represent in such a way that the manager’s model of his future behavior aligns with the manager’s own self-interest in keeping him on. For instance, a good strategy may be to attribute past lateness to a broken car which has now been fixed, or to an old address on the opposite side of town where he no longer lives. The employee is representing his own future in such a way that the manager perceives keeping the employee in his current position as the manager’s own optimal future.
  3. This can be meaningfully called “manipulation,” in the sense of “All communication is manipulation; some manipulation is mutually advantageous.”8 That is, speech is uttered when it is expected to improve the situation of the utterer (otherwise, it would be best left un-uttered). The vast majority of speech is believed, in advance and by the utterer, to improve (rather than worsen) the speaker’s situation, just as the vast majority of actions are performed because the actor believes they will improve their situation (even if only indirectly, by advancing a cause or interest they care about).
  4. For instance, I may notify my walking companion that I’m feeling hungry, in order to hopefully (and without explicitly asking or bargaining) steer our future toward a meal. Even if I tell you something “so that you know it,” or signal something “so that you may think it,” the altering of your mental state is, ostensibly, in the service of some future action. If I communicate my sense of charity by relaying a recent deed, it is perhaps to strengthen our alliance in ways that cash out in reality, in the future—that is, beyond the realm of mere thought and opinion. (This, again, is the enactive theory of meaning discussed in a previous section.)
  5. We can understand “All communication is manipulation” in the light of another conceptual framework—signaling theory. Whether factored by economics or evolutionary theory, the outline, for our purposes, is the same: Signals are behaviors whose primary purpose is to influence other, responsive organisms’ behaviors in turn. That is, a signal by definition is dispatched for the benefit of the signaler. A bird’s tail-feathers protruding from bushes, in a way (he does not realize is) visible from outside, is not a signal to passing animals, but an unintentional or unavoidable cue. Something—the presence of a bird—is “evidenced” by the tail-feathers’ protrusion, but it is not an act of communication by the bird; it is not an intentional attempt to manipulate other organisms’ behavior. Such cues are typically to the detriment of the cueing organism, because they provide information to other organisms, who will use it (e.g. an owl using the rustle in the bushes to help track its prey, or a rival crow noticing where the first bird’s stash is hidden, the better to steal from it). Intentional communication here is signaling, should we stick to strict definitions: behavior intended to modify other organisms’ behaviors.
  6. More elegantly, “communication facilitates long-term changes in generative models,”9 and generative models facilitate long-term changes in behavior.
  7. If we start with the premise that communication is fundamentally a strategy for manipulating other organisms, and that language is a strategy for manipulating other people, the concept of stacked language games becomes implicit in how language is practiced. That is, players of unflattering language games should be expected to publicly portray these games as more flattering and noble than they are. Players with especially antisocial or uncooperative strategies should be expected to cloak them in prosocial or cooperative framings. All players will engage in self-advancing strategies, but they will vary in the bounds and constraints they put on their behavior, as well as in the extent of their secondary interests (such as promoting the welfare of another person or a superorganism alongside their own self-promotion).
  8. I have called the inevitable situation in which people are judged by their optics, and accordingly optimize for their optics, the “optikratic” character of society. Since self-representation and communication are one and the same, we can consider stacked language games a subset of the larger game of appearances (“opticsmization”).

Communication as bargaining

  1. Let’s pull back, briefly, to Schelling’s factoring of bargaining and “mixed-motive” games. This will give us a better picture of the kinds of situations within which human actors optimize their futures and manipulate one another.
  2. Classic game theory focuses on games that are purely zero-sum or purely positive-sum. In a zero-sum or “fixed-sum” game, each player’s earnings come at the cost of their competitor’s; there is a fixed amount in the pot, and thus the players’ interests are in direct opposition. In a purely coordinative positive-sum game, it is not the total sum but the proportion (“split”) between players that is fixed—and if the players successfully work together, they achieve the mutually and individually optimal outcome of increasing both pots (at fixed rates). Slicing up a pie is roughly zero-sum; making pies is roughly positive-sum.
  3. “Mixed-motive” is Schelling’s term to describe games which are neither fixed-sum (pure conflict) nor fixed-proportion (pure coordination). As he tells it, the vast majority of human games—indeed, all conflict short of wars of annihilation—are mixed-motive. (A toy payoff sketch at the end of this list makes the contrast concrete.)
  4. «To characterize the maneuvers and actions of limited war as a bargaining process is to emphasize that, in addition to the divergence of interest over the variables in dispute, there is a powerful common interest in reaching an outcome that is not enormously destructive of values to both sides. A “successful” employees’ strike is not one that destroys the employer financially.»10
  5. Bargaining typically occurs either on the basis of mutual improvements over a status quo—as in marketplace haggling for goods, where both buyer and seller wish to come to a sale—or mutual improvement over some hypothetical future state, as in the case of boycotts, strikes, and extortion. In either case, agreement is preferable to stalemate, even as the specific terms by which agreement is reached are highly contested. In a mixed-motive game, stalemate means “both sides lose.”
  6. We can return to the example of telling my walking companion that I am hungry. My companion may not himself be hungry, but may agree to stop somewhere (perhaps—finding a compromise—at a to-go eatery, where he will be less inconvenienced) in order to avoid spoiling the evening, should my mood worsen through hunger. Or perhaps we both wish to eat (together) this evening, but have different preferences on when and where. That is, we both wish to come to some agreement, so that we might eat together—we have an interest in common—but we also have conflicting interests, and there is a give-and-take required so that both parties can agree on a dinner plan. By alerting my companion to my hunger, I have publicized my preferences and thereby indicated what kinds of dinner plans I would be amenable to (imminent ones). What appears on its face to be the neutral disclosure of factual information—“I am hungry”—is anything but.
  7. While some bargaining is explicit, most is tacit. “Adversaries watch and interpret each other’s behavior, each aware that his own actions are being interpreted and anticipated, each acting with a view to the expectations that he creates.”11
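
To make the contrast between fixed-sum and mixed-motive games concrete, here is a minimal sketch; the payoff numbers are invented purely for illustration:

```python
# Toy payoffs, invented for illustration: (player_A, player_B).

# Fixed-sum: splitting a pie of size 10 -- A's gain is exactly B's loss.
fixed_sum_outcomes = {"A gets 7": (7, 3), "A gets 3": (3, 7)}

# Mixed-motive bargain: two possible agreements plus stalemate.
# The parties rank the agreements oppositely, but both agreements
# beat stalemate for both sides -- the "powerful common interest."
mixed_motive_outcomes = {
    "agreement on A's terms": (7, 4),
    "agreement on B's terms": (4, 7),
    "stalemate": (1, 1),
}

for name, (a, b) in mixed_motive_outcomes.items():
    print(f"{name}: A={a}, B={b}")
# Either agreement dominates stalemate for both players; the conflict
# is only over *which* agreement -- the essence of a mixed-motive game.
```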

The plausibility structure of self-representations

  1. Imagine John and Lou are walking down the sidewalk in opposite directions, approaching each other on a narrow path. They will run into each other soon if they keep walking forward. One strategy is to “perform obliviousness,” as Ken Liberman calls it.12 You act as if you don’t know or care that the other person is there, making yourself into an immovable fact of nature (looking at your phone is a contemporary tactic to this end). If Lou is performing obliviousness, then John will be able to reliably, confidently predict Lou’s path forward, and steer clear of him. But should they get into more recursive modeling, e.g. John looks at Lou’s path and attempts to move around it, simultaneous with Lou looking at John’s path and attempting to move around it, then they will end up in the infamous side-to-side shuffle of sidewalks everywhere. Two people can be ten yards apart, going back and forth horizontally, getting in each other’s way. It’s in both actors’ interest to get out of the other’s way, but it’s also in each one’s interest not to move, or to move the least, be it out of laziness (i.e. minimized energy expenditure) or status-mongering.
  2. I want to suggest that this process of self-legibilization (in cases of pure coordination) or strategic self-representation (in cases of conflict) is foundational to all social life, precisely because it allows positive-sum coordination, or secures other selfish advantages for the self-representing actor. The way we dress, the way we behave, the patterns of our conversations all attest to this. We should think of manipulation as distorting someone’s priors so they’ll act a certain way. One could also distort an actor’s values, but values are, in general, harder to budge than priors: they often exist independent of facts and are usually deeply ingrained. It’s much easier to, say, exploit someone’s existing belief in justice, while fudging the details of who is responsible for a crime, than it is to undermine their concept of justice.
  3. Problems emerge because, in many cases, humans can (and do) represent one way while acting another way. There’s nothing inherently binding about language, as “Parable of the Dagger” points out. But you can rig up situations, in mixed-motive games, where a self-representation is more or less plausible, and can be trusted enough to allow coordination.
  4. Imagine a game of “chicken,” as in the old ’50s James Dean movies. Say you’re driving toward another car, betting on who’s going to turn away first. If you throw your steering wheel out the window or spray paint over your windshield in advance, there is no way to steer or to react in time, respectively. You’ve self-bound to a course of action. Your opponent no longer has a choice; if they don’t turn away, the two cars will crash. You know that your opponent is a rational agent, or at least rational enough to want to live. So you’ve won. Your opponent has to get out of the way, because they know for a fact that you won’t. It’s better for them to lose face than run into you and die. You’ve established a single Schelling point, a single outcome the system will coordinate to if no further communication or changes can take place. (A toy payoff sketch at the end of this section illustrates the logic.)
  5. This is an example of self-binding by making certain behaviors physically impossible, but more often self-binding occurs by making certain behaviors incredibly costly, e.g. a company staking its reputation on a charitable pledge, where the bad press from not donating as promised would far exceed the actual cost of donation. Indeed, “self-binding” can be seen as a variant of costly signaling—costly signaling cast into the future: It will cost me greatly if I decide to deviate from the advertised course of action. This costliness makes it trustworthy.
  6. Another example of self-representing futures comes in deterrence theory. By self-representing a certain course of action (e.g. massive retaliation) premised on another party’s course of action (e.g. a nuclear strike), we influence that party’s understanding of what will result from their own actions—thereby changing their actions.
  7. Another example from Schelling: «If a man knocks at a door and says that he will stab himself on the porch unless given $10, he is more likely to get the $10 if his eyes are bloodshot. Similarly, a state that has destroyed cities with nuclear weaponry in the past is more credible in threatening massive retaliation.»13
  8. This is similar to the example of chicken examined earlier. It is a natural product of the level-k regress of mutual modeling. We have already established that, if my future is premised on other actors’ actions, and they are basing their own actions on their anticipation of my future actions, then it is in my interest to self-represent my future course of action in such a way as to bring about actions in others which are in my own best interest—to manipulate. Because self-representations can be strategically deployed this way, observing parties do not trust just any representation, but rather those that are in some way costly, self-bound, staked, or otherwise dependable.
  9. Similarly, a self-representation that is in the representing party’s best, rational interest will typically be believed by default—e.g., if you throw your steering wheel out the window, I will veer to avoid crashing—whereas claims to future behavior seemingly not in the representing party’s self-interest are liable to be disbelieved.
  10. A third way trust is established is through slow, gradually increasing reciprocal exchanges. This historical precedent, and the relationship it builds, allow otherwise risky transactions to take place in the future, opening a possibility space of alliance and aid. I may only trust my neighbor to watch my children if we have already reciprocally invited one another to dinner, presented housewarming and holiday gifts, loaned out our cars, etc.
  11. Hopefully these solutions from game theory point to ways stacked language games (SLGs) can be “collapsed,” from a double set of books to a single set. Future letters will attempt to describe the particulars of SLGs, building on the psychological and strategic foundation laid here.
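
As a closing illustration of the chicken example from earlier in this section, here is a minimal payoff sketch. The numbers and the best_response helper are invented for illustration; the point is only that once the opponent believes the swerve option is literally gone, their best response flips:

```python
# Toy payoffs for "chicken," invented for illustration: (me, opponent).
PAYOFFS = {
    ("swerve", "swerve"): (0, 0),         # both lose a little face
    ("swerve", "straight"): (-1, 2),      # I chicken out, opponent wins
    ("straight", "swerve"): (2, -1),
    ("straight", "straight"): (-10, -10), # crash
}

def best_response(my_remaining_options):
    """Opponent's best action, given which moves they believe I can still make."""
    def opponent_payoff(opp_action):
        # The opponent assumes I rationally pick my best remaining move.
        my_move = max(my_remaining_options,
                      key=lambda m: PAYOFFS[(m, opp_action)][0])
        return PAYOFFS[(my_move, opp_action)][1]
    return max(["swerve", "straight"], key=opponent_payoff)

print(best_response(["swerve", "straight"]))  # 'straight': each side is tempted to hold course
print(best_response(["straight"]))            # 'swerve': the wheel is out the window; they must yield
```

In the symmetric case both drivers reason the same way about each other, which is exactly the unstable standoff that the commitment move, and by extension a credible self-representation, is designed to break.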

Notes

  1. “Optimization,” LessWrong 

  2. See also Friston’s “A Duet for One”: «…our sensations are largely generated by other agents like ourselves. This means, we are trying to infer how our sensations are caused by others, while they are trying to infer our behaviour: for example, in the dialogue between two speakers. We suggest that the infinite regress induced by modelling another agent—who is modelling you—can be finessed if you both possess the same model. In other words, the sensations caused by others and oneself are generated by the same process.» 

  3. cf. the Good Regulator Theorem in cybernetics 

  4. See Cognitive Hierarchy Theory 

  5. Anticipating audience response is a crucial part of artistic subversion, a maneuver I argued is fundamental to modernist and avant-garde practices in Rutten & Reason’s 2018 “A Predictive Hermeneutic.” 

  6. A beauty contest in which judges compete to pick the entrant which is most selected-for by other judges. 

  7. Fields and Levin, “How Do Living Systems Create Meaning?” 

  8. Reason, “Economics Thinking” 2020. 

  9. Friston & Frith, “Active inference, communication and hermeneutics.” 

  10. Schelling, The Strategy of Conflict.

  11. Schelling. 

  12. “The Local Orderliness of Crossing Kincaid.” 

  13. Schelling.