
1.7

Suspended,

Many thanks for this reply! I think we are generally in strong agreement, so I’ll just address a few small points.

First, on the Bayesian brain and heuristics, I think we only differ on what counts as a heuristic. I certainly agree that our brains work (in some sense) by having models of the world and updating those models with new information as it comes in, which is certainly Bayesian in spirit. We can quite safely assume, for theoretical reasons, that exact inference is not within the realm of possibility, so there must be some kind of approximation happening. My perspective here is that the fact that there are certain ways in which we can reliably get confused suggests that failures of approximation are not due to some insufficiently precise approximation (e.g., à la Monte Carlo inference, where we would expect failures to be more random), but rather due to the use of some more heuristic mechanism. I can’t really get any more precise than that, as I don’t actually know what I’m talking about, but I just wanted to clarify that I think you took me to mean a greater level of simplicity than I actually intended. My intended meaning was something more like approximate inference with some reliable failure modes.
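To make that contrast a bit more concrete, here is a minimal sketch (in Python, with entirely invented numbers; I don’t mean it as a model of actual cognition). A Monte Carlo estimate of a posterior mean errs randomly, scattering around the truth and tightening with more samples, while a simple anchoring-style shortcut errs in the same direction every time, which is the kind of reliable failure mode I have in mind:

```python
import random
import statistics

# Toy setup: suppose the "true" posterior over some quantity is
# Normal(mean=2.0, sd=1.0). All numbers here are invented.
TRUE_MEAN, TRUE_SD = 2.0, 1.0
PRIOR_MEAN = 0.0  # the prior the heuristic will anchor on

def monte_carlo_estimate(n_samples: int) -> float:
    """Approximate the posterior mean by averaging posterior samples.
    Its error is random: it scatters around zero and shrinks as n grows."""
    samples = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n_samples)]
    return statistics.mean(samples)

def anchoring_heuristic() -> float:
    """A stand-in for a heuristic mechanism: split the difference between
    the prior and the evidence. Its error is systematic: pulled the same
    way every single time."""
    return 0.5 * PRIOR_MEAN + 0.5 * TRUE_MEAN

for trial in range(5):
    mc_err = monte_carlo_estimate(100) - TRUE_MEAN
    heur_err = anchoring_heuristic() - TRUE_MEAN
    print(f"trial {trial}: MC error {mc_err:+.3f}, heuristic error {heur_err:+.3f}")
# The MC errors flip sign from trial to trial; the heuristic's error is
# the same -1.000 on every run, a "reliable failure mode".
```

The analogy is loose, of course, but it captures the bias-versus-variance flavor of the distinction I was gesturing at.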

Now, I do agree that many of these failure modes are rather artificial. However, it is interesting to think through the various ways in which we might have been fooled, even in the ancestral environment. I think there are actually numerous types of visual and auditory illusions we might have been subject to, and at least sometimes confused by, including things like the classic mirage in the desert. One could also categorize the fluctuations in our own thinking as a kind of distortion (whether due to situational cues, various stimulants, or something else), e.g., whether we think we can take on a lion in a given moment or not. And of course there is clear evidence of a predisposition to assume a kind of animacy or intentionality in the causes of things, and a corresponding failure to imagine a more systemic explanation. We could quibble about whether to blame this on heuristics or not, but it does seem to me that there is a predisposition (whether through the prior or the inference mechanism) to end up believing in the presence of some sort of God.

That being said, I do admit that it is foolhardy to assume we know what is in the mind of the deer, and that leaning too hard on consciousness or intentionality can get us into trouble. I often feel that a major difficulty in communication is sustaining the utility of certain terms when they are at risk of collapsing under analysis at a lower level. Yes, it is possible to read everything we do as writing, but this seems to me to wade into waters where writing becomes synonymous with the forward flow of time, i.e., where the movement of atoms is itself just a kind of writing; on what basis could we then distinguish certain types of events in time from others? Whether it is “real” or not, intentionality still feels like a useful frame for thinking about these things, even if it is only something we build into our models of others.

One way of trying to make a distinction could be based on the extent to which we consciously run forward simulations of the possible outcomes that might result from the perceived choices. All of this is nevertheless still just us acting, and yet there does seem to be something special about circumstances in which we try to imagine the likely outcomes of our actions, and even briefly inhabit those mental worlds before acting, as distinct from situations in which we respond without consciously thinking about it. I find this clearest in cases where we are trying to decide among a fixed set of choices, especially when the evidence is insufficient to make the choice obvious. It’s fascinating to me how easily a kind of Kierkegaardian moment of undecidability can enter our daily lives (such as, personally speaking, when I try to decide where to go for brunch).
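As a toy illustration of that distinction (again in Python, with a made-up brunch example and invented payoff numbers), deliberation here means sampling imagined outcomes for each option before choosing, while the reflex simply emits a habitual default:

```python
import random

# Hypothetical options, each with an invented (mean, sd) for how
# enjoyable the imagined outcome tends to be.
OPTIONS = {"diner": (0.60, 0.2), "cafe": (0.55, 0.3), "bistro": (0.50, 0.4)}

def imagine_outcome(option: str) -> float:
    """One imagined 'rollout' of choosing this option."""
    mean, sd = OPTIONS[option]
    return random.gauss(mean, sd)

def deliberate(n_rollouts: int = 50) -> str:
    """Forward simulation: briefly inhabit each imagined world, then pick
    the option whose imagined outcomes average best."""
    scores = {
        opt: sum(imagine_outcome(opt) for _ in range(n_rollouts)) / n_rollouts
        for opt in OPTIONS
    }
    return max(scores, key=scores.get)

def reflex() -> str:
    """No simulation at all: respond with the habitual default."""
    return "diner"

print("deliberated choice:", deliberate())
print("reflex choice:", reflex())
```

Notably, when the imagined payoffs are this close and this noisy, the rollouts barely separate the options, which is roughly where that moment of undecidability creeps in.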

Something similar happens with “manipulation”. I agree that it is hard to draw a clean distinction between something like nefarious persuasion and influence more generally, but if we let “manipulation” expand into simply meaning “influence”, then I expect we would just end up inventing another term to signal the nefarious end of the spectrum. At a minimum, there does seem to be a useful distinction between, say, telling someone what you are trying to do to influence them, versus deceiving them, denying that you are doing what you are doing, or denying that you are doing anything at all.

Regardless, I love your point that even connoisseurs love to be manipulated; they just want it done well, not with hackneyed techniques. It’s fascinating the degree to which we desire certain types of mental states, and the ways we work against ourselves in seeking them out by gradually refining our tastes. There is something delightful about experiencing new ideas in a state of ignorance that I fear gets lost as we begin to know more. Perhaps what becomes most difficult is sustaining a base level of wonder and a capacity to be awed. How shall we seek out the most stimulating intellectual experiences once we have ensconced ourselves in what feels like a relatively coherent worldview?

PM