Stochastic Fools

The poltergeist in the (social science) machine

In my last post I wrote about why I thought the election would be determined by people whose decision-making processes are inscrutable and probably as close to true randomness as a human being can get. Today I’m going to have more fun and work out that concept, which I’ve been thinking about in some form for a long time now, in a little more detail. I use the term Stochastic Fools, and I think they (or, rather, the lack of awareness of them) are responsible for lots of different things inside the social sciences writ large, including some of the problems of polling, replicability, and conflicting findings. Bold claims, I know, but stick with me for a second.

I found Stochastic Fools in roughly the same way scientists detect black holes: by looking at an apparent blank space that’s having effects on stuff we can see. Roughly speaking, I’m hypothesizing the existence of a couple of big categories of people, broken down by how they make decisions, and arguing that a (theorized) group has some characteristics that can explain some different things we observe. My working definition: a Stochastic Fool is a person who engages in a specific type of action repeatedly, but without developing a consistent or rationally-chosen methodology for that action. As young children lack object permanence, we might say that Stochastic Fools lack idea permanence in particular areas. Because they lack a consistent methodology and a stable apportionment of means to ends, which reasons-for-action they will respond to is unpredictable.

Let me give a simple example. Generally, we assume that if we do a certain task repeatedly, we will improve over time. Maybe this is a skill or task at a new job, maybe it’s learning to operate a new tool or software program. The first time we try to do something like this, we may flail around a bit, trying different things to see what works best to deliver the results we want. We may not find the most efficient or optimal technique, but eventually we expect to find something that works reliably well, at the least. But imagine if we didn’t do that: every time we tried to use the software, we clicked through buttons and menus in a more or less haphazard way, never really understanding the function of the program or how to use it. In this case, our output would be, from a certain perspective, pretty close to random. We are probably more likely to get bad outcomes, but because we’re mostly just poking around (rather than consistently making one kind of error) the specifics of those bad outcomes will be pretty unpredictable. Sometimes we might get a good outcome, but when we do, it will be a matter of luck, not skill.

In the example above, we would be acting as what I call a Stochastic Fool. A fool, because we aren’t really learning from our mistakes and we aren’t acting in a way that rationally apportions means to ends. Stochastic, because the kind of foolishness we display isn’t the result of a single clear (and therefore reliable and predictable) error but rather a generalized ineptitude. To stretch the idiom a bit, we aren’t repeatedly stepping on the same rake as much as we are bumbling around in the dark, stepping on a bunch of different rakes of varying lengths and hitting ourselves in new ways each time. 

Compare this with two other archetypes: the rational actor and the predictably irrational actor. The rational actor has a clear, and more or less functioning, connection between means and ends. They desire X, they believe (correctly) that Y will get them X with some degree of reliability, and so they do Y in order to get X. The predictably irrational actor, by contrast, has a clear and consistent but nonfunctioning connection between means and ends. They desire X, they believe (falsely) that Y will reliably get them X, and so they keep doing Y but not getting X. 

About rational actors nothing more needs to be said here. One may be tempted to doubt the existence of the predictably irrational actor; however, I offer two examples to reassure you that they exist. Think first of protestors who engage in kinds of protest actions that make them at best a nuisance and at worst a real obstacle in achieving their own ends. Their actions are predictable in the sense that they have a firm commitment to using certain tactics, but they are irrational in the sense that those tactics don’t work very well to achieve their stated ends. Think next of people who attempt to apply a correct kind of means, but are very bad at it: perhaps a person wants to form reliable beliefs by having a virtuous response to epistemic authority, but they are very bad at identifying legitimate experts, and so end up following the opinions of the kind of people who present themselves as serious and legitimate but are, in truth, neither. One can be predictably irrational either by consistently applying unsuitable means, or by consistently misapplying otherwise suitable means.

Ok, so we’ve got these three proposed groups of people. It would be irresponsible for me not to offer some caveats here. The main one is that I think people can and do move between the groups, either in different categories of action (someone can be a rational actor in their job and predictably irrational in political life, etc), or at different points in their life. I certainly do not mean to suggest that some people are a kind of permanent intellectual elite, with a corresponding underclass of fools. If anything, I suspect it’s the opposite: getting stuff right consistently is profoundly hard, and any of us will probably only develop a few areas of true competency. Furthermore, these groups don’t have strict dividing lines. They’re conceptual tools, not scientific taxonomies. But I do think that lots of cases will tend to fall more or less easily into one or another of the categories.

But because this is a short piece and not one of the books I want to write at some point in the future, I’ll consider this short sketch of the Stochastic Fool to be sufficient for now. This obviously isn’t a full and rigorous empirical argument, it’s a hypothesis that I think might be able to help us grapple with a few phenomena. 

For instance:

  1. Why Biden’s poll numbers aren’t changing. My last piece about polling response to the election is holding up nicely so far, with polling ranging from unchanged to very minor dips for Biden - certainly not a knockout blow. (Note again that this is not an argument either for replacing Biden or for keeping him on the ticket, although I think it is an argument that that decision should probably be made on other grounds.) Read this (from Northeastern University). And this. And this. Notice, from the first one, how many voters who were intending to vote for Biden stayed with Biden. Those are your predictable people, and they’ve picked a side.

  2. What’s going on with polling overall? It seems to be somewhat sketchy, and quite possibly getting worse, the last few election cycles. At the very least, lots of predictions based on polls haven’t been great. My theory offers at least a partial explanation. In times of low party polarization, there are a greater number of legitimate, rational, responsive-to-evidence reasons to be an undecided or swing voter. Those reasons can be studied, understood, and responded to by campaigns. As polarization increases, those reasons drop away. This isn’t “which economic policy will serve my family best,” it’s “do I agree with an authoritarian roadmap or not.” Less wiggle room there! Therefore, whenever pollsters try to zoom in on undecided or swing voters, they’re getting a much higher number of Stochastic Fools. Those people’s responses will be far more erratic across time as they respond to various new things without having a clear, fixed cognitive approach. How, when, and in which direction they’ll respond is going to be debilitatingly hard to measure. It’s not so much that polling is broken, it’s that the fundamental structure of the political landscape has left pollsters looking at the most vacillating group there is. It’s like wrestling with Proteus. If my theory is correct, I think awareness of Stochastic Fools might be pretty important in developing a better understanding and use of polling under at least some conditions.

  3. Conflicting/nonreplicating findings in the social sciences. I’m at the most risk of getting too far out over my skis here, because I’m most definitely not a psychologist. But there are some puzzles about seemingly conflicting findings, and there is a replication crisis, within social science writ large. In my M.A. program I read a lot of literature on disinformation and media policy, and I’d read papers that disagreed about the extent to which people were in ideologically filtered information bubbles and how that did (or did not) affect their behaviors and beliefs. I suspect that Stochastic Fools play a similar role here as in political polling: if you’re drawing from a truly representative general sample they’ll create ‘noise’ by being unpredictable and erratic – but probably not a lot of it. If, however, you try to sample groups where they’re overrepresented, they’ll create a lot more noise and you’ll find it harder to replicate your results, just because they’re less likely to approach your test/survey/experiment in the same way twice. They don’t have one overriding intellectual virtue or vice, they’ve got loads of competing vices. 
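The ‘noise’ mechanism in points 2 and 3 is easy to see in a toy simulation. Everything below is invented for illustration (the respondent counts, the share of Stochastic Fools, and the assumption that they answer by coin flip are all mine, not drawn from any real poll): re-polling the same panel over and over, the poll-to-poll swing grows with the fraction of respondents who have no fixed answer.

```python
import random
import statistics

def run_polls(n_respondents=1000, fool_share=0.3, n_polls=500, seed=0):
    """Re-poll the same panel repeatedly and return the poll-to-poll
    standard deviation of candidate A's share. Predictable respondents
    (rational or predictably irrational) give the same answer every
    time; Stochastic Fools effectively flip a coin on each poll."""
    rng = random.Random(seed)
    n_fools = int(n_respondents * fool_share)
    n_stable = n_respondents - n_fools
    # Stable respondents pick a side once, up front, and stick with it.
    stable_votes = sum(1 for _ in range(n_stable) if rng.random() < 0.5)
    shares = []
    for _ in range(n_polls):
        # Only the fools re-decide each time; they are the sole source
        # of variation between polls of this panel.
        fool_votes = sum(1 for _ in range(n_fools) if rng.random() < 0.5)
        shares.append((stable_votes + fool_votes) / n_respondents)
    return statistics.stdev(shares)

low = run_polls(fool_share=0.05)   # broad, representative sample
high = run_polls(fool_share=0.60)  # zoomed in on "undecided" voters
print(f"poll-to-poll stdev,  5% fools: {low:.4f}")
print(f"poll-to-poll stdev, 60% fools: {high:.4f}")
```

The stable respondents contribute nothing to the variation, so a sample deliberately enriched for undecideds, where Stochastic Fools are overrepresented, bounces around several times more from wave to wave than a general-population sample does, even though nothing in the underlying population has changed. That’s the Proteus problem in miniature.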

I think this is quite a promising theory, but for now it’s just a theory. If I’m right, it might be beneficial to try to devise a test for stochastically foolish behavior to see if we can identify when people are acting in this way. Then, if we can devise a way to identify them, we can begin to see how many previously unobserved effects their presence has had on a variety of things, from psychological studies to political polls. I’m not a proper social scientist though, so I’m not the right person to do that. But if you are one, or know one, I’d be happy to consult on a study for a modest fee, payable in beers. :)
