What is the Discourse Machine?

Creating an epistemic environment

This is an introduction to what I’ve chosen to call the ‘Discourse Machine,’ which is both the name of this blog and my term for the combination of factors that shape the epistemic environment in which we develop our political beliefs, reason together, and act politically. In this post I’m going to explain what I mean by an epistemic environment and what I mean by ‘discourse machine,’ and then I’ll give a short example of the kind of argument the concept of a discourse machine lets us draw out. I’ll try to keep it short, but this is basically the conceptual framework for a lot of what I plan to write.

Epistemic Environments

Why talk about an epistemic environment? While it’s true that we reason as individuals, we are also always embedded within social contexts. At the most basic level we are exposed to evidence, argument, and truth-claims from other people, and it’s almost inconceivable for us to pursue knowledge in total isolation. But there are other dimensions to the social aspect of our knowledge: sometimes our epistemic actions are influenced by non-epistemic factors, biases or incentives which are linked to our social lives. The term can cover other factors too - a loud concert hall or a room full of people speaking a language you don’t understand would be a poor environment for reliably receiving some kinds of information, from people trying to talk to you, for instance.

Talking about epistemic environments allows us to direct our attention to how our reasoning can go wrong in ways that aren’t really, or at least totally, our fault. Epistemology often starts with a focus on individual factors such as personal epistemic processes, faculties, and virtues. That’s important if we want to reason well. But it’s also important to understand that virtuous and reliable reasoners can still fail to reach the truth without a favorable environment. It doesn’t matter how good your eyesight is or how careful and attentive you are if the room you’re in just doesn’t have enough light.

The other benefit of talking about epistemic environments is that it can be a helpful metaphor for the complexity and interrelatedness of our social epistemic lives. Most of us learn about biological environments in school, and we get a general sense of how small changes can have cascading effects. Environments can be healthy and flourishing, or they can become polluted or damaged; organisms within an environment can adapt to changes. We’ve learned how to change our physical environments to make them better for human life. It just might be the case that we need to think about our epistemic environments in the same way.

The Discourse Machine

Philosophers and other people who write about big social phenomena often try to think about their topics in either mechanistic or organic terms. Partly this is unavoidable, because analogies are really useful. When used well, they can helpfully highlight particular aspects of whatever it is we’re looking at. When I talked about us as embedded within an epistemic environment, I was trying to draw attention to the ways in which our epistemic lives are complex, highly intersubjective and relational, but also influenced by things that aren’t just the direct causal actions of other agents. I could have just said that, but most people find that the analogy to a physical environment is an easy and intuitive way to visualize the idea.

It’s worth pointing out, though, that analogies like this aren’t perfect. If you rely too heavily on a single analogy, you might end up overemphasizing the aspects you originally wanted to talk about at the cost of paying insufficient attention to things your analogy doesn’t capture. For instance, it can be useful to speak of organizations or movements evolving, but it’s also important to keep in mind that biological organisms do not choose an evolutionary strategy, while the people who make up organizations do. And if organic analogies can obscure the role of choice and reasons for action, mechanistic analogies can easily overemphasize the efficacy of certain human choices. You can’t perfectly control society by pulling the right levers and twisting the right knobs. Institutions and incentives don’t always work as intended.

I’m choosing to use a mechanistic analogy - a discourse ‘machine’ - to talk about how our current epistemic environment has been created, and can be shaped. But I want to do so carefully, so that my attempt to draw attention to things I think are important doesn’t end up obscuring other things. Roughly speaking, I’m going to talk about things like tech policy, financial incentives, and psychological tendencies as inputs that help explain bad outputs, like the spread of conspiracy theories, political beliefs that are highly resistant to evidence, and polarization. I’ll provide a short example of what this kind of reasoning looks like in a moment, but I need to make one more quick point first.

Because I want to avoid the shortcomings of mechanistic analogies, I want to draw your attention to one major way in which human behavior isn’t like a machine: a properly working machine is generally expected to always function in a certain way, while the most you can expect from ‘pulling levers’ on human behavior is a probabilistic outcome. A quick analogy: suppose I walk down the street and offer everyone I meet $100 in exchange for shaking my hand. You could describe what I’m doing in a mechanistic way: I’ve devised a way to turn hundred dollar bills into handshakes, by offering a common and strong incentive for a relatively easy action. This would probably work on so many people that we might be tempted to think of this in a predictable, law-like, mechanistic fashion. But we shouldn’t lose sight of the reality that there are actually lots of possible reasons why someone might decline the handshake. For instance, they might assume it was a prank, or be so rich that $100 does not matter to them. Thus, even this simple case is probabilistic in ways that actual machines are not.
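The handshake example can be made concrete with a toy simulation. This is a minimal sketch with a made-up acceptance probability, not a claim about real behavior; it just illustrates that a strong, uniform incentive produces reliable aggregate output while any individual ‘pull of the lever’ remains probabilistic.

```python
import random

def offer_handshakes(n_people, accept_prob=0.95, seed=0):
    """Simulate offering $100 for a handshake to n_people.

    accept_prob is an illustrative, made-up number: some people will
    decline for their own reasons (suspecting a prank, not needing the
    money), so the 'machine' only converts dollars into handshakes
    probabilistically, not deterministically.
    """
    rng = random.Random(seed)
    # Each offer independently succeeds with probability accept_prob.
    return sum(rng.random() < accept_prob for _ in range(n_people))

handshakes = offer_handshakes(1000)
print(f"{handshakes} handshakes from 1000 offers")
```

At the population level the output looks law-like (close to 950 out of 1000), which is why mechanistic talk is tempting; at the individual level, any given offer can fail.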

Test-Running the Machine

Let’s use something I call the ‘Field of Dreams’ hypothesis to see the Discourse Machine in action. I’ll come back to this in future articles, to show my work on this particular argument and provide actual evidence for each of the parts. But for now I only want to show you the form of the argument, so you can see why I think the mechanistic analogy helps us understand the world and even think about ways to shape the epistemic environment. So I’ll just provide the argument as assertions.

  1. There is some set of people who are especially epistemically vulnerable and/or psychologically attracted to conspiratorial thinking.

  2. People tend to seek out and favor information that confirms their prior beliefs.

  3. The financial incentives of modern media and social media are mostly driven by attention.

  4. This creates an exploitable space for conspiracy-based political content.

  5. That exploitable space will be attractive to people who are purely grifting and also provide oxygen to ‘true believers.’

  6. The success of conspiracy-oriented ‘content creators’ will push that content to greater and greater numbers of people, providing more financial benefit, until something like ‘saturation’ is reached — the pool of reachable and persuadable people has been reached and persuaded.

Hence the Field of Dreams — “if you build it, they will come.” As a matter of probability, the existence of people who want, or can be easily persuaded to want, Alex Jones-style content will summon Alex Jones from the ether. If the real Alex Jones had a come-to-Jesus moment, a new Alex Jones would probably arise. We want to be cautious about making this overly deterministic, but if the form of the argument works, then we’ve potentially revealed something that’s both true and useful by using the input-output machine analogy. If we want to avoid living in an epistemic environment with major sources of epistemic pollution, we might need to focus our attention on altering one of the conditions in points 1-3 above, or introducing new relevant conditions to change the outputs. That’s easier said than done, of course, but I think it’s how we have to start thinking about things.
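The input-output structure of points 1-6 can be sketched as a toy saturation model. Everything here is illustrative and hypothetical — the parameter values stand in for points 1 and 3 of the argument, not for empirical estimates — but it shows the shape of the claim: the audience grows until saturation, and changing an input condition changes the output.

```python
def run_machine(susceptible, reach_rate, steps=50):
    """Toy model of the Field of Dreams dynamic.

    susceptible: fraction of the population open to the content
        (a stand-in for point 1).
    reach_rate: how strongly attention-driven incentives amplify
        the content (a stand-in for point 3).
    All values are illustrative, not empirical.
    """
    persuaded = 0.001  # a tiny initial audience
    for _ in range(steps):
        # Growth is proportional to the current audience and the
        # remaining unreached pool, so it slows as saturation nears.
        persuaded += reach_rate * persuaded * (susceptible - persuaded)
    return persuaded

baseline = run_machine(susceptible=0.20, reach_rate=1.5)
dampened = run_machine(susceptible=0.20, reach_rate=0.5)  # weaker incentives
print(f"baseline: {baseline:.3f}, dampened incentives: {dampened:.3f}")
```

In the baseline run the audience climbs to roughly the full susceptible fraction; weakening the incentive input leaves the machine short of saturation over the same period. That’s the logic of intervening on conditions 1-3 rather than on individual bad actors: change the inputs and the outputs change.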
