Classical Liberalism and AI

Posted on 22 January 2026 by Jedidjah de Vries 4 min

Classical liberalism loves appealing to reason, to idealized rationalism. A quirky consequence of this is that classical liberals generally think it’s fine to be motivated by idiosyncratic religious beliefs, as long as in the public sphere (classical liberalism also loves “the public sphere”) you present publicly accessible arguments. The idea is that it doesn’t really matter what your private motivations are for supporting a particular position or policy, because there is a single Truth that we can all arrive at through commonly held Logic, and the “marketplace of ideas” will sort it out.

I think this is a quirky consequence because, well, let’s say my holy book says “X is good”. I can’t go out and say “we should do X because my holy book says to”, so instead I assemble research and “well reasoned” arguments to support X. I don’t do it because I believe or even care about the validity of those arguments, but because that’s the proper way to convince others. Now let’s say a prophet suddenly shows up and says “God has spoken to me and X is no longer good. Now Y is good.” So what do I do? I assemble new research and arguments and start advocating for Y. I never cared about the validity of the original arguments, remember, so this doesn’t bother me at all. But I think it would bother everyone else.

Classical liberalism thinks this is totally fine. There’s no problem here because (1) there is a singular “reason” that can determine truth and (2) in the public sphere’s marketplace of ideas we all apply that “reason” and so can collectively sift good arguments from bad. I think both of those assumptions are questionable, and the problem of the flip-flopping prophet points to this. If someone showed up arguing for X one day and Y the next, you would call them out for being intellectually dishonest at best, if not just a troll.

Critical theory (feminist, post-colonial, etc.) responds to this by highlighting the importance of our relationship to, and the context of, beliefs and knowledge. This sometimes gets unfairly flattened into and mocked as “only p-people can talk about p issues”, or dismissed offhand along with non-Western logics and epistemes. But really it’s just the idea that your motivation, lived reality, and understanding of the world matter to how we should evaluate what you say. On the one hand it allows more voices to present themselves—because that singular reason/truth thing was always bullshit. And, equally important, on the other hand it forces the bros to stop hiding behind cosplaying Greek philosophers and be honest about why they are advocating for the shittiest of all possible worlds.

AI is the ultimate religious zealot. It doesn’t have any context. It doesn’t have any motivation or beliefs of its own. It just spews something plausible in response to the last prompt it received. Part of what I find insulting and disrespectful about being asked to take AI-generated content seriously is not that its boosters “cheated” by not “doing the work”. That anti-elitist argument is a red herring. It’s the intellectual dishonesty of being asked to take seriously arguments and content that the person presenting them doesn’t necessarily believe or remotely care about themselves. And it’s probably not a coincidence that the type of people boosting AI are the bro Libertarians invested in upholding themselves as a sort of neo-Enlightenment elite against “woke” alternatives.

And lastly, slightly but not entirely tangentially, this lack of sincerity is also what the fascists rely on when “arguing” in the Liberal public sphere. They weaponize it. Fascism holds that might, not reason, makes right. But fascists still go around offering “reasons”—without believing or caring about them, and quickly discarding them when they become inconvenient—for Liberals to waste their time contending with.