Today's AI is unreasonable.

There's an extraordinary amount of hype around "AI" right now, perhaps even greater than in past cycles, where we've seen an AI bubble about once per decade. This time, the focus is on generative systems, particularly LLMs and other tools designed to generate plausible outputs that either make people feel like the response is correct, or are sufficient to fill in for domains where correctness doesn't matter.

But we can tell the traditional tech industry (the handful of giant tech companies, along with startups backed by the handful of most powerful venture capital firms) is in the midst of building another "Web3"-style froth bubble because they've again abandoned one of the core values of actual technology-based advancement: reason.

I don't say this lightly; I say it with purpose. Amongst engineers, coders, technical architects, and product designers, one of the most important traits a system can have is that one can reason about that system in a consistent and predictable way. Even "garbage in, garbage out" is an articulation of this principle: a system should be predictable enough in its operation that we can then rely on it when building other systems upon it.

This core concept of a system being reason-able is pervasive in the intellectual architecture of true technologies. Postel's Law ("Be liberal in what you accept, and conservative in what you send.") depends on reason-ableness. The famous IETF keywords list (RFC 2119), which offers a specific technical definition for terms like "MUST", "MUST NOT", "SHOULD", and "SHOULD NOT", assumes that a system will behave in a reasonable and predictable way, and the entire internet runs on specifications that sit on top of that assumption.
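To make that concrete, here's a minimal sketch (mine, not part of any specification) of a hypothetical `normalize_timestamp` helper that follows Postel's Law: liberal about the timestamp formats it accepts, but conservative, always emitting one predictable ISO 8601 form, in what it sends.

```python
from datetime import datetime, timezone

# Hypothetical illustration of Postel's Law: accept several common
# timestamp formats (liberal input), but always emit a single,
# predictable ISO 8601 string in UTC (conservative output).
ACCEPTED_FORMATS = [
    "%Y-%m-%dT%H:%M:%S%z",  # e.g. 2024-05-01T12:00:00+0000
    "%Y-%m-%d %H:%M:%S",    # e.g. 2024-05-01 12:00:00 (assumed UTC)
    "%d %b %Y %H:%M",       # e.g. 01 May 2024 12:00 (assumed UTC)
]

def normalize_timestamp(raw: str) -> str:
    """Parse a timestamp in any accepted format; always send ISO 8601 in UTC."""
    for fmt in ACCEPTED_FORMATS:
        try:
            parsed = datetime.strptime(raw.strip(), fmt)
        except ValueError:
            continue
        if parsed.tzinfo is None:
            parsed = parsed.replace(tzinfo=timezone.utc)
        return parsed.astimezone(timezone.utc).isoformat()
    # A predictable failure mode: reject, rather than silently guess.
    raise ValueError(f"unrecognized timestamp: {raw!r}")

# Same input, same output, every time; callers can reason about this.
assert normalize_timestamp("2024-05-01 12:00:00") == "2024-05-01T12:00:00+00:00"
```

The point isn't the date handling; it's that other systems can be built on top of this one precisely because its behavior, including its failures, is specified and repeatable.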

The very act of debugging assumes that a system is meant to work in a particular way, with repeatable outputs, and that deviations from those expectations are the manifestation of a bug, which is why being able to reproduce a bug is the very first step in debugging it.
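As a toy illustration (a sketch of my own, not from any particular codebase), a regression test only works because the system under test is deterministic: the same failing input reproduces the same failure on every run until the bug is fixed.

```python
# Hypothetical buggy function: floor division silently drops the
# fractional part of the average.
def average(values: list[float]) -> float:
    return sum(values) // len(values)  # bug: should be true division (/)

# Because average() is deterministic, this failing case reproduces the
# bug identically on every run, which is exactly what makes it debuggable.
def test_average_of_one_and_two():
    assert average([1, 2]) == 1.5  # fails the same way, every time

# Swap // for / and the same test passes, every time. Reproduce, fix,
# verify: each step depends on the system behaving consistently.
```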

Into that world, let's introduce bullshit. Today's highly hyped generative AI systems (most famously those from OpenAI) produce bullshit by design. To be clear, bullshit can sometimes be useful, and even accidentally correct, but that doesn't keep it from being bullshit. Worse, these systems are not even meant to generate consistent bullshit: you can get different bullshit answers from the same prompt. You can put garbage in and get... bullshit out, but the same quality of bullshit that you get from non-garbage inputs! And enthusiasts are currently mistaking the fact that the bullshit is consistently wrapped in the same envelope for evidence that the bullshit inside is consistent, laundering unreason-ableness into the appearance of reasonableness.
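To see where that inconsistency comes from, here's a toy sketch (my own, with entirely made-up numbers, not any real model) of temperature-based sampling over a pretend next-token distribution: the same prompt yields different continuations across runs because each token is drawn at random from a weighted distribution.

```python
import math
import random

# A pretend "model": fixed scores for possible next tokens after one prompt.
# (Hypothetical numbers, purely to illustrate sampled generation.)
NEXT_TOKEN_SCORES = {"Paris": 2.0, "Lyon": 1.2, "a city": 0.8, "bullshit": 0.5}

def sample_next_token(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Draw one token from a softmax over the scores; higher temperature, more randomness."""
    weights = [math.exp(score / temperature) for score in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

prompt = "The capital of France is"
# The same prompt, three times: the continuation can differ on each run,
# even though nothing about the input has changed.
for _ in range(3):
    print(prompt, sample_next_token(NEXT_TOKEN_SCORES))
```

At a temperature of zero a sampler would always pick the highest-scoring token, but generative products are typically run with nonzero temperature, which is one reason identical prompts can come back with different answers.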

Now we have billions of dollars being invested into technologies about which it is impossible to make falsifiable assertions. A system that you cannot debug through a logical, Socratic process is a vulnerability that exploitative tech tycoons will use to do what they always do: undermine the vulnerable.

So, what can we do? A straightforward thing for technologists, or those who work with them, to do is to make a simple demand: we need systems we can reason about. A system that can be given the same input multiple times and return responses that change in minor or major ways, for unknown and unknowable reasons, and yet that we're expected to rebuild entire industries or ecosystems around, is merely a tool for manipulation.

Narcissists and abusers use inconsistent and capricious responses as a way of controlling and manipulating their victims. They are unreasonable because it is an effective way to keep the vulnerable off balance, either constantly having to respond or living in a perpetual state of fear and anticipation about how they will be expected to react. Technologies are created by people, and systems reflect the values of their creators.

We should react to unreasonableness in purported technologies the same way we react to intentional unreasonableness from people in positions of power: set firm boundaries, be ready to walk away, don't get drawn into debate, and demand consistent and reasonable behavior.