"AI-first" is the new Return To Office

The latest fad amongst tech CEOs is no longer "founder mode", or taking drugs that they would fire you for taking, or telling everybody to return to the office — it's demanding that all work be AI-first! This is a great idea if you think nobody at your company is great at what they do. It may otherwise be a suboptimal strategy. Let's dive in!

Let's use me as a case study. I'm pretty okay at writing. For example, one time I wrote a fairly technical analysis of Twitter's platform strategy that inspired Will.I.Am of the Black Eyed Peas to start Twitter beef with me two years later when he read the post and took offense to my referring to him as "nobody's favorite rapper".

This is something your GPTs cannot do, I assure you. An average LLM won't even know that Drake's favorite MIME type is application/pdf. Chalk one up for the greatness of human creativity.

The AI-First Mind Virus

Shopify's CEO Tobi Lütke (personal motto: "what if a Canadian was all the worst things about the United States?") started the "AI-first" trend, with one of those big memos that included, amongst other things, the declaration that "We will add AI usage questions to our performance and peer review questionnaire." This is unusual — did your boss ever have to send you a memo demanding that you use a smartphone? Was there a performance review requiring you to use Slack? I'm actually old enough that I was at different workplaces when they started using spreadsheets and email and the web, and I can tell you, they absolutely didn't have to drive adoption by making people fill out paperwork about how they were definitely using the cool new technology. Isn't that interesting?

Some of the other CEOs talking about the use of AI are a little more reasonable. Duolingo's CEO Luis von Ahn takes a more measured tone in his memo, stating plainly that he doesn't see AI replacing his employees. (Though that does immediately raise the "who brought that up?" question...) Yet even in this more even-handed take, we still get the insistence that "AI use will be part of what we evaluate in performance reviews". This is really weird!

The funny thing is, I'm not saying LLMs are without their uses. Let's use me as a case study again. I'm a lousy coder these days. I haven't had time to keep up my skills, and the area I focused on for most of my dev career (front end web development) changes particularly quickly. So I use some of the modern tools to help me get up to speed and get more done in a limited amount of time, because otherwise I'm woefully unproductive in the short windows I have to code in my free time.

To be explicit: I code on the weekends, not professionally. That means I'm not very good at it. I'm certainly nothing like the incredibly talented developers that I've had the good fortune to work with over the years. I'm just fluent enough to be able to debug the broken code that LLMs generate, or to catch the bugs that they spew out by default. And I'm sure I don't even catch all the bugs that pop up, but fortunately, I'm not making any production systems; I'm just building little toy apps and sites for myself.
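
To make that concrete, here's a hypothetical sketch of the kind of default bug I mean. The endpoint is made up, but the pitfall is a classic one: fetch() in the browser doesn't reject on HTTP error statuses like 404 or 500, and the happy-path code LLMs tend to generate skips that check entirely.

```typescript
// Hypothetical illustration, not from any real project: fetch() only
// rejects on network failure, not on HTTP errors, so the "happy path"
// version silently parses an error response as if it were data.

// What an LLM will often hand you:
async function getUserBuggy(id: string): Promise<unknown> {
  const res = await fetch(`/api/users/${id}`);
  return res.json(); // still runs when res.status is 404 or 500
}

// The version a human reviewer has to catch and write:
async function getUser(id: string): Promise<unknown> {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}
```

It's a small thing, but it's exactly the kind of small thing that quietly breaks when nobody fluent is checking.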

This is an important illustration: AI is really good for helping you if you're bad at something, or at least below average. But it's probably not the right tool if you're great at something. So why would these CEOs be saying, almost all using the exact same phrasing, that everyone at their companies should be using these tools? Do they think their employees are all bad at their jobs?

Groupthink and signaling

Big tech CEOs and VCs really love performing for each other. We know they hang out in group chats like high schoolers, preening and sending each other texts, each trying to make sure they're wearing the latest fashions, whether it's a gold chain or a MAGA hat or just repeating a phrase that they heard from another founder. A key way of showing that they're part of this cohort is to throw a tantrum and act out against their workers fairly regularly.

The return to office fad was a big part of this effort, motivated in large part as a reaction to the show of worker power in the racial justice activism of 2020. Similarly, being AI-first shows that a company is participating in the AI trend in the "right" way, by imposing it on workers, rather than trusting workers to judge which tools are useful for them to do their jobs.

A more normal policy on AI at a company might be something like this:

Our IT department has evaluated a set of LLM tools and determined that these ones meet our requirements for security, performance, data governance, reliability, manageability, and integration with our workflows. We'll be doing a controlled deployment of these tools and you can choose to use them if you think they'll help you with your work; please share your feedback on whether they are helpful, and what might make them more useful for you over time. Here are the ways these AI tools meet our corporate standards for compliance with intellectual property consent, sustainability and environmental goals, and accessibility.

This would not get you invited to the fascist VC group chat, tho!

AI-Second? Third?

How did we get here? What can we do? Maybe it starts by trying to just... be normal about technology.

There's an orthodoxy in tech tycoon circles that's increasingly referred to, ironically, as "tech optimism". I say "ironically" because there's nothing optimistic about it. The culture is one of deep insecurity, reacting defensively, or even lashing out aggressively, when faced with any critical conversation about new technology. That tendency is paired with a desperate and facile cheerleading of startups, ignoring the often equally interesting technology stories that come from academia, or from mature industries, or from noncommercial and open source communities that don't get tons of media coverage but quietly keep innovating without the fame and fortune. By contrast, those of us who actually are optimistic about technology (usually because we either create it, or are in communities with those who do) are just happily moving forward, not worrying when people point out the bugs that we all ought to be fixing together.

We don't actually have to follow along with the narratives that tech tycoons make up for each other. We choose the tools that we use, based on the utility they have for us. It's strange to have to say it, but... there are people picking up and adopting AI tools on their own, because they find them useful. This is true despite the fact that there is so goddamn much AI hype out there, with snake oil salesmen pushing their bullshit religion of magical thinking machines and overpromising that these AI tools can do tasks that they're simply not capable of performing. It's telling that the creators of so many of the AI tools don't even have enough confidence in their offerings to simply let users choose to adopt them, and are instead shoving them into users' faces in every possible corner of their apps and websites.

The strangest part is, the AI pushers don't have to lie about what AI can do! If, as they say, AI tools are going to get better quickly, then let them do so and trust that smart people will pick them up and use them. If you think your workers and colleagues are too stupid to recognize good tools that will help them do their jobs better, then... you are a bad leader and should step down. Because you've created a broken culture.

But I don't think the audience for these memos is really the people who work at these companies. I think the audience is the other CEOs and investors and VCs in the industry, just as it was for the other fads of the last few years. And I expect that AI will indeed be part of how we evaluate performance in the future, but mostly in the sense that the way CEOs communicate to their teams about technologies like AI will be part of how we all evaluate their performance as leaders.