OpenClaw Ruined AI and It Makes Me Happy

The biggest AI story of 2026 isn’t the growing need for electrical power or the ridiculous way the market sold out for RAM based on a letter of intent to acquire. No, the biggest AI story of the year so far is how a scrappy little project completely upset the AI apple cart. OpenClaw (née ClaudeBot, née OpenMolt) set the world on fire. And it destroyed the way people were trying to direct AI. I’m sitting over here giggling about it.

Round The Clock

The basics of OpenClaw are simple enough. You have a system of agents that do things. It can read your texts or email and triage the flow of information. It can send you a text summary of the news or the weather every morning. But it can also be configured to monitor things as they arrive to deal with them on the fly. That’s where the real narrative shift has happened.
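The monitor-triage-dispatch pattern described above can be sketched in a few lines. To be clear, this is my own illustration of the general idea, not OpenClaw's actual API; the function names (`fetch_new_events`, `triage`, `dispatch_agent`) are hypothetical stand-ins.

```python
import time

def run_agent(fetch_new_events, triage, dispatch_agent,
              poll_seconds=30, max_cycles=None):
    """Hypothetical always-on agent loop: poll for incoming items and
    hand anything actionable to a sub-agent. max_cycles exists only so
    the loop can be exercised in a test; a real agent runs forever."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for event in fetch_new_events():       # texts, emails, webhooks...
            if triage(event) == "act":         # cheap classification pass
                dispatch_agent(event)          # expensive chain of tasks
        cycles += 1
        time.sleep(poll_seconds)               # idle until the next poll
```

The important property is that the loop never ends: triage runs on everything that arrives, and the expensive work fires whenever the triage step decides something needs handling.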

When you open a browser window to talk to an LLM you are creating a session with a finite time limit. You are saying that you are going to work on a project for a specific period of time and that’s that. Once you complete your task and go back to whatever you were doing, that’s the end of the conversation. More importantly, that’s the end of the token consumption. Because these models use tokens as a method for creating words or code, you can think of tokens as the resource that AI lives and dies by. Not unlike vespene gas from StarCraft.

When you have an agent that’s running constantly, it acts like a real person. It doesn’t consume resources in orderly sessions. It is bursty. It might be idle for hours and suddenly consume thousands of tokens when something comes in that requires a complex chain of tasks. It’s not unlike when someone gets a sudden burst of creativity and spends the next week racing to complete their task before the spark is gone. Now imagine that whole process is burning tokens as the agents dispatch their work.
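A toy simulation makes the planning problem concrete. All the numbers here are invented for illustration; the point is only the shape of the two curves:

```python
import random

def session_usage(hours=24, tokens_per_active_hour=5_000, active_hours=2):
    """Old model: open a session, work for a couple of hours, close it."""
    return [tokens_per_active_hour if h < active_hours else 0
            for h in range(hours)]

def agent_usage(hours=24, trigger_chance=0.1, burst_tokens=40_000, seed=42):
    """Always-on agent: idle most hours, big burst when something triggers it."""
    rng = random.Random(seed)
    return [burst_tokens if rng.random() < trigger_chance else 0
            for _ in range(hours)]

session, agent = session_usage(), agent_usage()
# Capacity planning has to cover the peak hour, not the average one,
# and the agent's peak dwarfs the session worker's.
print("session peak:", max(session), "agent peak:", max(agent))
```

The session worker's demand is predictable and flat; the agent's is spiky and random. Provisioning for the average hour works fine for the first and fails badly for the second.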

The idea might not give you pause but it scares the hell out of the AI companies. Because unpredictable consumption destroys their carefully planned projections. Plans for data centers built to handle next year’s demand get wrecked when that demand shows up now, and shows up unevenly. Finance markets hate unpredictability. And those same finance markets are the ones backing this AI boom.

The Fringe Fails Us All

The other side effect of OpenClaw is the way that companies are racing to do more. Before, it was just getting OpenAI to write your term papers or edit an email for tone. Now people are democratizing coding and writing their own apps to handle tasks they thought could never be automated or turned into software. Users have never felt more free.

Providers, on the other hand, are scrambling. Those same carefully curated token consumption projections go right out the window when users start burning more and more tokens because they feel empowered to build more. As more entry-level users find out that Claude Code and Codex can help them build a workout app or a recipe tracker, they increase the token burn rate. That might be manageable by slowly adding capacity. But they aren’t the real problem here. It’s the power users.

When power users figured out how to unlock parallel development pipelines and dispatch agents to write code blocks, they significantly increased their token consumption. It didn’t help that the industry tried to shift the discussion around tokens by rewarding the heaviest users, putting up leaderboards to highlight the people burning through the most tokens. This created a culture of “tokenmaxxing”, which has to be the dumbest name I can think of, which naturally means it stuck. Reward people on a specific metric and they will game that metric to look good. Tokens went from being burned at a steady rate to being consumed like fuel for some magical fire that cannot be quenched.

Providers panicked. Suddenly the people paying $20/month for Claude Code were burning through thousands of dollars’ worth of tokens building every app they could dream up. People were thrilled by the creativity but the people running the GPU farms on the back end were worried. If the power users are burning through tokens this fast, what happens when the rest of the world decides to do the same? That’s when you saw the pushback. People were hitting walls of token utilization in hours and being told to come back tomorrow. Users were also hitting limits on monthly allocations. The providers were working feverishly to upgrade the hardware as quickly as possible.

Eventually, we hit the conclusion that everyone wanted to avoid but knew was inevitable: usage-based pricing. Not surprising, considering anyone who has ever tried to offer anything unlimited has quickly had to implement limits because people really can’t help themselves. Now we’re facing rumors of coding being moved to the hundreds-of-dollars-per-month tiers because it’s just too taxing for the infrastructure we have currently built out. Who could have possibly foreseen that telling everyone AI is the way of the future and everyone needs to embrace it could cause a shortage of resources?
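The walls users were hitting behave like a rolling token budget: spend freely until the window's allocation is gone, then wait for old spending to age out. This is a guess at the general mechanism, not any provider's real implementation:

```python
import time
from collections import deque

class TokenBudget:
    """Rolling-window token cap (illustrative only). Tracks spending as
    (timestamp, tokens) pairs and refuses requests once the window's
    budget is exhausted -- the "come back tomorrow" experience."""

    def __init__(self, budget, window_seconds):
        self.budget = budget
        self.window = window_seconds
        self.spent = deque()            # (timestamp, tokens) pairs

    def _prune(self, now):
        # Drop spending that has aged out of the window.
        while self.spent and now - self.spent[0][0] >= self.window:
            self.spent.popleft()

    def try_spend(self, tokens, now=None):
        now = time.monotonic() if now is None else now
        self._prune(now)
        used = sum(t for _, t in self.spent)
        if used + tokens > self.budget:
            return False                # request refused until tokens age out
        self.spent.append((now, tokens))
        return True
```

The same structure works at any scale: a five-hour window gives you the "back in a few hours" wall, a thirty-day window gives you the monthly allocation cap.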


Tom’s Take

It turns out that not being ready for the beast you have unleashed is exactly what the AI companies deserve. They got caught flat-footed when the always-on agentic system that was a natural outgrowth of their ambitions forced them to look at how their infrastructure was actually being used, and the numbers didn’t add up. They wanted people invested in the idea of using AI to build everything and then, hopefully, like Uber and AirBnB, they could raise the rates and retire to some island. Instead they realized that people are voracious when it comes to exploiting technology for their own gains, and now they are in a race to catch up because the little lobster made them look silly. Don’t mind me. I’m just going to be over here laughing while the tech geniuses of the world get exposed by shellfish.