AI Is Just A Majordomo

The IT world is on fire right now with solutions to every major problem we've ever had. Wouldn't you know it, the solution appears to be something that people are very intent on selling to you? Where have I heard that before? You wouldn't know it looking at the landscape of IT right now, but AI has iterated more times than you can count over the last couple of years. While people are still carrying on about LLMs and writing homework essays, the market has moved on to agentic solutions that act like employees doing things all over the place.

The result is that people are more excited about the potential for AI than ever. Well, that is if you're someone that has problems that need to be solved. If you're someone doing something creative, like making art or music or poetry, you're worried about what AI is going to do to your profession. That divide is what I've been thinking about for a while. I don't think it should come as a shock to anyone, but I've figured out why AI is hot for every executive out there.

AI appeals to people that have someone doing work for them.

The Creative Process

I like writing. I enjoy coming up with fun synonyms and turns of phrase and understanding a topic while I create something around it. Sure, the process of typing the words out gets tedious. Finding the time to do it even more so, especially this year. I wouldn’t trade writing for anything because it helps me express thoughts in a way that I couldn’t before.

I know that I love writing because whenever I try to teach an AI agent to write like me I find the process painful. The instruction list is three pages long. You feed the algorithm a bunch of your posts and tell it to come up with an outline of how you write. What comes out the other side sounds approximately like you but misses a lot of the points. I think my favorite one was when I had an AI analyze one of my posts and it said I did a good job but needed to leave off my Tom’s Take at the end. When I went back to create an outline for training an AI to write like me the outline included leaving a summary at the end. Who knew?
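To give you an idea of what that exercise looks like in practice, here's a rough sketch of the "analyze my posts" step. The file names and the hand-off to a model are hypothetical placeholders; the point is just that you pile up samples and ask for a style outline.

```python
# Build a "describe how this author writes" prompt from a handful of old posts.
from pathlib import Path

def build_style_prompt(post_paths, sample_count=3):
    samples = [Path(p).read_text() for p in post_paths[:sample_count]]
    joined = "\n\n---\n\n".join(samples)
    return (
        "Here are several blog posts by one author.\n\n"
        f"{joined}\n\n"
        "Outline this author's style: structure, tone, recurring phrases, "
        "and anything they always include at the end."
    )

prompt = build_style_prompt(["post1.md", "post2.md", "post3.md"])  # placeholder files
# send the prompt to whichever model you like; the result still misses the point
```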

People love the creative process. Whether it's painting or woodworking or making music, creative people want to feel like they've accomplished something. They want to see the process unfold. The magic happens on the journey from beginning to end. Feel free to insert your favorite cliche about the journey here. A thing worth doing is worth taking your time to do.

Domo Arigato, Majordomo

You know who doesn’t love that process? Results-oriented people. You know the ones. The people that care more about the report being on time than the content. The people that need an executive summary at the beginning because they can’t be bothered to read the whole thing. The kind of people that flew the Concorde back in the day because they needed to be in New York with a minimum of delay. You’re probably already picturing these people in your head with suits and wide tie knots and a need to ensure the board sees things their way.

Executives, managers, and the like love AI. Because it replicates their workflow perfectly. They don’t create. They have others create. They don’t want to type or write or draw. They want to see the results and leverage them for other things. The report is there if you want to read it but they just need the summary so they can figure out what to do with it. Does it matter whether they’re asking a knowledge worker or an AI agent to create something?

The other characteristic of those people, especially as you go up the organizational chart, is their inability to discern bad information. They work from the assumption that everything presented in the report is accurate. The people that were doing it for them before were almost always accurate. Why wouldn’t the fancy new software be just as accurate? Of course, if the knowledge worker gave bad data to the executive they could be fired or disciplined for it. If the AI lies to the CEO what are they going to do? Put it in time out? The LLM or agent doesn’t even know what time out is.

People that have other people do things for them love AI. They want the rest of us to embrace it too because then we all have things doing work for us and that means they can realign their companies for maximum profit and productivity. The reliance on these systems creates opportunities for problems. I used the term majordomo in the title for a good reason. The kinds of people that have a majordomo (or butler) are exactly the kinds of people that salivate about AI. It’s always available, never wants to be complimented or paid, and probably gives the right information most of the time. Even if it doesn’t, who is going to know? Just ask another AI if it’s true.


Tom’s Take

The dependence on these systems means that we’re forgetting how to be creative. We don’t know how to build because something is building for us. Who is going to come up with the next novel file open command in Python or creative metaphor if we just rely on LLMs to do it for us now? We need to break away from the idea that someone needs to do things for us and embrace the idea of doing them. We learn the process better. We have better knowledge. And the more of them we do the more we realize what actually needs to be done. The background noise of AI agents doing meaningless tasks doesn’t make them go away. They just get taken care of by the artificial majordomos.

Don’t Let AI Make You Circuit City

I have a little confession. Sometimes I like to go into Best Buy and just listen. I pretend to be shopping for modem bearings or a left-handed torque wrench. What I'm really doing is hearing how people sell computers. I remember when 8x CD burners were all the rage. I recall picking one particular machine because it had an integrated Sound Blaster card. Today, I just marvel at how the associates rattle off a long string of impressive-sounding nonsense that consumers will either buy hook, line, and sinker or refute based on some Youtube reviewer recommendation. Every once in a while, though, I hear someone that actually does understand the lingo and it is wonderful. They listen and understand the challenges and don't sell a $3,000 gaming computer to a grandmother who just wants to play Candy Crush and look up grandkid photos on Facebook.

The Experience Matters

What does that story have to do with the title of this post? Well, dear young readers, you may not remember the time when Best Buy Blue was locked in mortal competition with Circuit City Red. In a time before Amazon was ascendant you had to pick between the two giants of big box tech retail. You may remember that Circuit City went out of business in 2009 thanks to the economic conditions of the time, but the real downfall of the company happened years earlier.

One of the things that set Circuit City apart from everyone else was their sales staff. They earned commissions based on helping customers. That meant they had to know their stuff to keep making money. And the very best of them could make a LOT of money. It also contributed a lot to the performance of the stores. The very best of the best were making a dent in the profit margins of the stores. What should management do about that?

If you guessed something sane and positive, you'd be wrong. In 2003, they eliminated their commissioned sales staff and fired nearly 4,000 of the best. You can just imagine what happened next. Sales plummeted. The associates left behind weren't the top performers. They struggled to hit the revenue targets. Management panicked. They tried to rehire the overachievers they had just shown the door at entry-level hourly rates. There was raucous laughter and lots of middle fingers. And five years later Circuit City collapsed into fodder for Youtube historians to analyze.

What doomed Circuit City was not an economic bubble popping. It wasn't Amazon or the rise of independent influencers. It wasn't cheap parts or dupes of the best camcorders and tape recorders. It was the hubris to think that the people that had spent their careers learning the ins and outs of technology were replaceable by people with less skill at less cost to the business. Inexperience may sound impressive to those that don't understand, but the knowledgeable customer knows the difference. The Circuit City execs learned that lesson the hard way. But our old friend George Santayana has a new generation to teach.

Repeating The Past

How could you possibly decide to fire your best performers and replace them with something cheap that spouts out approximate answers that look correct but are ultimately useless when applied in reality? What kind of CEO would think about that just to shave some numbers off the bottom line in the name of Shareholder Value?

Oh. Yeah.

AI.

LLMs are making advances by leaps and bounds compared to just eighteen months ago. But they are not a replacement for people that understand the actual technology. LLMs don’t learn the way that people learn. They are trained and refined to find better solutions to problems but they don’t “learn”. They just get slightly better over time about not putting adverbs in every sentence they write. People make math mistakes that blow up switches and routers. LLMs eventually learn that the word “double” doesn’t always mean double.

To an executive, LLMs sound impressive. They're filled with impressive words that mean a lot of nothing. To knowledge workers, LLMs create approximations of words that have no meaning. Nowhere is this more apparent than in the fact that we've created entire acronyms like Retrieval Augmented Generation (RAG) to reduce the likelihood that an LLM will just make something up because it sounds good. If you need someone to do more than just make something up because it's what an executive wants to hear, I'm way cheaper than any GPU cluster that NVIDIA is shipping today, power consumption included.
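If you've never looked under the hood, the whole RAG idea fits in a few lines. Here's a minimal sketch of the concept: fetch the most relevant text you already trust, then force the model to answer from that instead of from vibes. The documents, the query, and the hand-off to a model are hypothetical placeholders, not any particular vendor's API.

```python
# A toy RAG loop: TF-IDF similarity stands in for a real embedding store.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The core switch pair runs VRRP with a three-second failover timer.",
    "Branch routers peer with the hub over IPsec using IKEv2.",
    "Guest wireless lives on VLAN 200 with no east-west access.",
]

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    matrix = TfidfVectorizer().fit_transform(docs + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(query):
    context = "\n".join(retrieve(query, documents))
    # Telling the model to answer ONLY from the retrieved context is the whole
    # point: it leaves less room for an answer that merely sounds good.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do the branch routers connect to the hub?"))
```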

Shaving Dollars and Sense

Circuit City thought that customers wouldn’t know the difference between Bob the Sales Juggernaut’s expansive knowledge of DVD players and Billy the New Guy’s attempts to sound as impressive as Bob. The kinds of people that will pay top dollar for a plasma TV know the difference. The kinds of people that rely on Bob to tell them what to buy because they’re too busy to shop don’t. Circuit City thought they could save on the bottom line by removing experience and they removed the entire bottom line, along with everything above it too.

The rumblings are already there in the market. Entry-level tasks will be handled by AI so we can focus on higher-order thinking. The real value is going to be in the way that experts solve problems. At least until we figure out the next thing after LLMs which can try to approximate the thinking of those experts. Then we start the cycle all over again. And those experts? Do you know where they got the knowledge? By doing the meaningless entry-level tasks until they mastered them. They learned as they worked. They tweaked their own algorithms without the need for fifty new GPUs.


Tom’s Take

Thinking you can replace experience with cheap substitutes leads to disaster every single time. “Good enough” isn’t good enough when people know enough about the subject to understand they’re hearing garbage. In fact, I’d argue that AI might be good enough to do the one thing that Circuit City didn’t figure out for years. If your executive team is so great at making poor decisions that they could be replaced by a soulless software program, maybe they should be replaced instead. You might still go out of business eventually but the reduced salaries at the top might keep the lights on a little longer. Who knows? Maybe AI could learn a thing or two that way.

AI Should Be Concise

One of the things that I’ve noticed about the rise of AI is that everything feels so wordy now. I’m sure it’s a byproduct of the popularity of ChatGPT and other LLMs that are designed for language. You’ve likely seen it too on websites that have paragraphs of text that feel unnecessary. Maybe you’re looking for an answer to a specific question. You could be trying to find a recipe or even a code block for a problem. What you find is a wall of text that feels pieced together by someone that doesn’t know how to write.

The Soul of Wit

I feel like the biggest issue with those overly wordy answers comes down to the way that people feel about unnecessary exposition. AI is built to write things on a topic and fill out a word count. Much like a student trying to pad out the page length for a required report, AI doesn't know when to shut up. It specifically adds words that aren't really required. I realize that there are modes of AI content creation that value being concise, but those aren't the default.
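For what it's worth, flipping that default doesn't take much. Here's a rough sketch of what I mean, with a hypothetical client object standing in for whatever LLM interface you actually use:

```python
# A system instruction plus a hard response cap to force terse answers.
SYSTEM_PROMPT = (
    "Answer in the fewest words that fully answer the question. "
    "No preamble, no recap, no filler."
)

def ask_tersely(client, question):
    # client.generate() is a placeholder; substitute your SDK's actual call.
    return client.generate(
        system=SYSTEM_PROMPT,
        prompt=question,
        max_tokens=60,  # a hard ceiling keeps the model from padding
    )
```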

I use AI quite a bit to summarize long articles, many of which I'm sure were created with AI assistance in the first place. AI is quite adept at removing the unneeded pieces, likely because it knows where they were inserted in the first place. It took me a while to understand why this bothered me so much. What is it about having a computer spend way too much time explaining answers to you that feels wrong?

The Enterprise-D Bridge

Then it hit me. It felt wrong because we already have a perfect example of what an intelligence should feel like when it answers you. It comes courtesy of Gene Roddenberry and sounds just like his wife Majel Barrett-Roddenberry. You've probably guessed that it's the Starfleet computer system found on board every Federation starship. If you've watched any series since The Next Generation you've heard the voice of the ship's computer executing commands and providing information to the crew members, guests, and even holographic projections.

Why is the Star Trek computer a better example of AI behavior to me? In part because it provides information in the most concise manner possible. When the captain asks a question the answer is produced. No paragraphs necessary. No use of delve or convolutional needed. It produces the requested info promptly. Could you imagine a ship’s computer that drones on for three paragraphs before telling the first officer that the energy pulse is deadly and the shields need to be raised?

Quality Over Quantity

I'm sure you already know someone that thinks they know a lot about a subject and is more than happy to tell you about what they know. Do they tend to answer questions or explain concepts tersely? Or do they add in filler words and try to talk around tricky pieces in order to seem like they have more knowledge than they actually do? Can you tell the difference? I'm willing to bet that you can.

That’s why GPT-style LLM content creation feels so soulless. We’re conditioned to appreciate precision. The longer someone goes on about something the more likely we are to either tune out or suspect it’s not an accurate answer. That’s actually a way that interrogators are trained to uncover falsehoods and lies. People stretching the truth are more likely to use more words in their statements.

There's another reason behind the padding, too. Think about how many ads are usually running on sites that have this kind of AI-generated content. Is it just a few? Or as many as possible, inserted between every possible paragraph? It's not unlike video sites like Youtube inserting ads at certain points in the video. If an additional ad can be inserted in any video that's at least twenty minutes long, how long do you think the average video is going to be for channels that rely on ad revenue? The actual substance of the content isn't as important as getting those extra ad clicks.


Tom’s Take

It's unlikely that my ramblings about ChatGPT are going to change things any time soon. I'd rather have the precision of Star Trek over the hollow content that creates yarns about family life before getting to the actual recipe. Maybe I'm in the minority. But I feel like my audience would prefer getting the results they want and doing away with the unnecessary pieces. Could this blog post have been a lot shorter and just said "Stop being so wordy"? Sure. But it's long because it was written by a human.

Butchering AI

I once heard a quote that said, “The hardest part of being a butcher is knowing where to cut.” If you’ve ever eaten a cut of meat you know that the difference between a tender steak and a piece of meat that needs hours of tenderizing is just inches apart. Butchers train for years to be able to make the right cuts in the right pieces of meat with speed and precision. There’s even an excellent Medium article about the dying art of butchering.

One thing that struck me in that article is how the art of butchering relates to AI. Yes, I know it’s a bit corny and not an easy segue into a technical topic but that transition is about as subtle as the way AI has come crashing through the door to take over every facet of our lives. It used to be that AI was some sci-fi term we used to describe intelligence emerging in computer systems. Now, AI is optimizing my PC searches and helping with image editing and creation. It’s easy, right?

Except some of those things that AI promises to excel at doing are things that professionals have spent years honing their skills at performing. Take this article announcing the release of the Microsoft Copilot+ PC. One of the things they are touting as a feature is using neural processing units (NPUs) to allow applications to automatically remove the background from an image in a video clip editor. Sounds cool, right? Have you ever tried to use an image editor to remove or blur the background of an image? I did a few weeks ago and it was a maddening experience. I looked for a number of how-to guides and none of them had good info. In fact, most of the searches just led me to apps that claimed to use some form of AI to remove the background for me. Which isn't what I wanted.
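For context, here's roughly what doing it yourself looks like with OpenCV's classic GrabCut algorithm. Consider it a sketch rather than a recipe: the file names and the bounding rectangle are placeholders, and picking that rectangle well is exactly the part that takes human judgment.

```python
# Manual-ish background removal: GrabCut needs a human hint about where the
# subject is, and the edges usually still need cleanup afterward.
import cv2
import numpy as np

image = cv2.imread("portrait.jpg")  # placeholder file name
mask = np.zeros(image.shape[:2], np.uint8)

# A rectangle roughly around the subject; GrabCut refines the cut from there.
rect = (50, 50, image.shape[1] - 100, image.shape[0] - 100)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep pixels marked definite or probable foreground; zero out the rest.
keep = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
cv2.imwrite("portrait_no_background.png", image * keep[:, :, np.newaxis])
```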

Practice Makes Perfect

Bruce Lee said, “I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.” His point was that practice of a single thing is what makes a professional stand apart. I may know a lot about history, for example, but I’ll never be as knowledgeable about Byzantine history as someone who has spent their whole career studying it. Humans develop skills via repetition and learning. It’s how our brains are wired. We pick out patterns and we reinforce them.

AI attempts to simulate this pattern recognition and operationalize it. However, the learning process that we have simulated isn’t perfect. AI can “forget” how to do things. Sometimes this is built into the system with something like unstructured learning. Other times it’s a failure of the system inputs, such as a corrupted database or connectivity issue. Either way the algorithm defaults back to a state of being a clean slate with no idea how to proceed. Even on their worst days a butcher or a plumber never forgets how to do their job, right?

The other maddening thing is that the AI peddlers try to convince everyone that teaching their software means we never have to learn ever again. After all, the algorithm has learned everything and can do it better than a human, right? That's true, as long as the conditions don't change appreciably. It reminds me of signature-based virus detection from years ago. As long as the infection matched the definition you could detect it. As soon as the code changed and became polymorphic it was undetectable. That led to the rise of heuristic-based detections and eventually to the state of endpoint detection and response (EDR) we have today.
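If you want to see why that model breaks down, a toy example is enough. This sketch assumes a made-up "malware" string rather than any real sample: the known bytes match their stored hash, while a trivially mutated variant of the same thing sails right past.

```python
# Signature-based detection in miniature: flag only byte-for-byte known samples.
import hashlib

signature_db = {hashlib.sha256(b"evil_payload_v1").hexdigest()}

def matches_signature(sample: bytes) -> bool:
    return hashlib.sha256(sample).hexdigest() in signature_db

print(matches_signature(b"evil_payload_v1"))   # True: exact match with the definition
print(matches_signature(b"evil_payload_v1 "))  # False: one byte changed, same behavior
```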

That’s a long way to say that the value in training someone to do a job isn’t in them gaining just the knowledge. It’s about training them to apply that knowledge in new situations and extrapolate from incomplete data. In the above article about the art of butchering, the author mentions that he was trained on a variety of animals and knows where the best cuts are for each. That took time and effort and practice. Today’s industrialized butcher operations train each person to make a specific cut. So the person cutting a ribeye steak doesn’t know how to make the cuts for ribs or cube steaks. They would need to be trained on that input in order to do the task. Not unlike modern AI.


Tom’s Take

You don’t pay a butcher for a steak. You pay them for knowing how to cut the best one. AI isn’t going to remove the need for professionals. It’s going to make some menial tasks easier to do but when faced with new challenges or the need to apply skills in an oblique way we’re still going to need to call on humans trained to think outside the box to do it without hours and days of running simulations. The human brain is still unparalleled in its ability to adapt to new stimuli and apply old lessons appropriately. Maybe you can train an AI to identify the best parts of the cow but I’ll take the butcher’s word for it.

Copilot Not Autopilot

I've noticed a trend recently with a lot of AI-related features being added to software. They're being branded as "copilot" solutions. Yes, Microsoft Copilot was the first to use the name and the rest are just trying to jump in on the brand recognition, much like using "GPT" last year. The word "copilot" is so generic that it's unlikely to be trademarked without adding more, like the company name or some other unique term. That made me wonder if the goal of using that term was simply to cash in on brand recognition or if there was more to it.

No Hands

Did you know that an airplane can land entirely unassisted? It's true. It's a feature commonly called Auto Land and it does exactly what it says. It uses the airport's Instrument Landing System (ILS) to land automatically. Pilots rarely use it because of a variety of factors, including the need for minute last-minute adjustments during a very stressful part of the flight as well as the equipment requirements, such as a fairly modern ILS. That doesn't even mention that using Auto Land snarls airport traffic because of the need to hold other planes outside ILS range to ensure only one plane can use it.

The whole thing reminds me of how autopilot is used on most flights. Pilots usually take the controls during takeoff and landing, which are the two most critical phases of flight. For the rest, autopilot is used a lot of the time. Those are the boring stretches where you're just flying a straight line between waypoints on your flight plan. That's something that automated controls excel at doing. Pilots can monitor but don't need to have their full attention on the readings every second of the flight.

Pilots will tell you that taking the controls for the approach and landing is just smart for many reasons, chief among them that it’s something they’re trained to do. More importantly, it places the overall control of the landing in the hands of someone that can think creatively and isn’t just relying on a script and some instrument readings to land. Yes, that is what ILS was designed to do but someone should always be there to ensure that what’s been sent is what should be followed.

Pilot to Copilot

As you can guess, the parallels between this process and using AI in your organization are a good match. AI may have great suggestions and may even come up with more novel ways of making you more productive, but it's not the only solution to your problems. I think the copilot metaphor is perfectly illustrated by the rush to have GPT chatbots write reports and articles last year.

People don’t like writing. At least, that’s the feeling that I got when I saw how many people were feeding prompts to OpenAI and having it do the heavy lifting. Not every output was good. Some of it was pretty terrible. Some of it was riddled with errors. And even the things that looked great still had that aura of something like the uncanny valley of writing. Almost right but somehow wrong.

Part of the reason for that was the way that people just assumed that the AI output was better than anything they could have come up with and did no further editing to the copy. I barely trust my own skills to publish something with minimal editing. Why would I trust a know-it-all computer algorithm? Especially with something that has technical content? Blindly accepting an LLM's attempt at content creation is just as crazy as assuming that there's no need to double-check math calculations if the result is outside of your expectations.

Copilot works for this analogy because copilots are there to help and to be a check against error. The old adage of "trust but verify" is absolutely the way they operate. No pilot would assume they were infallible and no copilot would assume everything the pilot said was right. Human intervention is still necessary in order to make sure that the output matches the desired result. The biggest difference today is that when it comes to AI art generation or content creation, a failure to produce the desired result means wasted time. When an autopilot makes a mistake while landing an airliner, the results are far more horrific.

People want to embrace AI to take away the drudgery of their jobs. It’s remarkably similar to how automation was going to take away our jobs before we realized it was really going to take away the boring, repetitive parts of what we do. Branding AI as “autopilot” will have negative consequences for adoption because people don’t like the idea of a computer or an algorithm doing everything for them. However, copilots are helpful and can take care of boring or menial tasks leaving you free to concentrate on the critical parts of your job. It’s not going to replace us as much as help us.


Tom’s Take

Terminology matters. Autopilot is cold and restrictive. Having a copilot sounds like an adventure. Companies are wise not to encourage the assumption that AI is going to take over jobs and eliminate workers. The key is that people should see the solution as offering a way to offload tasks and ask for help when needed. It’s a better outcome for the people doing the job as well as the algorithms that are learning along the way.