AI Should Be Concise

One of the things I’ve noticed about the rise of AI is that everything feels so wordy now. I’m sure it’s a byproduct of the popularity of ChatGPT and other LLMs built to generate language. You’ve likely seen it too on websites full of paragraphs that feel unnecessary. Maybe you’re looking for the answer to a specific question. You could be trying to find a recipe or even a code block for a problem. What you find instead is a wall of text that feels pieced together by someone who doesn’t know how to write.

The Soul of Wit

I feel like the biggest issue with those overly wordy answers comes down to the way people feel about unnecessary exposition. AI is built to write something on a topic and fill out a word count. Much like a student padding the page length of a required report, AI doesn’t know when to shut up. It adds words that aren’t really required. I realize there are modes of AI content creation that value being concise, but those aren’t the default.
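To be fair, you can ask for that mode. As a rough sketch of what that looks like (using the OpenAI Python SDK, with a model name, word limit, and prompt wording that are purely my own illustrative choices, not a recommendation):

```python
# A minimal sketch of forcing conciseness onto an LLM response.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment;
# the model name and word limit are illustrative choices, not recommendations.
from openai import OpenAI

client = OpenAI()

def concise_answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer in 50 words or fewer. "
                    "No preamble, no restating the question, no filler."
                ),
            },
            {"role": "user", "content": question},
        ],
        max_tokens=120,  # hard ceiling as a backstop to the prompt
    )
    return response.choices[0].message.content

print(concise_answer("How do I raise the shields?"))
```

The point is that the terseness has to be requested explicitly. Left to its defaults, the model pads.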

I use AI quite a bit to summarize long articles, many of which I’m sure were created with AI assistance in the first place. AI is quite adept at removing the unneeded pieces, likely because it knows where they were inserted in the first place. It took me a while to understand why this bothered me so much. What is it about having a computer spend way too much time explaining answers to you that feels wrong?


Then it hit me. It felt wrong because we already have a perfect example of what an intelligence should feel like when it answers you. It comes courtesy of Gene Roddenberry and sounds just like his wife, Majel Barrett-Roddenberry. You’ve probably guessed that it’s the Starfleet computer system found on board every Federation starship. If you’ve watched any series since The Next Generation, you’ve heard the voice of the ship’s computer executing commands and providing information to crew members, guests, and even holographic projections.

Why is the Star Trek computer a better example of AI behavior to me? In part because it provides information in the most concise manner possible. When the captain asks a question, the answer is produced. No paragraphs necessary. No use of “delve” or “convolutional” needed. It produces the requested info promptly. Could you imagine a ship’s computer that drones on for three paragraphs before telling the first officer that the energy pulse is deadly and the shields need to be raised?

Quality Over Quantity

I’m sure you already know someone who thinks they know a lot about a subject and is more than happy to tell you all about it. Do they tend to answer questions or explain concepts tersely? Or do they add filler words and talk around the tricky pieces in order to seem like they have more knowledge than they actually do? Can you tell the difference? I’m willing to bet that you can.

That’s why GPT-style LLM content creation feels so soulless. We’re conditioned to appreciate precision. The longer someone goes on about something, the more likely we are to either tune out or suspect the answer isn’t accurate. That’s actually one of the ways interrogators are trained to uncover falsehoods and lies: people stretching the truth are more likely to use more words in their statements.

There’s another reason behind the padding, too. Think about how many ads are usually running on sites with this kind of AI-generated content. Is it just a few? Or as many as possible, inserted between every available paragraph? It’s not unlike video sites like YouTube inserting ads at certain points in a video. If an extra ad can be added to any video that runs at least twenty minutes, how long do you think the average video is going to be for channels that rely on ad revenue? The actual substance of the content isn’t as important as getting those extra ad clicks.


Tom’s Take

It’s unlikely that my ramblings about ChatGPT are going to change things any time soon. I’d rather have the precision of Star Trek than the hollow content that spins yarns about family life before getting to the actual recipe. Maybe I’m in the minority. But I feel like my audience would prefer getting the results they want and doing away with the unnecessary pieces. Could this blog post have been a lot shorter and just said “Stop being so wordy”? Sure. But it’s long because it was written by a human.

Scotty Isn’t DevOps

I was listening to the most recent episode of our Gestalt IT On-Premise IT Roundtable, where Stephen Foskett mentioned one of our first episodes, in which we discussed whether or not DevOps was a disaster, or as I put it, a “dumpster fire”. Take a listen here:

Around 13 minutes in, I have an exchange with Nigel Poulton where I mention that the ultimate operations guy is Chief Engineer Montgomery Scott of the USS Enterprise. Nigel countered that Scotty was the epitome of the DevOps mentality because his crazy ideas were what kept the Enterprise going. In this post, I hope to show that not only was Scott not a DevOps person, he should be considered the antithesis of DevOps.

Engineering As Operations

In the fictional biography of Mr. Scott, all he ever wanted to do was be an engineer. He begrudgingly took promotions but found ways to get back to the engine room of the Enterprise. He liked working on starships. He hated building them. His time working on the transwarp drive of the USS Excelsior proved that in the third Star Trek film.

Scotty wasn’t developing new ideas to implement on the Enterprise. He didn’t spend his time figuring out how to make the warp engines run at increased efficiency. He didn’t experiment with the shields or the phasers. Most of his “miraculous” moments didn’t come from deploying new features to the Enterprise. Instead, they were the fruits of his ability to streamline operations to combat unforeseen circumstances.

In The Apple, Scott was forced to figure out a way to get the antimatter system back online after it was drained by an unseen force. Everything he did in the episode was focused on restoring functions to the Enterprise. This wasn’t the result of a failed upgrade or a continuous deployment scenario. The operation of his ship was impacted. In Is There in Truth No Beauty?, Mr. Scott even tells the designer of the Enterprise’s engines that he can’t handle them as well as Scotty can. Mr. Scott was boasting that he was better at operations than a developer. Plain and simple.

In the first Star Trek movie, Admiral Kirk is pushing Scotty to get the Enterprise ready to depart within hours after an eighteen-month refit. Scotty keeps pushing back that they need more time to work out the new systems and go on a shakedown cruise. Does that sound like a person who wants to do CI/CD to a starship? Or does it sound more like the caution of an operations person wanting to make sure patches are deployed in a controlled way? Every time someone in the series or movies suggested doing major upgrades or redesigns to the Enterprise, Scotty always warned against doing it in the field unless absolutely necessary.

Montgomery Scott isn’t the King of DevOps. He’s a poster child for simple operations. Keep the systems running. Deal with problems as they arise. Make changes only if necessary. And don’t monkey with the systems! These are the tried-and-true refrains of a person who knows that his expertise isn’t in building things but in making them run.

Engineering as DevOps

That’s not to say that Star Trek doesn’t have DevOps engineers. The Enterprise-D had two of the best examples of DevOps that I’ve ever seen – Geordi LaForge and Data. These two operations officers spent most of their time trying new things with the Enterprise. And more than a few crises arose because of their development aspirations.

LaForge and Data were constantly experimenting on the Enterprise in an attempt to make it run better. Given that the mission of the Enterprise-D didn’t have the same five-year limit as the original, they were expected to keep the ship’s technology current while out in space. However, their experiments often led to problems. Destabilizing the warp core, causing shield harmonics failures, and even infecting the Enterprise’s computer with viruses were somewhat commonplace during Geordi’s tenure as Chief Engineer.

Commander Data was also rather fond of finding out about new technology that was being developed and trying to integrate it into the Enterprise’s systems. Many times he mentioned learning about something being developed at the Daystrom Institute and wanting to see if it would work for them. Which leads me to think that the Daystrom Institute is the Star Trek version of Stack Overflow – copy some things you think will make everything better and hope it doesn’t blow up because you didn’t understand it.

Even if it was a plot convenience device, it felt like the Enterprise was often caught in the middle of applying a patch or an upgrade right when the action started. An exploding star or an enemy vessel always waited until just the right moment to put the Enterprise in harm’s way. Even Starfleet seemed to decide the Enterprise was the only vessel that could help after the DevOps team took the warp core offline to make it run 0.1% faster.

Perhaps instead of pushing forward with an aggressive DevOps mentality for the flagship of the Federation, Geordi and Data would have done better to take a lesson from Mr. Scott: wait for appropriate windows to make changes and upgrades, and quit tinkering with their ship so often that it felt like it was being held together by duct tape and hope.


Tom’s Take

Despite being fictional characters, Scotty, Geordi, and Data all represent different aspects of the technology we look at today. Scotty is the tried-and-true operations person. Geordi and Data are leading the charge to keep the technology fresh. Each of them has their strong points, but it’s hard to overlook Scotty as a bastion of the simple operations mentality. Even when they all met in Relics, Scotty was thinking more about making things work and less about making them fast or pretty or efficient. I think those pushing the DevOps mentality would do well to take a seat and listen to the venerable chief engineer of the original Enterprise.

Networking Grows To Invisibility


Networking is done. The way you have done things before is finished. The writing has been on the wall for quite a while now. But it’s going to be a good thing.

The Old Standard

Networking purchase models look much different today than they did in the past. Enterprises no longer buy a switch or a router. Instead, they buy solution packages. The minimum purchase unit is a networking pod or rack. Perhaps your proof-of-concept minimum is a leaf-spine fabric of no fewer than three switches. Firewalls are purchased in pairs. Nothing in networking is simple any longer.

With the advent of software, even the deployment of these devices is different. Automation and orchestration systems handle provisioning as the devices are brought online. Network monitoring systems ensure the devices are operating correctly via API calls instead of relying on SNMP. Analytics and telemetry systems can pull statistics on the fly and create datasets that give you insight into all manner of network traffic. The intelligence built into the platform supporting the hardware is more apparent than ever before.
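As a rough illustration of what API-driven monitoring looks like next to an SNMP walk, here is a minimal sketch in Python. The device address, endpoint path, and token are hypothetical placeholders; real platforms expose their own RESTCONF or vendor-specific interfaces, but the shape is the same: structured JSON you can feed straight into an analytics pipeline.

```python
# Minimal sketch: polling interface counters over a REST API instead of SNMP.
# The device address, endpoint path, and token are hypothetical placeholders;
# real platforms expose their own (RESTCONF or vendor-specific) APIs.
import requests

DEVICE = "https://switch01.example.com"
TOKEN = "REPLACE_ME"  # hypothetical API token

def get_interface_counters() -> dict:
    resp = requests.get(
        f"{DEVICE}/api/v1/interfaces",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()
    # Structured JSON instead of SNMP OIDs: easy to push into telemetry tooling.
    return {
        iface["name"]: iface["counters"]["in_octets"]
        for iface in resp.json()["interfaces"]
    }

if __name__ == "__main__":
    for name, octets in get_interface_counters().items():
        print(f"{name}: {octets} octets in")
```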

Networking is no longer about raw connectivity speed. Instead, networking is about stability: providing a transport network that stays healthy instead of growing by leaps and bounds every few years. Organizations looking to model their IT departments after service providers and cloud providers care more about having a reliable system than about having the most cutting-edge technology.

This is nothing new in IT. Both storage and virtualization have been moving in this direction for a while. Hardware wizardry has been replaced by software intelligence. Custom hardware has given way to merchant silicon that is easy to replace and build on. The expertise in deployment and operations has more to do with integration and architecture than with simple day-to-day setup.

The New Normal

Where does that leave networkers? Are we a dying breed, soon to join the Unix admins of the world and the telco experts on a beach in retirement? The reality is that things aren’t as dire for us as one might believe.

It is true that we have shifted our thinking away from operations and more toward system building. Rather than worrying about whether the switch ports have been provisioned, we instead look at creating resilient constructs that can survive outages and traffic spikes. Networks are becoming the utility service we’ve always hoped they would be.

This is not the end. It’s the beginning. As networks join storage and compute as utilities in the data center, the responsibilities for our sphere of wizardry are significantly reduced. Rather than spending our time solving crazy user or developer problems, we can instead focus on the key points of stability and availability.

This is going to be a huge shift for the consumers of IT as well. As cloud models have already shown us, people really want to get their IT on their schedules. They want to “buy” storage and networking when it’s needed without interruption. Creating a utility resource is the best way to accomplish that. No longer will the blame for delays be laid at the feet of IT.

But at the same time, the safety net of IT will be gone as well. Unlike Chief Engineer Scott, IT can’t save the day when a developer needs to solve a problem outside of their development environment. Things like First Hop Redundancy Protocols (FHRP), multipathing, and even vMotion contribute to bad developer behavior. Without these being available in a utility IT setup, application writers are going to have to solve their own problems with their own tools. The network team will end up being leaner and smarter, and everything is going to run much more smoothly.
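What does “solving their own problems with their own tools” look like? Roughly, it means the application retries and fails over on its own instead of trusting the network to hide a dead path. A minimal sketch, with endpoints and timings that are purely hypothetical:

```python
# Minimal sketch: application-level failover instead of relying on the network
# (FHRPs, vMotion, etc.) to hide failures. Endpoints and timings are hypothetical.
import time
import requests

ENDPOINTS = [
    "https://api-a.example.com/orders",  # hypothetical primary
    "https://api-b.example.com/orders",  # hypothetical secondary
]

def fetch_orders(retries_per_endpoint: int = 2) -> dict:
    last_error = None
    for url in ENDPOINTS:
        for attempt in range(retries_per_endpoint):
            try:
                resp = requests.get(url, timeout=2)
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException as err:
                last_error = err
                # Simple backoff before retrying or failing over to the next endpoint.
                time.sleep(0.5 * (attempt + 1))
    raise RuntimeError("All endpoints failed") from last_error

if __name__ == "__main__":
    print(fetch_orders())
```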


Tom’s Take

I live for the day when networking is no different than the electrical grid. I would rather have a “dumb” network that provides connectivity than hope against hope that my “smart” network has all the tricks it needs to solve everyone’s problems. When the simplicity of the network is the feature and problems get solved in the application stack instead of the network, stability and reliability will rule the day.