The Keynote Answers You Expect

Keynote Starfield

Good morning! How are you?

I’d like to talk about keynotes, again. You know, one of my favorite subjects. I’ve been watching them intently for the past few years just hoping that we’re going to see something different. As a technical analyst and practitioner I love to see and hear the details behind the technology that drives the way our IT companies develop. Yet every year I feel more and more disappointed by the way that keynotes take everything and push it into the stratosphere to get an 80,000-foot view of the technology. It’s almost like the keynotes aren’t written for practitioners. Why? The answer lies in the statement at the top of this post.

Perfunctory Performances

When most people ask someone how their day is going they’re not actually looking for a real response. They most certainly aren’t asking for details on how exactly the person’s day is going. They’re usually looking for one of two things:

  1. It’s going great.
  2. It could be better.

Any more than that drags someone down into a conversation that they don’t want to have. Asking someone about their day is a polite way of acknowledging them and making a bit of small talk. The person asking the question almost always doesn’t care. Think back to a time when you asked that question and someone unloaded on you with all their issues like a car acting up or a baby that wouldn’t sleep through the night. Did you actually want to know that? Or were you really trying to avoid awkward silence during a transaction?

That same rule applies to a keynote address. CEOs and leaders have a ton of information they would like to share with the world. They want to talk about their advantages and their investments and how they plan on being the best company in the market next quarter. However, the audience is like the above example. They don’t care about the details in the answer. They really only want to hear two things:

  1. The company is doing great.
  2. We made some stuff that will make us better.

That’s it. Those are the only two things you need to say during a keynote to keep the audience happy. Boil every keynote you’ve ever watched down to the minimum and you’ll see exactly that. Even when the company hasn’t been doing so well it’s always framed as a path to getting better. If the company doesn’t have something super exciting to show you they’ll either dress up something they’ve had for a while or talk about new partnerships that will deliver The Thing that everyone wants to hear about.

You may think to yourself that this is silly. You are the one that wants to hear about the technical implementation details and the integrations. You want to understand how this fancy new AI/ML/VR/AR/OMG/WTF implementation works. I’m right there with you, friend. But I have some bad news for you. We aren’t the audience for a keynote.

Audience Participation

Who is the audience for a keynote? It’s an easy question to answer for any company anywhere. Just look at who is sitting in the front section in the middle of the room. Keynotes are designed to appeal to exactly two groups of people, not counting company employees:

  1. Investors
  2. Analysts

That’s it. The peanut gallery behind that section couldn’t matter any less. Sure, they’ll clap when some new announcement gets made. Or they’ll enjoy the slick video that has been put together by the marketing team. Unless the company is trying to set some kind of tone with a huge audience, those people behind the investors and analysts might as well not even exist. You want proof? Why is the Steve Jobs Theater at Apple’s HQ only a 1,000-seat room, even with millions of Apple fans out there? Because Apple only cares about analysts and investors. Just like every other company.

When you realize this fact you see why keynotes are structured the way they are. Investors only want to hear the company is doing well. Their investment is protected and they will make money. How? With these new things we’re going to show you. Likewise, analysts like hearing the company isn’t going to go out of business next quarter, but it’s the tech that gets them excited. Analysts are usually specialized enough, though, that they only care about two or three things in a big keynote. They’re more likely to pull someone aside and ask more in-depth questions after the big show, as opposed to getting all the big details on stage when they’re having a hard time keeping up with the announcements anyway.


Tom’s Take

Because these two groups only want to hear those two specific kinds of answers, that’s all the keynote is going to provide. It’s just like someone asking how your day is going. Once you know they don’t really care to hear any of the details you start answering with simple statements designed to mollify them and no more. Why bother making someone uncomfortable with the details when they don’t really want to know them anyway? Better to just stick to the script and keep them happy. Honestly, I’m at the point where I realize that keynotes aren’t made for me. I’d rather find the time to talk to someone in the hallway later to learn the real details, as opposed to watching the choreographed performance for the audience in the front row. Maybe then they’ll tell me how their day is actually going.

The Legacy of Cisco Live

Legacy: Something transmitted by or received from an ancestor or predecessor or from the past. — Merriam-Webster

Cisco Live 2024 is in the books. I could recap all the announcements but that would take forever. You can find an AI that can summarize them for you much faster. That’s because AI was the largest aspect of what was discussed. Love it or hate it, AI has taken over the IT industry for the time being. More importantly it has also focused companies on the need to integrate AI functions into their product lines to avoid being left behind by upstarts.

That’s what you see in the headlines. Something I noticed while I was there was how the march of time has affected us all. After eighteen years I finally realized the sessions today have less in common with the ones I was attending back in 2010 than ever before. Development and advanced feature configuration have replaced the tuning of routing protocols and CallManager deployment tips. It’s a game for younger engineers who have less to unlearn from the legacy technologies I’ve spent my career working on.

Leaving a Legacy

But legacy is a word with more than one definition. It’s easy to think of legacy as old technology or technical debt. But it can also be something you leave to the next generation, as the definition at the top of this post says. What we leave behind for those we teach and lead is as important as any system out there. Because those lessons persist long after the technology has fallen away.

For the first time that I could remember, my friends were bringing their kids to the show. Not to enjoy a vacation or to hang out by the pool. They were coming because it was time for them to step forward and learn and make connections in the industry. Folks like Jody Lemoine, Rita Younger, Martin Duggan, and Brandon Carroll shared the passion and excitement of Cisco Live with their older children as a way to help them grow.

We’re not done with our careers yet but we are at the point where it’s time to show those behind us the path. It is no longer a race to consume knowledge as quickly as possible and put it into use. It’s about helping people by leveraging our legacy to teach them and help them along the way. Our group welcomed the kids with open arms. We talked to them, shared our perspectives, and made them feel welcome. We showed them the same courtesy that was shown to us years before.

Inspiring Others

The legacy of Cisco Live is more than just teaching the next generation. It’s seeing the way that the conference has transformed. I will admit that my activity on social media is a pale shadow of what it used to be. The face of Cisco Live is now influencers like Lexie Cooper and Alexis Bertholf, who have embraced new platforms and found their voice to share content with others in a way that is comfortable for them to consume. The number of people that want to read a long blog post is waning. Concepts are communicated in short bursts. That’s where the next generation excels.

Seeing people running across the show floor to meet new creators like Kevin Nanns reminded me of a time when I was doing the same thing. I wanted to know everyone that I could to learn as much as possible. Now I get to see others doing the same and smile. New faces are meeting their heroes and building their communities. The process continues no matter the platform. People find their voice and share with others. Whether it’s a podcast or TikTok or a casual conversation over lunch. It’s about making those connections and keeping them going.


Tom’s Take

That’s where I started. That’s why I do it. To meet new people and help them build a community. I have my community of wonderful Cisco Live people. I have Tech Field Day. I have The Corner, which is my most lasting Cisco Live legacy. I’m excited to see so many people passing their legacy along to the next generation. I love seeing new faces in the creator space popping up to share their stories and their journeys. Cisco Live will be in San Diego in 2025 and I can’t wait to see who shows up and what legacy they’ll leave.

Butchering AI

I once heard a quote that said, “The hardest part of being a butcher is knowing where to cut.” If you’ve ever eaten a cut of meat you know that the difference between a tender steak and a piece of meat that needs hours of tenderizing is just inches apart. Butchers train for years to be able to make the right cuts in the right pieces of meat with speed and precision. There’s even an excellent Medium article about the dying art of butchering.

One thing that struck me in that article is how the art of butchering relates to AI. Yes, I know it’s a bit corny and not an easy segue into a technical topic but that transition is about as subtle as the way AI has come crashing through the door to take over every facet of our lives. It used to be that AI was some sci-fi term we used to describe intelligence emerging in computer systems. Now, AI is optimizing my PC searches and helping with image editing and creation. It’s easy, right?

Except some of those things that AI promises to excel at doing are things that professionals have spent years honing their skills at performing. Take this article announcing the release of the Microsoft CoPilot+ PC. One of the things they are touting as a feature is using neural processing units (NPUs) to allow applications to automatically remove the background from an image in a video clip editor. Sounds cool, right? Have you ever tried to use an image editor to remove or blur the background of an image? I did a few weeks ago and it was a maddening experience. I looked for a number of how-to guides and none of them had good info. In fact, most of the searches just led me to apps that claimed to use some form of AI to remove the background for me. Which isn’t what I wanted.

Practice Makes Perfect

Bruce Lee said, “I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.” His point was that practice of a single thing is what makes a professional stand apart. I may know a lot about history, for example, but I’ll never be as knowledgeable about Byzantine history as someone who has spent their whole career studying it. Humans develop skills via repetition and learning. It’s how our brains are wired. We pick out patterns and we reinforce them.

AI attempts to simulate this pattern recognition and operationalize it. However, the learning process that we have simulated isn’t perfect. AI can “forget” how to do things. Sometimes this is built into the system with something like unstructured learning. Other times it’s a failure of the system inputs, such as a corrupted database or connectivity issue. Either way the algorithm defaults back to a state of being a clean slate with no idea how to proceed. Even on their worst days a butcher or a plumber never forgets how to do their job, right?

The other maddening thing is that the AI peddlers try to convince everyone that teaching their software means we never have to learn ever again. After all, the algorithm has learned everything and can do it better than a human, right? That’s true, as long as the conditions don’t change appreciably. It reminds me of signature-based virus detection from years ago. As long as the infection matched the definition you could detect it. As soon as the code changed and became polymorphic it was undetectable. That led to the rise of heuristic-based detections and eventually to the state of endpoint detection and response (EDR) we have today.
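To make the signature comparison concrete, here’s a toy sketch in Python. The signature bytes and the XOR “mutation” are invented purely for illustration; a real antivirus engine is vastly more sophisticated than a substring match.

```python
# Toy signature-based detection: match known byte patterns in a payload.
SIGNATURES = {
    "EvilWorm.A": b"\xde\xad\xbe\xef\x13\x37",  # hypothetical signature
}

def scan(payload: bytes) -> list[str]:
    """Return the names of any known signatures found in the payload."""
    return [name for name, sig in SIGNATURES.items() if sig in payload]

# A polymorphic variant re-encodes its body on every infection, so the
# stored pattern never appears and the scan comes back clean. That is
# the failure mode that pushed the industry toward heuristics and EDR.
mutated = bytes(b ^ 0x5A for b in SIGNATURES["EvilWorm.A"])  # XOR "packed"
assert scan(SIGNATURES["EvilWorm.A"]) == ["EvilWorm.A"]
assert scan(mutated) == []
```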

That’s a long way to say that the value in training someone to do a job isn’t in them gaining just the knowledge. It’s about training them to apply that knowledge in new situations and extrapolate from incomplete data. In the above article about the art of butchering, the author mentions that he was trained on a variety of animals and knows where the best cuts are for each. That took time and effort and practice. Today’s industrialized butcher operations train each person to make a specific cut. So the person cutting a ribeye steak doesn’t know how to make the cuts for ribs or cube steaks. They would need to be trained on that input in order to do the task. Not unlike modern AI.


Tom’s Take

You don’t pay a butcher for a steak. You pay them for knowing how to cut the best one. AI isn’t going to remove the need for professionals. It’s going to make some menial tasks easier to do but when faced with new challenges or the need to apply skills in an oblique way we’re still going to need to call on humans trained to think outside the box to do it without hours and days of running simulations. The human brain is still unparalleled in its ability to adapt to new stimuli and apply old lessons appropriately. Maybe you can train an AI to identify the best parts of the cow but I’ll take the butcher’s word for it.

On Open Source and Volunteering

I saw a recent post on LinkedIn from Alex Henthorn-Iwane that gave me pause. He was talking about how nearly two-thirds of GitHub projects are maintained by one or two people. He also quoted some statistics around how projects are maintained by volunteers and unpaid members as opposed to more institutional support from people getting paid to do the work. It made me reflect on my own volunteering journey and realize that open source projects and other volunteer organizations aren’t so different after all.

An Hour A Week

Most of my readers know that one of my passion projects outside of Tech Field Day and this humble blog is the involvement of my children in Scouting. I spend a lot of my free time volunteering as a leader and organizer for various groups. I get to touch grass quite often. At least I do when I’m not stuck in meetings or approving paperwork.

One of the things that struck me in Alex’s post was how he talked about the lack of incoming talent to help with projects as older maintainers are aging out. We face a similar problem in scouting. Rather than our volunteers getting too old to do the work we face the issue of the kids aging out. When the kids leave the program through hitting age limits or through growing bored with the program their parents usually go with them. Since those parents are the source of our volunteers we quickly have gaps where our most promising leaders are gone after only a couple of years. Only the most dedicated volunteers stick around after their kids have moved on.

Recruiting people to be a part of the fun, whether a project or an organization, is hard. People have even less time now than they did a few years ago. It could be social media or binge watching TV or doing the work of an extra person or two, but finding help is almost impossible. One of the ways that we’ve tried to bridge that gap is to make sure that people who want to help aren’t overwhelmed. We give them little jobs to do to help get them into the flow of things before asking them to do more. That would translate well to open source projects. Give people small tasks or little modules to work on instead of throwing them into the deep end of the pool with no warning. That’s a quick way to alienate your volunteers. It also keeps them from burning themselves out quickly.

We ease them in by saying “it’s only an hour a week.” Realistically it’s more like two or three hours per week to start. However, if you try to burden people with too much all at once they will run away and never look back. Even if the developers are overwhelmed and need the help, they need to understand that shifting the load to other volunteers isn’t a sudden thing. It takes time to slowly move over tasks and evaluate how people are doing before letting them shoulder more of the load.

My Way or the Highway

The other volunteer issue that I run into is the people who are entrenched in what they do. This applies greatly to the people that are the die-hard maintainers of a project. They have their way of doing things and that’s how it’s going to be. Just take a stroll through any Linux kernel mailing list thread and see how those tried-and-true things are encouraged, or in some cases enforced.

I’m all for having structure and a measured approach to how things are done. Where it causes problems for people is when that structure takes precedence over common sense. In my volunteer work I’ve seen a number of old timers who tell me that “this is the way it’s done” or “my way works” when it clearly doesn’t or can lead to other problems. Worse yet, when challenged those people tend to clam up and decide that anyone that disagrees with them should just leave or get with the program. It leads to hard feelings and zero desire to want to help out in the future. The well is poisoned not only for that person but for anyone that knows about the story of how they were rejected or marginalized.

People that are shouldering the load want help. Even if they’re so set in their ways that they can’t conceive of a different way to do it, we still need to offer our help. What we need to realize on our side is that their way has worked for them all this time. We don’t need to come crashing through the front door and try to upset everything they’ve worked hard to accomplish. Instead, we need to ask questions that help us understand the process and make suggestions where appropriate instead of demands that must be met. My Way or the Highway doesn’t work in either direction. Compromise is the key to accomplishing our mutual goals.


Tom’s Take

Writing an open source library isn’t like taking a group camping in the woods. However, the process isn’t totally foreign. A group of dedicated people are doing something that is thankless but could end up changing lives. We’re always overworked and we want people to help. We just need them to understand why we do things the way we do them. And if that means pushing back it’s up to us to make sure we don’t scare anyone off that is genuinely interested in helping out. All volunteer work lives and dies based on who is helping us accomplish the end goal. Don’t get hung up on the details when evaluating those that choose to give of their time for you.

Copilot Not Autopilot

I’ve noticed a trend recently with a lot of AI-related features being added to software. They’re being branded as “copilot” solutions. Yes, Microsoft Copilot was the first to use the name and the rest are just trying to jump in on the brand recognition, much like using “GPT” last year. The word “copilot” is so generic that it’s unlikely to be trademarked without adding more, like the company name or some other unique term. That made me wonder if the goal of using that term was simply to cash in on brand recognition or if there was more to it.

No Hands

Did you know that an airplane can land entirely unassisted? It’s true. It’s a feature commonly called Auto Land and it does exactly what it says. It uses the airport’s Instrument Landing System (ILS) to land automatically. Pilots rarely use it because of a variety of factors, including the need for tiny last-minute adjustments during a very stressful part of the flight as well as the equipment requirements, such as a fairly modern ILS system. That doesn’t even mention that use of Auto Land snarls airport traffic because of the need to hold other planes outside ILS range to ensure only one plane can use it.

The whole thing reminds me of how autopilot is used on most flights. Pilots usually take the controls during takeoff and landing, which are the two most critical phases of flight. For the rest, autopilot is used a lot of the time. Those are the boring sections where you’re just flying a straight line between waypoints on your flight plan. That’s something that automated controls excel at doing. Pilots can monitor but don’t need to have their full attention on the readings every second of the flight.

Pilots will tell you that taking the controls for the approach and landing is just smart for many reasons, chief among them that it’s something they’re trained to do. More importantly, it places the overall control of the landing in the hands of someone that can think creatively and isn’t just relying on a script and some instrument readings to land. Yes, that is what ILS was designed to do but someone should always be there to ensure that what’s been sent is what should be followed.

Pilot to Copilot

As you can guess, this process parallels the use of AI in your organization quite well. AI may have great suggestions and may even come up with novel ways of making you more productive, but it’s not the only solution to your problems. I think the copilot metaphor was perfectly illustrated by the rush to have GPT chatbots write reports and articles last year.

People don’t like writing. At least, that’s the feeling that I got when I saw how many people were feeding prompts to OpenAI and having it do the heavy lifting. Not every output was good. Some of it was pretty terrible. Some of it was riddled with errors. And even the things that looked great still had that aura of something like the uncanny valley of writing. Almost right but somehow wrong.

Part of the reason for that was the way that people just assumed that the AI output was better than anything they could have come up with and did no further editing to the copy. I barely trust my own skills to publish something with minimal editing. Why would I trust a know-it-all computer algorithm? Especially with something that has technical content? Blindly accepting an LLM’s attempt at content creation is just as crazy as assuming that there’s no need to double-check math calculations when the result is outside of your expectations.

Copilot works for this analogy because copilots are there to help and to be a check against error. The old adage of “trust but verify” is absolutely the way they operate. No pilot would assume they were infallible and no copilot would assume everything the pilot said was right. Human intervention is still necessary in order to make sure that the output matches the desired result. The biggest difference today is that when it comes to AI art generation or content creation, a failure to produce a desired result means wasted time. When an autopilot on an airliner makes a mistake in landing, the results are far more horrific.

People want to embrace AI to take away the drudgery of their jobs. It’s remarkably similar to how automation was going to take away our jobs before we realized it was really going to take away the boring, repetitive parts of what we do. Branding AI as “autopilot” will have negative consequences for adoption because people don’t like the idea of a computer or an algorithm doing everything for them. However, copilots are helpful and can take care of boring or menial tasks leaving you free to concentrate on the critical parts of your job. It’s not going to replace us as much as help us.


Tom’s Take

Terminology matters. Autopilot is cold and restrictive. Having a copilot sounds like an adventure. Companies are wise not to encourage the assumption that AI is going to take over jobs and eliminate workers. The key is that people should see the solution as offering a way to offload tasks and ask for help when needed. It’s a better outcome for the people doing the job as well as the algorithms that are learning along the way.

User Discomfort As A Security Function

If you grew up in the 80s watching movies like me, you’ll remember WarGames. I could spend hours lauding this movie but for the purpose of this post I want to call out the sequence at the beginning when the two airmen are trying to operate the nuclear missile launch computer. It requires the use of two keys, one in the possession of each airman. They must be inserted into two different locks located more than ten feet from each other. The reason is that launching the missile requires two people to agree to do something at the same time. The two-key scene appears in a number of movies as a way to show that so much power needs to have controls.

However, one thing I wanted to talk about in this post is the notion that those controls need to be visible to be effective. The two key solution is pretty visible. You carry a key with you but you can also see the locks that are situated apart from each other. There is a bit of challenge in getting the keys into the locks and turning them simultaneously. That not only shows that the process has controls but also ensures the people doing the turning understand what they’re about to do.

Consider a facility that is so secure that you must leave your devices in a locker or secured container before entering. I’ve been in a couple before and it’s a weird feeling to be disconnected from the world for a while. Could the facility do something to ensure that the device didn’t work inside? Sure they could. Technology has progressed to the point where we can do just about anything. But leaving the device behind is as much about informing the user that they aren’t supposed to be sharing things as it is about controlling the device. Controlling a device is easy. Controlling a person isn’t. Sometimes you have to be visible.

Discomfort Design

Security solutions that force the user out of a place of comfort are important. Whether it’s a SCIF for sharing sensitive data or forcing someone to log in with a more secure method the purpose of the method is about attention. You need the user to know they’re doing something important and understand the risks. If the user doesn’t know they’re doing something that could cause problems or expose something crucial you will end up doing damage control at some point.

Think of something as simple as sitting in the exit row on an airplane. In my case, it’s for Southwest Airlines. There’s more leg room but there’s also a responsibility to open the door and assist in evacuation if needed. That’s why the flight attendants need to hear you acknowledge that warning with a verbal “yes” before you’re allowed to sit in those seats. You have admitted you understand the risks and responsibilities of sitting there and you’re ready to do the job if needed.

Security has tried to become unobtrusive in recent years to reduce user friction. I’m all about features like using SSL/TLS by default on websites or easing restrictions on account sharing or even using passkeys in place of passwords. But there also comes a point when hiding the security reduces its effectiveness. What about phishing emails that put lock emojis next to URLs to make them seem secure even when they aren’t? How about cleverly crafted login screens for services that are almost indistinguishable from the real thing unless you bother to check the URL? It could even be the tried-and-true cloned account on Facebook or Instagram asking a friend for help unlocking their account, only to steal your login info and start scamming everyone on your friends list.

The solution is to let users know they’re secure. Make it uncomfortable for them so they are acutely aware of heightened security. We deal with it all the time in other areas of our lives outside of IT. Airport screenings are a great example. So are heightened security measures at federal buildings. You know you’re going somewhere that has placed an emphasis on security.

Why do we try to hide it in IT? Is it because IT causes stress due to it being advanced technology? Are we worried that users are going to drop our service if it is too cumbersome to use the security controls? Or do we think that the investment in making that security front and center isn’t worth the risk of debugging it when it goes wrong? I would argue that these are solved problems in other areas of the world and we have just accepted them over time. IT shouldn’t be any different.

Note that discomfort shouldn’t lead to a complete lack of usability. It’s very easy to engineer a system that needs you to reconfirm your credentials every 10 minutes to ensure that no one has hacked you. And you’d quit using it because you don’t want to type in a password that often. You have to strike the right balance between user friendly and user friction. You want them to notice they’re doing something that needs their attention to security but not so much that they’re unable to do their job or use the service. That’s where the attention should be placed, not in cleverly hiding a biometric scanning solution or certificate-based service for the sake of saying it’s secure.
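As a thought experiment, here’s a minimal sketch of that balance in Python: routine work stays low-friction, but sensitive actions trigger a visible step-up check. The action names and timeout are hypothetical illustrations, not any real product’s policy.

```python
import time

# Hypothetical examples of actions worth a visible security speed bump.
SENSITIVE_ACTIONS = {"change_password", "export_data", "disable_mfa"}
STEP_UP_MAX_AGE = 300  # seconds since the last deliberate re-auth

def authorize(session: dict, action: str) -> str:
    """Decide whether an action proceeds quietly or demands attention."""
    if action not in SENSITIVE_ACTIONS:
        return "allow"  # everyday tasks stay frictionless
    # Sensitive actions force a fresh, noticeable authentication step.
    # That visibility is the point: the user knows this matters.
    if time.time() - session.get("last_step_up", 0) > STEP_UP_MAX_AGE:
        return "require_step_up"  # e.g., a passkey or token prompt
    return "allow"
```

The design choice here is where to spend the friction budget: instead of interrupting everyone every ten minutes, you save the interruption for the moments when attention actually protects something.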


Tom’s Take

I’ll admit that I tend to take things for granted. I had to deal with a cloned Facebook profile this past weekend and I worried that someone might try to log in and do something with my account. Then I remembered that I have two-factor authentication turned on and my devices are trusted so no one can impersonate me. But that made me wonder if the “trust this device” setting was a bit too easy to trust. I think making sure that your users know they’re protected is more critical. Even if it means they have to do something more performative from time to time. They may gripe about changing a password every 30 days or having to pull out a security token but I promise you that discomfort will go away when it saves them from a very bad security day.

Human Generated Questions About AI Assistants

I’ve taken a number of briefings in the last few months that all mention how companies are starting to get into AI by building an AI virtual assistant. In theory this is the easiest entry point into the technology. Your network already has a ton of information about usage patterns and trouble spots. Network operations and engineering teams have learned over the years to read that information and provide analysis and feedback.

If marketing is to be believed, no one in the modern world has time to learn how to read all that data. Instead, AI provides a natural language way to ask simple questions and have the system provide the data back to you with proper context. It will highlight areas of concern and help you grasp what’s going on. Only you don’t need to get a CCNA to get there. Or, more likely, it’s more useful for someone on the executive team to ask questions and get answers without the need to talk to the network team.

I have some questions that I always like to ask when companies start telling me about their new AI assistant. They help me understand how it’s being built.

Question 1: Laying Out LLMs

My first question is always:

Which LLM are you using to power your system?

The reason is that there are only two real options. You’re either paying someone else to do it as a service, like OpenAI, or you’re pulling down your own large language model (LLM) and building your own system. Both have advantages and disadvantages.

The advantage of a service-based offering is that you don’t need to program anything. You just feed the data to the LLM and it takes off. No tuning needed. It’s fast and universally available.

The downside of a service-based model is the fact that it costs money. And if you’re using it commercially it’s going to cost more than a simple monthly fee. The more you use it, the more expensive it gets. If your vendor is pulling thousands of daily requests from the LLM, is that factored into the fee they’re charging you? What happens when OpenAI’s prices go up?

The advantages of building your own system are that you have complete control over the way the data is being processed. You tune the LLM and you own the way it’s being used. No need to pay more to someone else to do all the work for you. You can also decide how and when features are implemented. If you’re updating the LLM on your schedule you can include new features when they’re ready and not when OpenAI pushes them live and makes them available for everyone.

The disadvantages of building your own system involve maintenance. You have to update and patch it. You have to figure out what features to develop. You have to put in the work. And if the model you use goes out of support or is no longer being maintained, you have to swap to something new and hope that all your functions are going to work with the new one.
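As a rough sketch of the two paths, here’s what each might look like in Python. This assumes the OpenAI Python SDK for the service route and Hugging Face transformers for the self-hosted route; the model names and prompt are illustrative only, not recommendations.

```python
# Option 1: service-based. No hosting or tuning, but every request is billed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Summarize today's interface errors."}],
)
print(reply.choices[0].message.content)

# Option 2: self-hosted. You control tuning and upgrade timing, but you
# also own the patching, the feature work, and the hardware it runs on.
from transformers import pipeline

local_llm = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
result = local_llm("Summarize today's interface errors.", max_new_tokens=100)
print(result[0]["generated_text"])
```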

Question 2: Data Sources

My second question:

Where does the LLM data come from?

It may seem simple at first, right? You’re training your LLM on your data so it gives you answers based on your environment. You’d want that to be the case so it’s more likely to tell you things about your network. But that insight doesn’t come out of thin air. If you want to feed your data to the LLM to get answers, you’re going to have to wait while it studies the network and comes up with conclusions.

I often ask companies if they’re populating the system with anonymized data from other companies to provide baselines. I’ve seen this before from companies like Nyansa, which was bought by VMware, and Raza Networks, which is part of HPE Aruba. Both of those companies, which came out long before the current AI craze, collected data from customers and used it to build baselines for everyone. If you wanted to see how you compared to other higher education or medical verticals, the system could tell you what those types of environments looked like, with the names obscured of course.

Pre-populating the LLM with information from other companies is great if your stakeholders want to know how they fare against other companies. But it also runs the risk of introducing data that shouldn’t be in the system. That could create situations where you’re acting on bad information or chasing phantoms in the organization. Worse yet, your own data could be used in ways you didn’t intend in order to feed other organizations. Even with the names obscured, someone might be able to engineer a way to obtain knowledge about your environment that you don’t want everyone to have.
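For a sense of what “names obscured” might mean in practice, here’s a toy anonymization pass in Python. The field names are hypothetical, and as the paragraph above warns, salted hashing alone doesn’t guarantee that no one can re-identify your environment.

```python
import hashlib

def anonymize(record: dict, salt: str) -> dict:
    """Replace identifying fields with salted hashes before sharing telemetry."""
    redacted = dict(record)
    for field in ("customer", "hostname", "site"):  # hypothetical identifiers
        if field in redacted:
            digest = hashlib.sha256((salt + str(redacted[field])).encode())
            redacted[field] = digest.hexdigest()[:12]
    return redacted

# The metrics survive for baselining; the names do not.
print(anonymize({"customer": "Acme Hospital", "ap_count": 412}, salt="s3cret"))
```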

Question 3: Are You Seeing That?

My third question:

How do you handle hallucinations?

Hallucination is the term for when the AI comes up with an answer that is false. That’s right, the super intelligent system just made up an answer instead of saying “I don’t know”. Which is great if you’re trying to convince someone you’re smart or useful. But if the entire reason why I’m using your service is accurate answers about my problems I’d rather have you say you don’t have an answer or you need to do research instead of giving me bad data that I use to make bad decisions.

If a company tells me they don’t really see hallucinations then I immediately get concerned, especially if they’re leveraging OpenAI for their LLM. I’ve talked before about how ChatGPT has a really bad habit of making up answers so it always looks like it knows everything. That’s great if you’re trying to get the system to write a term paper for you. It’s really bad if you try to reroute traffic in your network around a non-existent problem. I know there are many systems out there that can help reduce hallucinations, such as retrieval augmented generation (RAG), but I need that to be addressed up front instead of a simple “we don’t see hallucinations” because that makes me feel like something is being hidden or glossed over.
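For reference, here’s the general shape of the RAG approach as a minimal Python sketch. The retriever and generate callables are hypothetical stand-ins for whatever vector store and LLM a vendor actually uses; the key idea is grounding the answer in retrieved data and giving the model explicit permission to say it doesn’t know.

```python
def answer(question: str, retriever, generate) -> str:
    """Ground an LLM answer in retrieved documents to curb hallucination."""
    # 1. Pull the most relevant snippets from your own telemetry or docs.
    context = "\n".join(retriever.top_k(question, k=3))
    # 2. Constrain the model to that context and allow an honest refusal.
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, reply exactly: I don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```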


Tom’s Take

These aren’t the only questions you should be asking about AI and LLMs in your network but they’re not a bad start. They encompass the first big issues that people are likely to run into when evaluating an AI system. How do you do your analysis? What is happening with my data? What happens when the system doesn’t know what to do? Sure, there are always going to be questions about cost and lock-in but I’d rather know the technology is sound before I ever try to deploy the system. You can always negotiate cost. You can’t negotiate with a flawed AI.

Repetition Without Repetition

I just finished spending a wonderful week at Cisco Live EMEA and getting to catch up with some of the best people in the industry. I got to chat with trainers like Orhan Ergun and David Bombal and see how they’re continuing to embrace the need for people in the networking community to gain knowledge and training. It also made me think about a concept I recently heard about that turns out to be a perfect analogy to my training philosophy even though it’s almost 70 years old.

Practice Makes Perfect

Repetition without repetition. The idea seems like a tautology at first. How can I repeat something without repeating it? I’m sure that the people in 1967 who picked up the book by Soviet neurophysiologist Nikolai Aleksandrovitsch Bernstein were just as confused. Why should you do things over and over again if not to get good at performing the task or learning the skill?

The key in this research from Bernstein lay in how the practice happens. In this particular case he looked at blacksmiths to see how they used hammers to strike the pieces they were working on. The most accurate of his test subjects didn’t just perform the same movements over and over again. Instead, they had some variability in their skill that allowed them to be more accurate or efficient over time. They weren’t just going through the motions, as it were. They were adapting their motions to the need at the moment. This allowed them to adjust their aim if the piece had moved or needed a lighter touch in an area that was thinned too quickly.

Bernstein said this about the way that the blacksmiths practiced their art:

“The process of practice towards the achievement of new motor habits essentially consists in the gradual success of a search for optimal motor solutions to the appropriate problems. Because of this, practice, when properly undertaken, does not consist in repeating the means of solution of a motor problem time after time, but in the process of solving this problem again and again by techniques which we changed and perfected from repetition to repetition. It is already apparent here that, in many cases, ‘practice is a particular type of repetition without repetition’…”

The quote above illustrates a big shift in thinking for people who play sports or perform some kind of task. Instead of merely repeating the movements over and over again until perfection (the ‘means of the solution’) you should instead focus on solving the problem over and over again and adapting your skill to that end. It sounds silly and somewhat pedantic, but the key is in the shift of thinking. For basketball players, it’s not about perfecting your spin move to get around a defender. It’s about understanding the need to get around the defender and how best to accomplish that for different kinds of people defending you.

Avoiding Autopilot

Most of the content you’ll see around the concept of repetition without repetition is for sports players practicing skills. However, I think the concept extends perfectly to the IT certification space and troubleshooting skillset as well. There are a number of important things that we need to learn in order to do our jobs or earn a specialization but we need to remember that the goal is to solve problems and show mastery, not to memorize commands and perform them like a simple batch file.

Here’s a perfect example that I’m very guilty of doing. When you log into a Cisco router to do something, what do you normally do first when you get to the CLI prompt? You almost always need to be in privileged EXEC mode, right? That’s the enable command. When we want to configure something on the router we usually have to be in the router configuration mode, which is the configure terminal command. So far, so good, right? Most of you have already picked up on the fact that you can shorten those commands to save time typing out the whole name, which is an important skill to have when you’re configuring a series of devices or trying to do it in a short timeframe. So enable, configure terminal instead becomes en, conf t. It’s like muscle memory at this point.

How many times have you logged into a router to check the routing table and accidentally typed en, conf t from muscle memory, only to remember that the routing table has to be displayed from EXEC mode, not configuration mode? You chide yourself for typing conf t and back out to look at the table. But what you’ve really done is shown the power and drawbacks of repetition. If you spend hours upon hours typing in the same commands over and over again you will type them in the same way every time. So much so, in fact, that you forget that you’re doing it until you realize you put in something that you shouldn’t. You knew when you logged in that you wanted to display the routing table. You knew that was available in EXEC mode. And yet your brain and fingers automatically typed the same commands you always type when you log into the router.
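If you’ve lived this, the session looks something like the sketch below (prompts and output abbreviated, classic IOS behavior; some newer releases are more forgiving about show commands in configuration mode):

```
Router> en
Router# conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)# show ip route
                     ^
% Invalid input detected at '^' marker.
Router(config)# do show ip route    ! or escape the habit with "do"
Router(config)# end
Router# show ip route               ! the command you wanted all along
```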

The idea of repetition without repetition says that we need to consider the how of solving a problem, and the skills needed, above and beyond the simple skills themselves. Sure, there may only be one or two commands that achieve a desired output or effect, but we should know both how they impact the performance of the device and how they can impact the outcome of a situation. This is especially important for exams that like to restrict your ability to use specific commands or are written to direct you in a specific line of thinking. Anyone who has ever taken the CCIE lab exam knows how this works. They restrict you from using common commands or give you a question with two possible answers only to limit that to one with a later requirement. The test asks you to configure something in an earlier section and then gives you a task that can undo that configuration if you’re not aware of how it interacts with everything else. If you’ve ever created a routing redistribution loop by accident you know what that feels like.

The Indictment of AI

In a way, repetition without repetition is the key to what makes a person an apt problem solver. By approaching problems with a mindset and not just a skillset you open your world to new possibilities and considerations. You know there is more than one way to skin a cat, as the old saying goes. You’re smarter than an artificial intelligence, which only works within a set of bounds, repeating what it’s told or applying a very narrow focus every time to provide consistent results.

Computer programs and algorithms are dumb because they will solve the problem the same way each time they are executed. People will solve the problem and then start analyzing the results to find new, better, and faster ways to implement solutions. That’s the heart of learning. It’s not just performing the subtasks of the skill to perfection every time. It’s about learning how to implement them in a better way each time and arriving at better solutions to problems when the variables are changed. It’s why the human mind, adapted over centuries and millennia to look for patterns, can be coaxed into adapting those patterns to new concepts and made to “grow up” by learning over time to adjust to new inputs or fresh data. That, more than anything, is why repetition without repetition makes us better than the AI we’re programming to eclipse us.


Tom’s Take

When I first heard of this concept I thought it was some new idea from sports science born of modern research techniques. I was shocked to learn it was discovered before I was born and has roots in some of the oldest trades we can think of. What it proves is that the human mind and body are very wonderful things that react perfectly when challenged in the right way. The brain will adapt and overcome when presented with new inputs. The way we grow and improve ourselves is not rote memorization or continuous skill repetition. Instead, if we internalize the importance of the outcome over the means of getting there, we will find ourselves smarter and more flexible when new challenges come our way.

A Handy Acronym for Troubleshooting

While I may be getting further from my days of being an active IT troubleshooter it doesn’t mean that I can’t keep refining my technique. As I spend time looking back on my formative years of doing troubleshooting either from a desktop perspective or from a larger enterprise role I find that there were always a few things that were critical to understand about the issues I was facing.

Sadly, getting that information out of people in the middle of a crisis wasn’t always super easy. I often ran into people that were very hard to communicate with during an outage or a big problem. Sometimes they were complicit because they made the mistake that caused it. They also bristled at the idea of someone else coming to fix something they couldn’t or wouldn’t. Just as often I ran into people that loved to give me lots of information that wasn’t relevant to the issue. Whether they were nervous talkers or just had a bad grasp on the situation it resulted in me having to sift through all that data to tease out the information I needed.

The Method

Today, as I look back on my career, I’d like to propose a method for collecting the information you need in order to effectively troubleshoot an issue.

  • Scope: How big is this problem? Is it just a single system or is it an entire building? Is it every site? If you don’t know how widespread the problem is you can’t really begin to figure out how to fix it. You need to properly understand the scope. That also includes understanding the scope of the system for the business. Taking down a reservation system for an airline is a bigger deal than guest Wi-Fi being down at a restaurant.
  • Timeline: When did this start happening? What occurred right before? Were there any issues that you think might have contributed here? It’s important to make the people you’re working with understand that a proper timeline is critical because it allows you to eliminate issues. You don’t want to spend hours trying to find the root cause in one system only to learn it wasn’t even powered on at the time and the real cause is in a switch that was just plugged in.
  • Frequency: Is this the first time this has happened? Does it happen randomly or seemingly on a schedule? This one helps you figure out if it’s systemic and regular or just cosmic rays. It also forces your team or customers to think about when it’s occurring and how far back the issue goes. If you come in thinking it’s a one-off that happened yesterday only to find out it’s actually been happening for weeks or months you’ll take a much different approach.
  • Urgency: Is this an emergency? Are we talking about a hospital ER being down or a typo on a documentation page? Do I need to roll out and spend the whole night fixing this, or is it something that I can look at on a scheduled visit? Be sure to note the reasoning behind why they chose to make it a priority too. Some customers love to make everything a dire emergency just to ensure they get someone out right away. At least until it’s time to pay the emergency call rate.

A four-step plan that’s easy to remember: Scope, Timeline, Frequency, Urgency. STFU.

Memory Aids

Okay, you can stop giggling now. I did that on purpose. In part to help you remember what the acronym was. In part to help you take a bit of a relaxed approach to troubleshooting. And, in some ways, to help you learn to get those chatterboxes and pushy stakeholders off your back. If your methodology includes STFU they might figure out quickly that you need to be the one asking the questions and they need to be the ones giving the answers, not the other way around.

And yes, each of these little steps would have saved me so much time in my old role. For example:

  • Scope – Was the whole network down? Or did one of the kids just unplug your Ethernet cable?
  • Timeline – Yes, I would assume that the problem with VTP started when you put that lab switch into your network.
  • Frequency – Has this server seriously been beeping every 30 seconds for the last two years? Did you bother to look at the error message?
  • Urgency – Do you really need me to drive three hours to press the F1 key on a keyboard?

I seriously have dozens of examples but these are four of the stories I tell all of the time to show just how some basic understanding can help people do more than they think.
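If you like to codify your checklists, here’s a purely illustrative sketch of the method as a Python data structure. The field names and gating logic are mine, invented for this post, not a real ticketing tool:

```python
from dataclasses import dataclass

@dataclass
class IssueIntake:
    scope: str      # one laptop, one building, or every site?
    timeline: str   # when it started and what changed right before
    frequency: str  # first occurrence, random, or on a schedule?
    urgency: str    # hospital ER down, or a typo in the docs?

    def ready_to_troubleshoot(self) -> bool:
        # Don't start chasing root causes until all four are answered.
        return all([self.scope, self.timeline, self.frequency, self.urgency])
```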


Tom’s Take

People love mnemonic devices to remember things. Whether it’s My Very Eager Mother Just Served Us Nine (Pizzas) to remember the 8 planets and that one weird one or All People Seem To Need Data Processing to remember the seven layers of the OSI Model. I remember thinking through the important need-to-know information for doing some basic initial troubleshooting and how easily it fit into an acronym that could be handy for other things too when you’re in a stressful situation. Feel free to use it.

Painless Progress with My Ubiquiti Upgrade

I’m not a wireless engineer by trade. I don’t have a lab of access points that I’m using to test the latest and greatest solutions. I leave that to my friends. I fall more in the camp of having a working wireless network that meets my needs and keeps my family from yelling at me when the network is down.

Ubiquitous Usage

For the last five years my house has been running on Ubiquiti gear. You may recall I did a review back in 2018 after having it up and running for a few months. Since then I’ve had no issues. In fact, the only problem I had was not with the gear but with the machine I installed the controller software on. Turns out hard disk drives do eventually go bad and I needed to replace it and get everything up and running again. Which was my intention when it went down sometime in 2021. Of course, life being what it is, I deprioritized the recovery of the system. I realized after more than a year that my wireless network hadn’t hiccuped once. Sure, I couldn’t make any changes to it, but the joy of having a stable environment is that you don’t need to make constant changes. Still, I was impressed that I had no issues that necessitated recovering my controller software.

Flash forward to late 2023. I’m talking with some of the folks at Ubiquiti about a totally unrelated matter and I just happened to mention that I was impressed at how long the system had been running. They asked me what hardware I was working with and when I told them they laughed and said I needed to check out their new stuff. I was just about to ask them what I should look at when they told me they were going to ship me a package to install and try out.

Dreaming of Ease

Tom Hildebrand really did a great job because I got a UPS shipment at the beginning of December with a Ubiquiti Dream Machine SE, a new U6 Pro AP, and a G5 Flex Camera. As soon as I had the chance I unboxed the UDM SE and started looking over the installation process. The UDM SE is an all-in-one switch, firewall, and controller for the APs. I booted the system and started to do the setup process. I panicked for a moment because I realized that my computer was currently doing something connected to my old network and I didn’t want to have to dig through the pile to find a laptop to connect in via Ethernet to configure things.

That’s when my first surprise popped up. The UDM SE allows you to download the UniFi app to your phone and do the setup process from a mobile device. I was able to configure the UDM SE with my network settings and network names and get it staged and ready to go without the need for a laptop. That was a big win in my book. Lugging your laptop to a remote site for an installation isn’t always feasible. And counting on someone to have the right software isn’t either. How many times have you asked a junior admin or remote IT person what terminal program they’re using only to be met with a blank stare?

Once the UDM SE was up and running, getting the new U6 AP joined was easy. It joined the controller, downloaded the firmware updates and adopted my new (old) network settings. Since I didn’t have my old controller software handy I just recreated the old network settings from scratch. I took the opportunity to clean out some older compatibility issues that I was ready to be rid of thanks to an old Xbox 360 and some other ancient devices that were long ago retired. Clean implementations for the win. After the U6 was ready to go I installed it in my office and got ready to move my old AP to a different location to provide coverage.

The UDM SE detected that there were two APs that were running but part of a different controller. It asked me if I wanted to take them over and I happily responded in the affirmative. Sadly, when asked for the password to the old controller I drew a blank, because that was two years ago and I can barely remember what I ate for breakfast. Ubiquiti has a solution for that, and with some judicious use of the reset button I was able to reset the APs and join them to the UDM SE with no issues. Now everything is humming along smoothly. The camera is still waiting to be deployed once I figure out where I want to put it.

How is it all working? Zero complaints so far. Much like my previous deployment, everything is humming right along and all my devices joined the new network without complaint. All the APs are running on new firmware and my new settings mean fewer questions about why something isn’t working because the kids are on a different network than the printer or one of the devices can’t download movies or something like that. Given how long I was running the old network without any form of control, I’m glad it picked right up and kept going. Scheduling the right downtime at the beginning of the month may have had something to do with that, but otherwise I’m thrilled to see how it’s going.


Tom’s Take

Now that I’ve been running Ubiquiti for the last five years, how would I rate it? I’d say that people who don’t want to rely on consumer APs from a big box store to run their home network need to check Ubiquiti out. I know my friend Darrel Derosia is doing some amazing enterprise things with it in Memphis, but I don’t need to run an entire arena. What I need is seamless connectivity for my devices without worrying about what’s going to go down when I walk upstairs. My home network budget precludes enterprise gear. It fits nicely with Ubiquiti’s price point and functionality. Whether I’m trying to track down a lost Nintendo Switch or limit bandwidth so game updates aren’t choking out my cable modem, I’m pleased with the performance and flexibility I have so far. I’m still putting the UDM SE through its paces, and once I get the camera installed and working with it I’ll have more to say, but rest assured I’m very thankful to Tom and his team for letting me kick the tires on some awesome hardware.

Disclaimer: The hardware mentioned in this post was provided by Ubiquiti at no charge to me. Ubiquiti did not ask for a review of their equipment and the opinions and perspectives represented in this post are mine and mine alone with no expectation of editorial review or compensation.