The Silver Lining of Cisco Live

Cisco Live 2022 Attendees by the big sign

Cisco Live was last week and it was an event full of both relief and worry. Having not seen any of my friends and colleagues at the Geek Summer Camp since 2019, I was excitedly anticipating how things would go this year. While I was thrilled to see everyone in real life again, there were also challenges by the end of the event that we need to discuss.

I could spend volumes detailing every little thing that went on but no one really wants to read that kind of discussion. I’ll just summarize some of the stuff that I liked, some of it that I didn’t, and some bigger things that everyone needs to think about.

What Worked for Me

I was happy to once more be a part of the CCIE Advisory Council. We have been meeting via Webex for the entire pandemic but there’s just something about being in a room together that fosters conversation and sharing. The ideas that we discussed are going to have a positive impact on the program as we look at what the future of certifications will be. There’s a lot more to this topic than I can cover in just a quick summary paragraph.

I was a bit confused about the Social Media Hub hours on Sunday, so I resurrected my original tweet about meeting people right outside registration:

I had lots of people stop by that morning and say hello. It warmed my heart to see everyone before the conference even started. Thankfully, the Cisco Live social team came out to tell me that you could get to the Social Media Hub even though the show floor wasn’t open yet. I went in and grabbed a comfy chair to await the opening tweetup.

The tweetup itself was a good one. Lots of new faces means lots of people getting introduced to the social side of Cisco. That means the community is going to continue to grow and prosper. One point of weirdness for me was when people would introduce me to their friends by pointing at the Social Media Hub and saying, “Tom is the reason we have all this.” While that’s technically true, it still makes me feel weird because it’s the community that keeps driving Cisco Live forward. No one person defines it for everyone else.

I enjoyed the layout of the World of Solutions this year because I wasn’t packed in with everyone else elbowing my way through crowded alleys trying to visit a booth. It felt like Cisco put some thought into having ample space for people to spread out instead of trying to maximize space usage. I know that this is partially a result of the COVID pandemic (which we’ll cover more of in a bit) but I wouldn’t be sad to see this layout stick around for a few more years. Less crowded means better conversations.

The keynote was fun for me, mostly because of where I enjoyed watching it. We put together a watch party for the Tech Field Day Extra delegates and it was more fun than I realized. We were able to react live to the presentation without fear of making a calamitous noise in the arena. I had forgotten how much fun the MST3K style of keynote commentary could be.

Lastly, the social media team knocked it out of the park. They were on top of the tweets and answering questions throughout the event. I have some issues with the social media stuff in general but the team did a top notch job. They were funny and enjoyed bantering back and forth with everyone. Social media is hard and doing it as a job is even harder. I just hope we didn’t scar anyone with our tweets.

What I Was Concerned About

Not everything is perfect at events. As someone that runs them for a living I can tell you little things go wrong all the time and need to be addressed. Here are some of the things that happened that made me take a few notes.

The communication across the whole event felt a bit rushed, as if certain things were announced at the last minute or only announced in certain places. Nailing down the best way to share information is always difficult, but when in doubt you need to share it everywhere. If you have access to social media, email, digital signage, and other avenues, use them all. It’s better to overshare and remove doubt than to undershare and end up fielding questions anyway.

There was some grumbling about the way that some of the social media aspects were handled this year. I think that sentence gets typed every year. Some of it comes down to the focus that stakeholders want to put on certain aspects of the event. If they want more video content that’s going to favor folks that are comfortable recording videos. If they want more long-form written items that naturally prioritizes those that are good at writing. No one is ever going to find the perfect mix but, again, communication is key. If we know what you want to see we can help make more of it.

The other thing that annoys me a bit, specifically about Las Vegas, is the land rush of sponsored parties. On Monday evening I was walking back to my room at the Luxor to drop my backpack, and more than half the restaurants and bars I walked past had banners out front and signs stating they were closed for a special event or booked until a certain time. While I appreciate that the sponsors of the event are willing to spend money to entice attendees to come to their party and hear about how awesome their products are, it also creates an artificial crunch for everything else. If half the bars are closed then the other half have to pick up the remainder of the hungry people. That means a half-full exclusive party causes a two-hour wait at the restaurant next door. While this is nothing new in the conference community, the lack of other options at the south end of the Las Vegas Strip means you’re pretty much stuck taking a taxi to another hotel if you don’t want to wait to have a burger or pizza. In full transparency, one of those Monday parties was the Cisco Champions event that I attended, but there were also two other parties booked concurrently in Ri Ra that night.

What We Should All Be Asking

Now it’s time for the elephant in the convention center. The reason why we haven’t had an in-person Cisco Live in three years is COVID. We were locked down during the pandemic and conference organizers erred on the side of caution in 2021. 2022 was a hopeful year and many conferences were back to being live events. RSA happened in San Francisco the week before Cisco Live. There were thousands of people there and a reported 16,000 people at Cisco Live.

The reports coming out of Cisco Live were that a lot of people tested positive for COVID after returning home. Cisco had a strict policy of requiring proof of vaccination to attend. Yet people were testing positive as early as Sunday before the conference even started. The cases started rising throughout the week and by the time folks got home on Thursday evening or Friday my Twitter feed was full of friends and colleagues that came back with the extra strength conference crud.

Thankfully no one has been seriously affected as of this writing. Most everyone that I spoke with said they feel like they have a cold and are tired but are powering through and should be clear to end their home quarantine by today. I, amazingly, managed to avoid getting infected. I tested every day and each time it came back negative. I’m not actually sure how I managed that, as I wasn’t wearing a mask like I really should have been and I was around people for most of the day. I could attribute it to luck, but the logical side of my brain says it’s more likely that I caught it sometime in May and didn’t realize it, so my body had the latest antibody patch to keep me from coming down with it.

Between RSA and Cisco Live there are a lot of people asking questions about how in-person conferences of this size are going to happen in the future with COVID being a concern. RSA was tagged as a “super spreader” event. Cisco Live is on the verge of being one as well. There are lots of questions that need to be asked. Can a conference ensure the safety of the attendees? Are there measures that should be mandatory instead of encouraged? What value do we get from face-to-face interaction? And will the next event see fewer people now that we know what happens when we get a lot of them in one place?


Tom’s Take

I could go on and on about Cisco Live but the important thing is that it happened. No last minute cancelations. No massive outbreaks leading to serious health problems. We all went and enjoyed the event, even if the result was coming home to quarantine. I went fully expecting to get infected and I didn’t. Maybe I should have done it a little differently but I think a lot of people are saying the same thing now. I hope that Cisco and other companies are encouraged by the results and continue to have in-person events going forward. Not everyone is going to attend for a variety of reasons. But having the option to go means building back the community that has kept us going strong through difficult times. And that’s a reason to see a silver lining.

Practice Until You Can’t Get It Wrong

One of the things that I spend a lot of my time doing is teaching and training. Not the deeply technical stuff like any one of the training programs out there, or even the legion of folks doing entry-level education on sites like YouTube. Instead, I spend a lot of my time bringing new technologies to the fore and discussing how they impact everyone. I also spend a lot of time with youth, teaching them skills.

One of the things that I’ve learned over the years is that it’s important to not only learn something but to reinforce it as well. How much we practice is just as important as how we learn. We’re all a little guilty of doing things just enough to be proficient without truly mastering a skill.

Hours of Fun

You may have heard of the rule popularized by Malcolm Gladwell that it takes 10,000 hours to become an expert at something. There’s been a lot of research debunking this “rule of thumb”. In fact, it turns out that the way you practice and your predisposition to how you learn have a lot to do with the process as well.

When I’m teaching youth, I see them start a new skill and keep going until they get their first success. It could be tying a knot or setting up a tent or some other basic skill. Usually, with whatever it is, they get it right and then decide they are proficient in the skill. And that’s the end of it until they need to be tested on it or something forces them to use it later.

For me, the proficiency aspect of basic skills is maddening. We teach people to do things like tying knots or programming switch ports but we don’t encourage them to keep practicing. We accept that proficiency is enough. Worse yet, we hope that the way they will gain expertise is by repetition of the skill. We don’t set the expectation of continued practice.

That’s where the offhanded Gladwell comment above really comes from. The length of time may have been completely arbitrary but the reality is that you can’t really master something until you’ve done it enough that the skill becomes second nature. Imagine someone riding a bicycle for the first time. If they stopped when they were able to pedal the bike they’d never be able to ride it well enough to maneuver in traffic.

Likewise, we can’t rely on simple proficiency for other tasks either. If we accept that an operations person learns VLAN configuration once and simply hope they’ll know it well enough to do it again later, we’re going to be frustrated when they have to keep looking up the commands for the task or, worse yet, when they bring down the network because they didn’t remember that you need to use the add keyword on a trunk port and they wipe out a chunk of the network core.
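
To make that failure mode concrete, here is the classic example in standard Cisco IOS syntax. Treat it as an illustration of the pitfall rather than a change plan; the interface and VLAN numbers are placeholders I picked for the sketch.

    ! Trunk currently carrying VLANs 10, 20, and 30
    Switch(config)# interface GigabitEthernet1/0/1
    ! Replaces the entire allowed list with only VLAN 40, so 10, 20, and 30 stop forwarding
    Switch(config-if)# switchport trunk allowed vlan 40
    ! Appends VLAN 40 to the existing list, which is almost always what you meant
    Switch(config-if)# switchport trunk allowed vlan add 40

One missing word is the difference between adding a VLAN and pruning everything else off a core uplink, and that is exactly the kind of reflex that only comes from practicing past the point of first success.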

Right vs. Wrong

For all those reasons above I ask my students to take things a little further. Rather than just doing something until you have an initial success I ask them to practice until they have it ingrained into their motor pathways. Put more simply:

Don’t practice until you get it right. Practice until you can’t get it wrong.

The shift in thinking helps people understand the importance of repeated practice. Getting it right is one thing. Understanding all the possible ways something can be done or every conceivable situation is something entirely different. Sure, you can configure a VLAN. Can you do it on every switch? Do you know what order the commands need to be done in? What happens if you switch them? Do you know what happens when you enable two contradictory features?

Obviously there are things you’re not going to need to practice this much all the time. One of my favorites is the people in CCIE study groups that spend way more time working on things like BGP leak maps or the various ways that one could configure QoS on a Frame Relay circuit. Are these important things to know? Yes. Are they more important to know than basic layer 2/3 protocols or the interactions of OSPF and BGP when redistributing? No.


Tom’s Take

When I was younger, I watched The Real Ghostbusters cartoon. One of the episodes featured Winston asking Egon if he could read Sumerian. His response? “In my sleep, underwater, and in the dark. Of course I can read Sumerian.”

Practice the basics until you understand them. Don’t miss a beat and make sure you have what you need. But don’t stop there. Keep going until you can’t possibly forget how to do something. That’s how you know you’ve mastered it. In your sleep, underwater, and in the dark. Practice until you can’t get it wrong.

Friday Thoughts Pre-Cisco Live

It’s weird to think that I’m headed out to Cisco Live for the first time since 2019. The in-person parts of Cisco Live have been sorely missed during the pandemic. I know it was necessary all around but I didn’t realize how much I enjoyed being around others and learning from the community until I wasn’t able to do it for an extended period of time.

Now we’re back in Las Vegas and ready to take part in something that has been missed. I’ve got a busy lineup of meetings with the CCIE Advisory Council and Tech Field Day Extra but that doesn’t mean I’m not going to try and have a little fun along the way. And yes, before you ask, I’m going to get the airbrush tattoo again if they brought the artist back. It’s a tradition as old as my CCIE at this point.

What else am I interested in?

  • I’m curious to see how Cisco responds to their last disappointing quarter. Are they going to tell us that it was supply chain? Are they going to double down on the software transition? And how much of the purchasing that happened was pull through? Does that mean the next few quarters are going to be down for Cisco?
  • With the drive to get more and more revenue from non-hardware sources, where does that leave the partners of Cisco? Is there still a space for companies to create solutions to work with aspects that Cisco doesn’t do well? Or will they find themselves in a Sherlock situation eventually?
  • I’m cautiously optimistic that a successful conference will mean more of the same going forward. I know there are going to be reports of COVID coming out of Vegas because there’s no way to have a group that big together and be 100% free of it. If there is a big issue with rates skyrocketing after a combined two weeks of RSA and Cisco Live, it will force companies to rethink a return to conference season.

Tom’s Take

If you’re in Vegas next week I hope to see you around! Even if it’s just in passing in the hallway, stop me and say hello. It’s been far too long since we’ve interacted as a group and I want to hear how things have been going. For those that aren’t going, make sure to stay tuned for my recap.

The Tyranny of Technical Debt, Numerically

A Candlestick Phone (image courtesy of Wikipedia)

This week on the Gestalt IT Rundown, I talked about the plan by Let’s Encrypt to reuse some reserved IP address space. I’ve talked about this before and I said it was a bad idea then for a lot of reasons, mostly related to the fact that modern operating systems are coded not to allow 240/4 as a valid address space, for example. Yes, I realize that when the address space was codified back in the early days of the Internet that decisions were made to organize things and we “lost” a lot of addresses for experimental reasons. However, this is not the only time this has happened. Nor is it the largest example. For that, we need to talk about the device that you’re very likely reading this post on right now: your phone.

By the Numbers

We’re going to be referring to the North American Numbering Plan (NANP) in this post, so my non-US readers are going to want to click that link to understand how phone numbering works in the US. The NANP was devised back in the 1940s by AT&T as a way to assign numbers to telephone exchanges so they were easy to contact. The first big decision was to disallow a 1 or a 0 digit at the start of the prefix code. On a pulse dialing system, which sends electrical pulses instead of the tones we associate with DTMF, the error rate for a 1 as the first digit was pretty high because a leading pulse was generally ignored by the equipment. A 0 was used as the signal to a switchboard operator to get assistance. So the decision was made to restrict the 1 and the 0 for technical reasons. That’s the first technical debt choice that was made.

Next, the decision was made to make it easier for the CO switchboard operators to memorize numbers using mnemonics. That meant assigning letters to the numbers on the phone rotary dial (later keypad). Every number got three letters, leaving out Q and Z because they’re weird. Every number, that is, except 1 and 0. Because they weren’t used as the start of a prefix code they didn’t get letters assigned, as noted above. Every telephone exchange was named based on the numerical prefix, which couldn’t have a 0 or a 1 in the first two digits. For example, the exchanges that started with 47x would be named using a letter from the 4 followed by a letter from the 7. Those words started with GR, such as “GRanite”, “GReenwood”, and so on. It was handy to remember those as long as the telephone system didn’t get too big. If you ever watch an old movie from the 50s and hear someone asking to talk to a person whose number is KLondike5-6000, that’s why. We’ll come back to KLondike in a minute. The use of words instead of numbers was slowly phased out starting in 1958 because it was just easier for people to use nothing but numbers.

So what about dialing outside of your local exchange? Well, for that you need an area code, right? But an area code looks just like a prefix code. How can you tell someone that you want to dial a different state? Well, AT&T came up with an interesting rule there. Remember how no prefix code could have a 0 or a 1 in its first two digits? AT&T said that area codes MUST have a 0 or a 1 as the middle digit. That organization allowed the switchboard operator or signaling equipment to know that the first three-digit code was an area code, followed by seven digits for the prefix and handset number. Simple, right? Combined with using the 1 digit to signal a long-distance call, you had a system that worked well for a number of years. Almost fifty, in fact, from 1947 all the way to 1995.
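
If you want to see how mechanical that rule really was, here is a small Python sketch of the pre-1995 logic described above. It’s an illustration of the routing decision, not a complete NANP validator, and the function name is mine.

    def classify_classic_nanp(code: str) -> str:
        """Classify a three-digit code under the original (pre-1995) NANP rules."""
        if len(code) != 3 or not code.isdigit():
            return "invalid"
        if code[0] in "01":
            return "invalid"      # neither area codes nor prefixes could start with 0 or 1
        if code[1] in "01":
            return "area code"    # a middle digit of 0 or 1 marked an area code
        return "prefix"           # otherwise it was a local exchange prefix

    # 212 and 405 classify as area codes, 555 and 476 classify as prefixes, 047 is invalid
    for code in ["212", "405", "555", "476", "047"]:
        print(code, classify_classic_nanp(code))

That middle-digit check was the entire mechanism the switching equipment had for telling “dial another state” apart from “dial across town”, which is why relaxing it in 1995 rippled through so much hardware.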

What happened in 1995? The telephone administrators realized they were going to run out of room given the explosion of phone numbers. The phone companies, now more than just AT&T, had realized all the way back in the 1960s that growth would eventually force them to do away with the rule restricting the middle digit of an area code to a 1 or a 0. That’s why area codes created after 1995 have middle digits in the 2-9 range. Thanks to things like mobile phones we doubled or tripled the number of phone numbers we were going to need. Which meant tearing up all those careful plans about how to use the numbers that were created back in the 1940s to solve technical challenges.

To The Klondike

What about that KLondike number I mentioned earlier? Well, KL is 55 on the dial/keypad, so a KLondike5 number actually starts with 555. Most movie buffs will tell you that any number with a 555 prefix code is a fake number, since that’s what you hear in movies all the time. Originally these numbers were assigned to local exchanges only and used for testing purposes, except for 555-1212, which is the universal directory assistance number in every area code. In 1994, people realized that a huge quantity of numbers could be added to the NANP pool by reclaiming those test numbers.

However, if someone gave you a 555 number in a business setting or at a bar or club, how likely would you be to say that it was a fake or invalid number? Pretty likely, given the preponderance of usage in film. In fact, 555-0100 through 555-0199 are still set aside for “fake” numbers and entertainment purposes, not unlike 192.0.2.0/24 being reserved for documentation purposes.

The 555-XXXX number range is the perfect example of why a plan like Let’s Encrypt’s suggestion to reclaim the space is ultimately going to fail. You’re going to have to do a significant amount of programming to get every operating system to not immediately reject 240/4 as invalid or not immediately assume 127/8 is a loopback address. Those decisions were made long ago, relatively speaking, and are going to take time and effort to undo. Remember that AT&T knew they were going to need to change the area code rules back in the 1960s and it still took 30 years to put it into practice.
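
For a sense of how deeply those assumptions are baked in, you don’t even have to leave a scripting language’s standard library. This is a quick Python sketch, and it only shows one stack; kernels, routers, and middleboxes each have their own versions of the same hard-coded choices.

    import ipaddress

    # The same "technical debt" choices, frozen into Python's standard library
    for addr in ["240.0.0.1", "127.0.0.1", "192.0.2.10", "8.8.8.8"]:
        ip = ipaddress.ip_address(addr)
        print(f"{addr:>12}  reserved={ip.is_reserved}  loopback={ip.is_loopback}  private={ip.is_private}")

240.0.0.1 comes back reserved, 127.0.0.1 comes back loopback, and 192.0.2.10 from the documentation range comes back private. Every one of those answers would have to change, library by library and device by device, before any reclaimed space was actually usable.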

Moreover, the plans by Let’s Encrypt and others seeking to implement the Schoen draft in the IETF are ignoring a very simple truth. If we’re going to spend all this time and energy rewriting half the networking stacks in use today, why aren’t we using that energy to implement IPv6 support universally instead? It’s already a requirement if you interface with the US federal government. Why are we going to spend so much time fixing a problem we know is broken and not scalable in the future? Is it because we’re holding on to some idea from the past that IP addresses should be four octets? Is it because we want to exhaust every possible resource before being dragged into the IPv6 future? Or is it because, after all these years, we don’t want to admit that a better solution exists and we’re just ready to move to the next version of IP after IPv6 and we don’t want to say it out loud?


Tom’s Take

Technical debt is the result of decisions we made at the time to do what needed to be done. It’s not malicious or even petty. It’s usually what had to happen and now we have to live with it. Instead of looking at technical debt as a crushing weight we need to decide how to best fix it when necessary. Like the NANP changes, we can modify things to make them work better. However, we also need to examine how much work we’re doing to perpetuate something that needs to be rewritten and why we choose to do it the way we do. Phone numbers are going to persist for a long time. IP addresses aren’t. Let’s fix the problem the right way instead of the comfortable way.

Mind the Air Gap

I recently talked to some security friends on a CloudBytes podcast recording that will be coming out in a few weeks. One of the things that came up was the idea of an air gapped system or network that represents the ultimate in security. I had a couple of thoughts that felt like a great topic for a blog post.

The Gap is Wide

I can think of a ton of classical air gapped systems that we’ve seen in the media. Think about Mission: Impossible and the system that holds the NOC list:

Makes sense right? Totally secure unless you have Tom Cruise in your ductwork. It’s about as safe as you can make your data. It’s also about as inconvenient as you can make your data too. Want to protect a file so no one can ever steal it? Make it so no one can ever access it! Works great for data that doesn’t need to be updated regularly or even analyzed at any point. It’s offline for all intents and purposes.

Know what works great as an air gapped system? Root certificate authority servers. Even Microsoft agrees. So secure that you have to dig it out of storage to ever do anything that requires root. Which means you’re never going to have to worry about it getting corrupted or stolen. And you’re going to spend hours booting it up and making sure it’s functional if you ever need to do anything with it other than watch it collect dust.

Air gapping systems feels like the last line of defense to me. This data is so important that it can never be exposed in any way to anything that might ever read it. However, the concept of a gap between systems is more appealing from a security perspective. Because you can create gaps that aren’t physical in nature and accomplish much of the same idea as isolating a machine in a cool vault.

Gap Insurance

By now you probably know that concepts like zero-trust network architecture (ZTNA) are the key to isolating systems on your network. You don’t remove them from being accessed. Instead, you restrict their access levels. You make users prove they need to see the system instead of just creating a generic list. If you think of the ZTNA system as a classification method you get a better sense of how it works. It’s not just that you have clearance to see the system or the data. You also need to prove your need to access it.

Building on these ideas of ZTNA and host isolation, you can create virtual air gaps that work effectively without the need to physically isolate devices. I’ve seen some crazy stories about IDS sensors that have the transmit pairs of their Ethernet cables cut so they don’t inadvertently trigger a response to an intruder. That’s the kind of crazy preparedness that creates more problems than it solves.

Instead, by creating robust policies that prevent unauthorized users from accessing systems you can ensure that data can be safely kept online as well as making it available to anyone that wants to analyze it or use it in some way. That does mean that you’re going to need to spend some time building a system that doesn’t have inherent flaws.

In order to ensure your new virtual air gap is secure, you need to use policy instead of manual programming. Why? Because a well-defined policy is more restrictive than just randomly opening ports or creating access lists without context. It’s more painful at first because you have to build the right policy before you can implement it. However, a properly built policy can be automated and reused instead of just pasting lines from a text document into a console.

If you want to be sure your system is totally isolated, start with the basics. No traffic to the system. Then you build out from there. Who needs to talk to it? What user or class of users? How should they access it? What levels of security do you require? Answering each of those questions also gives you the data you need to define and maintain policy. By relying on these concepts you don’t need to spend hours hunting down user logins or specific ACL entries to lock someone out after they leave or after they’ve breached a system. You just need to remove them from the policy group or change the policy to deny them. Much easier than sweeping the rafters for super spies.
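
Here is a deliberately tiny sketch of what that default-deny, group-driven policy can look like in Python. The group names, posture labels, and target names are hypothetical, and real ZTNA platforms have their own schemas; the point is the shape of the logic, not any particular product.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_group: str       # e.g. "db-admins"
        device_posture: str   # e.g. "managed-and-patched"
        target: str           # e.g. "billing-db"

    # Start from "no traffic to the system" and only add what the questions above justify
    POLICY = {
        "billing-db": [
            {"user_group": "db-admins", "device_posture": "managed-and-patched"},
        ],
    }

    def allowed(req: AccessRequest) -> bool:
        # Anything not explicitly listed stays denied; that's the virtual air gap
        for rule in POLICY.get(req.target, []):
            if rule["user_group"] == req.user_group and rule["device_posture"] == req.device_posture:
                return True
        return False

    print(allowed(AccessRequest("db-admins", "managed-and-patched", "billing-db")))  # True
    print(allowed(AccessRequest("help-desk", "managed-and-patched", "billing-db")))  # False

Offboarding someone or responding to a breach then becomes a matter of pulling them out of the group or editing one policy entry, rather than hunting through device configs for stray ACL lines.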


Tom’s Take

The concept of an air gapped system works for spy movies. The execution of one needs a bit more thought. Thankfully we’ve graduated from workstations that need biometric access to a world where we can control the ways that users and groups can access things. By thinking about those systems in new ways, treating them like isolated islands that must be accessed on purpose instead of building an overly permissive infrastructure in the first place, you’ll get ahead of a lot of old technical debt-laden thinking and build a more secure enterprise, whether on premises or in the cloud.

Broadening Your Horizons, or Why Broadcom Won’t Get VMware

You might have missed the news over the weekend that Broadcom is in talks to buy VMware. As of right now this news is still developing so there’s no way of knowing exactly what’s going to happen. But I’m going to throw my hat into the ring anyway. VMware is what Broadcom really wants and they’re not going to get it.

Let’s break some of this down.

Broad Street

Broadcom isn’t just one of the largest chip manufacturers on the planet. Sure, they make networking hardware that goes into many of the products you buy. Yes, they make components for mobile devices and access points and a whole host of other things, including the former Brocade fibre channel assets. So they make a lot of chips.

However, starting back in November 2018, Broadcom has been focused on software acquisitions. They purchased CA Technologies for $19 billion. They bought Symantec the next year for $10 billion. They’re trying to assemble a software arm to work along with their hardware aspirations. Seems kind of odd, doesn’t it?

Ask IBM how it feels to be the dominant player in mainframes. Or any other dominant player in a very empty market. It’s lonely and boring. And boring is the exact opposite of what investors want today. Mainframes and legacy computing may be the only thing keeping IBM running right now. And given that Broadcom’s proposed purchase of Qualcomm was blocked a few years ago you can see that Broadcom is likely at the limit of what they’re going to be able to do with chipsets.

Given the explosion of devices out there you’d think that a chip manufacturer would want to double and triple down on development, right? Especially given the ongoing chip shortage. However, you can only iterate on those chips so many times. There’s only so much you can squeeze before you run out of juice. Ask Intel and AMD. Or, better yet, see how they’ve acquired companies to diversify into things like FPGAs and ARM-based DPUs. They realize that CPUs alone aren’t going to drive growth. There have to be product lines that will keep the investor cash flowing in. And that doesn’t come from slow and steady business.

Exciting and New

VMware represents a huge potential business arena for Broadcom. They get to jump into the data center with both feet and drive hybrid cloud deployments. They can be a leader in the on-prem software market overnight with this purchase. Cloud migrations take time. Software needs to be refactored. And that means running it in your data center for a while until you can make it work in the cloud. You could even have software that is never migrated for technical or policy-based reasons.

However, that very issue is going to cause problems for VMware and Broadcom. Is there a growth market in the enterprise data center? Do you see companies adding new applications and resources to their existing data centers? Or do you see them migrating to the cloud first? Given the choice between building more compute clusters in the existing hybrid data center or developing for the cloud from the start, do you imagine companies are going to choose the former?

To me, the quandary of VMware isn’t that different from the one faced by IBM. Yes, you can develop new applications to run on mainframes or on-prem data centers. But you can also not do that too. Which means you have to persuade people to use your technology. Maybe it’s for ease-of-management for existing operations teams. Could be an existing relationship that allows you to execute faster. But no matter what choice you make you’re incurring some form of technical debt when you choose to use existing technology to do something that could also be accomplished with new ideas.

Know who else hates the idea of technical debt and slow, steady growth? That’s right, investors. They want flashy year-over-year numbers every quarter to prove they’re right. They want to see that stock price climb so they can eventually sell it and invest in some other growth market. Gone are the days when people were proud to own a bit of a company and see it prosper over time. Instead, investors have a vision that lasts about 90 days and is entirely focused on the bottom line. If you aren’t delivering growth you don’t have value. It’s not even about making money any more. You have to make more all the time.

The Broad Elephant

The two biggest shareholders of VMware would love to see it purchased for a ton of money. Given the current valuation north of $40 billion, Dell and Silver Lake would profit handsomely from a huge acquisition. That could be used to pay down even more debt and expand the market for Dell solutions. So they’re going to back it if the numbers look good.

The other side of this equation is the rest of the market that thinks Broadcom’s acquisition is a terrible idea. Twitter is filled with analysts and industry experts talking about how terrible this would be for VMware customers. The fact that Symantec and CA haven’t really been making news recently tends to lend credence to that assessment.

The elephant in the room is what happens when customers don’t want VMware to sell? Sure, if VMware is allowed to operate as an independent entity like it was during the EMC Federation days things are good. They’ll continue to offer support and make happy customers. However, there is always the chance something will change down the road and force the status quo to change. And that’s the thing no one wants to talk about. Does VMware reject a very good offer in favor of autonomy? They just got out from under the relationship with Dell Technologies that was odd to say the least. Do they really want to get snapped up right away?


Tom’s Take

My read on this is simple but likely too simple. Broadcom is going to make a big offer based on stock price so they can leverage equity. The market has already responded favorably to the rumors. Dell and Silver Lake will back the offer because they like the cash to change their leverage situation. Ultimately, regulators will step in and decide the deal. I’m betting the regulators will say “no” like they did with Qualcomm. When the numbers are this big there are lots of eyeballs on the deal.

Ultimately Broadcom may want VMware and the biggest partners may be on board with it. But I think we’re going to see it fall apart because the approvals will either take too long and the stock price will fall enough to make it no longer worth it or the regulators will just veto the deal outright. Either way we’re going to be analyzing this for months to come.

Quality To Be Named Later


First off, go watch this excellent video from Ken Duda of Arista at Networking Field Day 28. It’s the second time he’s knocked it out of the park when it comes to talking about code quality:

One of the things that Ken brings up in this video that I thought would be good to cover in a bit more depth is the idea of what happens to the culture of your organization, specifically code quality, when you acquire a new company. Every team goes through stages of development from formation through disagreement and finally to success and performance. One of the factors that can cause a high-performing team to regress back to a state of challenges is adding new team members to the group.

Let’s apply this lesson to your existing code infrastructure. Let’s say you’ve spent a lot of time building the best organization you can, one that has things figured out, and your dev teams are running like a well-oiled machine. You’re pushing out updates left and right and your users are happy. Then you buy a company to get a new feature or add some new blood to the team. What happens when that new team comes on board? Are they going to integrate into what you’ve been doing all this time? Do they have their own method for doing development? Are they even using the same tools that you have used? How much upheaval are you in for?

Buying Your Way To Consistency

Over a decade ago, United Airlines bought Continental Airlines. Two years later, the companies finally merged their ticketing systems. Well, merged might be a bit of a stretch. United effectively moved all their reservations to the Continental system and named the whole thing United. There are always challenges with these kinds of integrations but one might think that part of the reason for the acquisition was to move to a more modern reservation system.

United’s system, called Apollo, was built in the 1970s. How could they move to a more modern system? Was the reason for the huge purchase of another airline directly related to their desire to adopt a newer, more flexible reservation system? There have certainly been suggestions of that for a number of years. But, more importantly, they also saw the challenges faced by one of their Star Alliance partners, US Airways, in 2007 when it tried a different approach to merging booking systems. The two radically different code bases clashed and created issues. And that’s for something as simple as an airline reservation system!

In the modern day we have much more control over the way that code is developed. We know the process behind what we do and what we write. When we build something we control it the entire way. However, that is true of everyone that writes code. And even with a large number of “best practices” out there no two developers are going to approach the problem the same way unless they work for the same company. So when you bring someone on board to your team through acquisition you’re also bringing in their processes, procedures, and habits. You have to own what they do because now their development quirks are part of your culture.

Bracing For the Impact

There’s a lot of due diligence that happens when companies are purchased. There’s an army of accountants that pore over the books and find all the potential issues. I’d argue that any successful merger in today’s world also needs to include a complete and thorough code review as well. You need to know how their culture is going to integrate into what you’re doing. Never mind the intellectual property issues. How do they name variables? Do they call the same memory allocation routines? Do they use floats instead of integers because that’s how they were taught? What development tools do they use and can those tools adapt to your workflow?

It may sound like I’m being a bit pedantic when I talk about variable naming being a potential issue, but when it comes to code that is not the case. You’re going to have to train someone in your procedures and you need to know who that is before they start committing code to your codebase. Those little differences are going to create bugs. Those bugs are going to creep into what you’re working on and create even more problems. Pretty soon something as simple as trying to translate from one IDE to another is going to create a code quality problem. That means your team is going to spend hours solving an issue that could have been addressed up front by figuring out how things were done in two different places. If you think that’s crazy, remember that NASA lost a Mars orbiter over a unit conversion problem.
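
That NASA failure is worth sketching because it’s the same class of bug an acquisition can introduce: two teams with different conventions feeding each other numbers. This toy Python example is illustrative only; the function names and values are made up and the real flight software was far more involved.

    # One team works in newton-seconds, the acquired team works in pound-force seconds
    LBF_S_TO_N_S = 4.448222  # one pound-force second expressed in newton-seconds

    def legacy_thruster_impulse() -> float:
        # Acquired codebase: returns pound-force seconds, but nothing in the
        # name or the type system says so
        return 100.0

    def plan_burn(impulse_newton_seconds: float) -> float:
        # Host codebase: expects newton-seconds
        return impulse_newton_seconds * 0.01  # stand-in for the real trajectory math

    # The silent bug: passing lbf*s where N*s is expected, off by roughly 4.45x
    wrong = plan_burn(legacy_thruster_impulse())

    # The boring fix that an up-front code and convention review is supposed to force
    right = plan_burn(legacy_thruster_impulse() * LBF_S_TO_N_S)

    print(wrong, right)

Nothing crashes, nothing throws an exception, and every line looks reasonable in isolation. That’s why the review has to happen before the merge, not after the anomaly shows up in production.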


Tom’s Take

You may never find yourself in the shoes of someone like Ken Duda. He’s committed to quality software and also in charge of trying to integrate acquisitions with his existing culture. However, you can contribute to a better software culture by paying attention to how things are done. If you do things a certain way you need to document everything. You need to ensure that if someone new comes into the team that they can understand your processes quickly. That way they don’t spend needless hours troubleshooting problems that were lost in translation at the start of the process. Do the hard work up front so you aren’t calling people names later.

Friday Thoughts on the Full Stack

It’s been a great week at Networking Field Day 28, with some great presentations and even better discussions outside of the room. We recorded a couple of great podcasts around some fun topics, including the Full Stack Engineer.

Some random thoughts about that here before we publish the episode of the On-Premise IT Roundtable in the coming weeks:

  • Why do you need a full stack person in IT? Isn’t the point to have people that are specialized?
  • Why does no one tell the developers they need to get IT skills? Why is it more important for the infrastructure team to learn how to code?
  • We see full stack doctors, who are general practitioners. Why are there no full stack lawyers or full stack accountants?
  • If the point of having a full stack understanding is about growing non-tech skills why not just say that instead?
  • There’s value in having someone that knows a little bit about everything but not too much about any one thing. But that value is in having them in a supervisor role instead of an operations or engineering role. Do you want the full stack doctor doing brain surgery? Or do you want them to refer you to a brain surgeon?

Tom’s Take

Note that I don’t have all the answers and there are people that are a lot smarter than me out there that have talked a lot about the full stack engineering issue. My biggest fear is that this is going to become another buzzword, just like 10x engineer, that carries a lot of stigma about being less than useful while being a know-it-all. When the episode of the podcast comes out perhaps it will generate some good discussion on how we should be handling it.

Ease of Use or Ease of Repair


Have you tried to repair a mobile device recently? Like an iPad or a MacBook? The odds are good you’ve never even tried to take one apart, let alone put in replacement parts. Devices like these are notoriously difficult to repair because they aren’t designed to be fixed by a normal person.

I’ve recently wondered why it’s so hard to repair things like this. I can recall opening up my old Tandy Sensation computer to add a Sound Blaster card and some more RAM back in the 90s but there’s no way I could do that today, even if the devices were designed to allow that to happen. In my thinking, I realized why that might be.

Build to Rebuild

When you look at the way that car engine bays were designed in the 80s and 90s you might be surprised to see lots of wasted space. There’s room to practically crawl in beside the engine and take a nap. Why is that? Why waste all that space? Well, if you’re a mechanic that wants to get up close and personal with some part of the engine you want all the space you can find. You’d rather waste a little space and not have busted knuckles.

A modern engine isn’t designed to be repaired by amateurs. The engine may be mounted sideways or have replaceable parts put in places that aren’t easily accessed without specialized equipment. The idea is that professionals will have everything they need to remove the engine if necessary. Moreover, by making it harder for amateurs to try and fix the issues, you’re preventing them from doing something potentially problematic that you’ll have to fix later.

The same kind of thing applies to computing equipment. Whereas before you could open the case of a computer and try to fix the issues inside with a reasonable level of IT or electrical engineering knowledge you now find yourself faced with a compact device that needs specialized tools to open and even more specialized knowledge to fix. The average electrical engineer is going to have a hard time understanding the modern system-on-a-chip (SoC) designs when you disassemble your phone.

Usability Over Uniqueness

The question of usability is always going to come into play when you talk about devices. The original IBM PC was a usable design for the time it was built compared to mainframes. The iPhone is a usable design for the time when it was built as well. But compared to each other, which is the more usable design today?

Increasing usability often comes at the cost of flexibility. I was talking to someone at a fabrication lab about printer drivers for a laser etching machine. He said that installing the drivers for the device on Windows is easy but almost impossible on a Mac. I chuckled, because the difficulty of installing a non-standard printer on a Mac is part of the reason why it’s so easy to use other kinds of printers. The printing subsystem traded flexibility, like installing strange printers on a laptop, for usability, like Bonjour printer installation.

Those trade offs extend into the hardware. Anyone that worked on PCs prior to Windows 95 knows all about fun things like IRQ conflicts, DIP switch settings for COM port addresses, and even more esoteric problems. We managed to get rid of those in the modern computing era but along with that went the ability to customize the device. You no longer have to do the dance of finding an available IRQ but you also can’t install a device that would conflict with another through hardware trickery.

By creating devices for the average user focused on usability you have to sacrifice the flexibility that makes them easier to fix. If you knew you were constantly going to need to be tweaking the jets on a carburetor you’d make it easy to access, right? But if a modern fuel injection system never needs to be adjusted except by a specialized professional why would you make it accessible to anyone that doesn’t have tools? You can see this in systems that use proprietary screws to keep users out of the system or glue parts together to prevent easy access.


Tom’s Take

I miss the ability to open up my laptop and fix issues with the add-in cards. The tinkerer in me likes to learn about how a system works. But I don’t miss the necessity of going in and doing it. The usability of a system that “just works” is way more useful to me. It reminds me of my dad in some ways. He may have loved the ability to open up the engine and play with the carburetor, but he also had to do it frequently to make things “work right”. In some ways, removing our ability to repair things in the IT space has forced manufacturers to build devices that don’t need to be frequently repaired. It’s not the optimal solution for sure, but the trade off is more than worth it in my mind.

In Defense of Subscriptions

It’s not hard to see the world has moved away from discrete software releases to a model that favors recurring periodic revenue. Gone are the days of a bi-yearly office suite offering or a tentpole version of an operating system that might gain some features down the road. Instead we now pay a yearly fee to use the program or application and in return we get lots of new things on a somewhat stilted cadence.

There are a lot of things to decry about software subscription models. I’m not a huge fan of the way that popular features are put behind subscription tiers that practically force you to buy the highest, most expensive one because of one or two things you need that can only be found there. It’s a callback to the way that cable companies put their most popular channels together in separate packages to raise the amount you’re paying per month.

I’m also not a fan of the way that the subscription model is a huge driver for profits for investors. If your favorite software program doesn’t have a subscription model just yet you’d better hope they never take a big investment. Because those investors are hungry for profit. Profit that is easier to see on a monthly basis because people are paying a fee for the software monthly instead of having big numbers pop up every couple of years.

However, this post is about the positive aspects of software subscriptions. So let’s talk about a couple of them.

Feature Creatures

If you’ve ever complained that an application or a platform doesn’t have a feature that you really want to see you know that no software is perfect. Developers either put in the features they think you want to see or they wait for you to ask for them and then race to include them in the next release. This push-pull model dominates the way that traditional software has been written.

How do those features get added to software? They do have to be written and tested and validated before they’re published. That takes time and resources. You need a team to add the code and tools to test it. You need time to make it all work properly. In other words, a price must be paid to make it happen. In a large organization, adding those features can be done if there is sufficient proof that adding them will recoup the costs or increase sales, therefore justifying their inclusion.

How about if no one wants to pay? You don’t have to look far for this one. Most of the comments on a walled garden app store are constantly talking about how “this app would be great if it had this feature” or “needs to do “X” then it’s perfect!” but no one wants to pay to have those features developed. If the developer is already charging a minimum for the app or giving it away in hope of making up the difference through ad revenue or something, how can we hope to encourage them to write the feature we want if we’re not willing to pay for it?

This is also the reason why you’re seeing more features being released for software outside of what you might consider a regular schedule. Instead of waiting for a big version release to pack in lots of new features in the hopes of getting customers to pay for the upgrade developers are taking a different approach. They’re adding features as they are finished to show people the value of having the subscription. This is similar to the model of releasing the new episodes of a series on a streaming service weekly instead of all at once. You get people subscribed for longer periods instead of doing it only long enough to get the feature they’re looking for. Note that this doesn’t cover licensing that you must hold to use the feature.

Respect for the Classics

One other side effect of subscription licensing is support for older hardware. That may sound counterintuitive, but an older model of hardware is more likely to be supported later in life because there is no longer a need to force upgrades to justify hardware development.

Yes, there are always going to be releases that don’t support older hardware. If a feature needs a new chip or a new hardware capability, it’s going to be very difficult to port it back to a platform that doesn’t have the hardware and never will. It’s also harder to code the feature in such a way that it runs on hardware with varying degrees of support and implementation capabilities. The fact that an iPhone 7 and an iPhone 13 can run the same version of iOS is a huge example of this.

More to the point, the driver to force hardware upgrades is less impactful. If you’re writing a new app or new piece of software it’s tempting to make the new features run only on the latest version of the hardware. That practically forces the users to upgrade to take advantage of the new kit and new apps. It feels disingenuous because the old hardware may still have had a few more years of life in it.

Another factor here is that the added revenue from the subscription model allows developers to essentially “pay” to support those older platforms. If you’re getting money from customers to support the software, you have the resources you need to port it to older versions as well as continuing to support those older platforms with things like security updates. You may not get your favorite switch supported for a decade, but you also don’t have to feel like you’re being forced to upgrade every three years either.


Tom’s Take

I go back and forth with subscription models. Some tools and platforms need them. Those are the ones that I use every day and make lots of sense to me to keep up with. Others feel like they’re using the lure of more frequent feature releases to rope users into paying for things and making it difficult to leave. When you couple that with the drive of investors to have regular recurring revenue it puts companies into a tough spot. You need resources to build things and you need regular revenue to have resources. You can’t just hope that what you’re going to release is going to be popular and worry about the rest later. I don’t know that our current glut of subscription models is the answer to all our issues. However, it’s not the most evil thing out there.