The Blogging Mirror

Writing isn’t always the easiest thing in the world to do. Coming up with topics is hard, but so too is turning those topics into a blog post. I find myself getting briefings on a variety of subjects all the time, especially when it comes to networking. But translating those briefings into blog posts isn’t always straightforward. When I find myself stuck and ready to throw in the towel, I find it easy to think about things backwards.

A World Of Pure Imagination

When people plan blog posts, they often think about things in a top-down manner. They come up with a catchy title, then an amusing anecdote to open the post. Then they hit the main idea, find a couple of supporting arguments, and then finally they write a conclusion that ties it all together. Sound like a winning formula?

Except when it isn’t. How about when the title doesn’t reflect the content of the post? Or the anecdote or lead in doesn’t quite fit with the overall tone? How about when the blog starts meandering away from the main idea halfway through with a totally separate argument? Or when the conclusion is actually the place where the lede is buried like the Ark of the Covenant?

All of these things are artifacts of the creative process. We often brainstorm great ideas halfway through the process and it derails our train of thought. That leads us down tangents we never intended to go down and creates posts that aren’t thematic or even readable in some cases.

It happens all the time. In fact, even in writing this post I thought of a catchy title for a subject heading and had to move it when I was done because the heading didn’t fit the content of the section that followed. It’s okay to have the freedom to change that as soon as you see it, provided you have a plan for the rest of the post. And that’s where the key here comes into play.

Strike That, Reverse It

I find the easiest way to plan a blog post is to actually write it in reverse. Instead of thinking about things from a top-down method, I start off by thinking about things bottom-up. Literally.

  • Start From The End – It’s easiest to write the conclusion of your post first. After all, you’re just restating what you’ve been arguing or demonstrating in the post, right? So start with that. Use it as the main idea of your writing. Always refer back to it. If what you’ve typed doesn’t fit the tone of the conclusion, you either need to support it or cut it.
  • Support Your Conclusion – Now that you know what you’re going to be talking about, figure out how to support it. That means figuring out how to break your argument into paragraphs and logical sections. Note that even though you’re trying to optimize for reading on screens today, you still need to follow basic structure. Paragraphs have multiple sentences that support the main idea. Once you have two or three of those arguments, you’ve got support for your conclusion.
  • State The Topic – After you build your support for your conclusion then you can write the topic. After all, you just spent a lot of time spelling it all out. This paragraph at the top is where you state the purpose or theme of the post. Don’t worry about getting into too much detail here. That’s what the support is for. Your readers will get the idea by the time they get to the conclusion, which serves to wrap it all together.
  • Build Your Anecdote – If you are the type of writer that likes to open with an anecdote, much like a cold open in a drama, this is where you write it. Now that you’ve basically outlined the whole post you can tie your anecdote into the rest of the narrative. You don’t have to worry about building your discussion to support the really cool story. Because you’re adding the story at the end of the creative process you can guarantee that it’s going to fit.
  • Title Card – Now that you’ve written the post you can title it. This keeps you from making a title that doesn’t fit the narrative. It also allows the title to make a bit more sense in context. Either because you called the post something cute and catchy or because you made the most SEO optimized title in history to reap those sweet, sweet Google searches.

Tom’s Take

As you can see, posts are easier to write in reverse. When you think about things the opposite way from the restrictive methods of writing you’re much more free to express your creativity while also keeping yourself on track to make sure everything makes sense. Some people thrive in the realm of structure and can easily crank out a post from the top down. But when you find yourself stuck because you can’t tie everything together the right way try looking in a blogging mirror. The results will end up the same, but backwards might just be the way forward.


QoS Is Dead. Long Live QoS!

Ah, good old Quality of Service. How often have we spent our time as networking professionals trying to discern the archaic texts of Szigeti to learn how to make it work? QoS is something that seemed so necessary to our networks years ago that we would spend hours upon hours trying to learn the best way to implement it for voice or bulk data traffic or some other reason. That was, until a funny thing happened: QoS became useless to us.

Rest In Peace and Queues

QoS didn’t die overnight. It didn’t wake up one morning without a home to go to. Instead, we slowly devalued and destroyed it over a period of years. We did it by focusing on the things that QoS was made for and then marginalizing them. Remember voice traffic?

We spent years installing voice over IP (VoIP) systems in our networks. And each of those systems needed QoS to function. We took our expertise in the arcane arts of queuing and applied it to the most finicky protocols we could find. And it worked. Our mystic knowledge made voice better! Our calls wouldn’t drop. Our packets arrived when they should. And the world was a happy place.

That is, until voice became pointless. When people started using mobile devices more and more instead of their desk phones, QoS wasn’t as important. When the steady generation of delay-sensitive packets moved to LTE instead of the IP LAN, it wasn’t as critical to ensure that FTP and other protocols didn’t interfere with it. Even when people started using QoS on their mobile devices the marking was totally inconsistent. George Stefanick (@WirelesssGuru) found that Wi-Fi calling was doing some weird packet marking anyway.

So, without a huge packet generation issue, QoS was relegated to some weird traffic shaping roles. Maybe it was video prioritization in places where people cared about video? Or perhaps it was creating a scavenger class for traffic in order to get rid of unwanted applications like BitTorrent? But overall QoS languished as an oddity as more and more enterprises saw their collaboration traffic moving to be dominated by mobile devices that didn’t need the old dark magic of QoS.

QoupS de Gras

The real end of QoS came about thanks to the cloud. While we spent all of our time trying to find ways to optimize applications running on our local enterprise networks, developers were busy optimizing applications to run somewhere else. The ideas were sound enough in principle. By moving applications to the cloud we could continually improve them and push features faster. By having all the bits off the local network we could scale massively. We could even collaborate together in real time from anywhere in the world!

But applications that live in the cloud live outside our control. QoS was always bounded by the borders of our own networks. Once a packet was launched into the great beyond of the Internet we couldn’t control what happened to it. ISPs weren’t bound to honor our packet markings without an SLA. In fact, in most cases the ISP would remark all our packets anyway just to ensure they didn’t mess with the ISP’s ideas of traffic shaping. And even those were rudimentary at best given how well QoS plays with MPLS in the real world.

But cloud-based applications don’t worry about quality of service. They scale as large as you want. And nothing short of a massive cloud outage will make them unavailable. Sure, there may be some slowness here and there but that’s no worse than what you’d expect running a heavy application over your local LAN. The real genius of the cloud shift is that it forced developers to slim down applications and make them more responsive in places where they could be made to be more interactive. Now, applications felt snappier when they ran in remote locations. And if you’ve ever tried to use old versions of Outlook across slow links you know how critical that responsiveness can be.

The End is The Beginning

So, with cloud-based applications here to stay and collaboration all about mobile apps now, we can finally carve the tombstone for QoS right? Well, not quite.

As it turns out, we are still using lots and lots of QoS today in SD-WAN networks. We’re just not calling it that. Instead, we’ve upgraded the term to something more snappy, like “Application Visibility”. Under the hood, it’s not much different than the QoS that we’ve done for years. We’re still picking out the applications and figuring out how to optimize their traffic patterns to make them more responsive.

The key with the new wave of SD-WAN is that we’re marrying QoS to conditional routing. Now, instead of being at the mercy of the ISP link to the Internet we can do something else. We can push bulk traffic across slow cheap links and ensure that our critical business applications have all the space they want on the fast expensive ones instead. We can push our out-of-band traffic out of an attached 4G/LTE modem. We can even push our traffic across the Internet to a gateway closer to the SaaS provider with better performance. That last bit is an especially delicious piece of irony, since it basically serves the same purpose as Tail-end Hop Off did back in the voice days.
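
To make that idea concrete, here’s a minimal sketch of the kind of policy-driven path selection an SD-WAN edge might perform. The link names, application classes, and thresholds are all invented for illustration and don’t reflect any particular vendor’s implementation.

# Toy sketch of SD-WAN-style conditional routing: classify the application,
# then pick an uplink based on policy and measured link health.
# Link names, classes, and thresholds are invented for illustration only.

LINKS = {
    "mpls":      {"cost": "expensive", "latency_ms": 20, "loss_pct": 0.1},
    "broadband": {"cost": "cheap",     "latency_ms": 45, "loss_pct": 0.5},
    "lte":       {"cost": "metered",   "latency_ms": 60, "loss_pct": 1.0},
}

POLICY = {
    "voice":       ["mpls", "broadband"],   # latency-sensitive, gets the good link
    "saas":        ["broadband", "mpls"],   # head for the closest SaaS gateway
    "bulk":        ["broadband", "lte"],    # cheap links first
    "out_of_band": ["lte"],                 # management traffic off the main path
}

def pick_link(app_class: str, max_loss_pct: float = 2.0) -> str:
    """Return the first preferred link that currently meets the loss threshold."""
    for link in POLICY.get(app_class, ["broadband"]):
        if LINKS[link]["loss_pct"] <= max_loss_pct:
            return link
    return "broadband"  # fall back to the default Internet path

print(pick_link("voice"))  # -> mpls, as long as the link is healthy
print(pick_link("bulk"))   # -> broadband

The decision that used to live in a queueing policy now lives in the routing policy: pick the path that fits the application, and the link choice does most of the heavy lifting.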

And how does all this magical new QoS work on the Internet outside our control? That’s the real magic. It’s all tunnels! Yes, in order to make sure that we get our traffic where it needs to be in SD-WAN we simply prioritize it going out of the router and wrap it all in a tunnel to the next device. Everything moves along the Internet and the hop-by-hop treatment really doesn’t care in the long run. We’re instead optimizing transit through our network based on other factors besides DSCP markings. Sure, when the traffic arrives on the other side it can be optimized based on those values. However, in the real world the only thing that most users really care about is how fast they can get their application to perform on their local machine. And if SD-WAN can point them to the fastest SaaS gateway, they’ll be happy people.
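
If it helps to see the tunnel point spelled out, here’s a tiny sketch of how an inner packet’s DSCP marking rides along inside a tunnel while transit routers only ever look at the outer header. The field names are simplified stand-ins, not any real encapsulation format.

# Simplified sketch: transit only sees the outer tunnel header, so the inner
# DSCP marking survives untouched until the far side decapsulates it.

def encapsulate(inner_packet: dict, tunnel_dest: str, outer_dscp: int = 0) -> dict:
    """Wrap the original packet; the Internet only routes on the outer header."""
    return {
        "outer": {"dst": tunnel_dest, "dscp": outer_dscp},
        "payload": inner_packet,  # inner header, markings and all, is opaque in transit
    }

def decapsulate(tunneled: dict) -> dict:
    """The far-side device unwraps the packet and can honor the original marking."""
    return tunneled["payload"]

voice = {"dst": "10.1.1.10", "dscp": 46}              # EF-marked voice packet
wire = encapsulate(voice, tunnel_dest="203.0.113.7")
print(decapsulate(wire)["dscp"])                      # -> 46, preserved end to end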


Tom’s Take

QoS suffered the same fate as Ska music and NCIS. It never really went away even when people stopped caring about it as much as they did when it was the hot new thing on the block. Instead, the need for QoS disappeared when our traffic usage moved away from the usage it was designed to augment. Sure, SD-WAN has brought it back in a new form, QoS 2.0 if you will, but the need for what we used to spend hours of time doing with ancient tomes of knowledge is long gone. We should have a quiet service for QoS and acknowledge all that it has done for us. And then get ready to invite it back to the party in the form that it will take in the cloud future of tomorrow.

Silo 2: On-Premise with DevOps

I had a great time stirring up the hornet’s nest with the last post about DevOps, so I figured that I’d write another one with some updated ideas and clarifications. And maybe kick the nest a little harder this time.

Grounding the Rules

First, we need to start out with a couple of clarifications. I stated that the mantra of DevOps was “Move Fast, Break Things.” As has been rightly pointed out, this was a quote from Mark Zuckerberg about Facebook. However, as has been pointed out by quite a few people, “The use of basic principles to enable business requirements to get to production deployments with appropriate coordination among all business players, including line of business, developers, classic operations, security, networking, storage and other functional groups involved in service delivery” is a bit more of a definition than a motto.

What exactly is DevOps then? Well, as I have been educated, it’s a principle. It’s an idea. A premise, if you will. An ideal to strive for. So, to say that someone is on a DevOps team is wrong. There is no such thing as a classic DevOps team. DevOps is instead something that many other teams do in addition to their other jobs.

That being said, go ask someone what their job is in an organization. I’m willing to bet that a lot of people will tell you they’re on the “DevOps Team”. I know this because someone did a report, which I wrote about here, and it includes responses from the “DevOps” team. Which, according to the classic definition, is wrong. Right?

Well, almost. See, this is where a tweet of mine comes into play.

“Pure” DevOps is hard to manage. It involves organizational shifts. It pisses people off because it’s hard to track metrics. You can’t track a person that does some traditional stuff and some of that new Dev-Op stuff. Where does that part of their job end up on a report? Putting someone in a team or a silo is almost as much for the purposes of managing that person as it is for them to do their job. If I put you in a silo, I know what you do. Or, at the very least, I can assign you tasks and responsibilities that you should be doing and grade you on those. If your “silo” is a principle and not a team, it’s crazy to grade the effectiveness of how you integrated with the developers to deliver services effectively. It can be tracked, but not as easily as a checkbox.

Likewise, people fear change. So, instead of putting their people into roles that cross functional barriers and reorganize the workflows, they instead just take the young people that are talking about the “new way” of doing things and put them in a team together. They slap a DevOps sign on the door and it’s done. We do DevOps now. Or, worse yet, they take the old infrastructure teams, move a few people off of them into a new team, and tell them to figure out what to do while they’re repainting the team name on the door. This has rightly been called “DevOps Washing” by a lot of people.

But what happens when that team starts Devving the Ops? Do they look at the enshrined principles of The Holy Book of DevOps and start trying to change organizational culture a little bit at a time to get the happy ending from The Phoenix Project? Do they eliminate the Brents of the world and give the security teams peace of mind?

Or, do they carve out their own little fiefdoms and start behaving like an integrated team with responsibilities and politics? Do they do things like deploy new projects to the cloud with little support from other teams, with the idea that they now “own” that workflow and can control how it’s used and how their team is viewed? If you read the article above with the report from Veriflow, you’ll find that a lot of organizations are seeing this second behavior.

Just as much as people fear proper change, they also get greedy in their new roles and want to be important to the business. And taking ownership of all the new initiatives, like cloud development, is a great way to be important. And, as much as The Phoenix Project preaches that security should be integrated into the DevOps workflow, you still have half of the 330 respondents to the above survey saying there is an increase in security threats to their new initiatives in public cloud.

Redefining DevOps

In a way, this “definition” of DevOps is like the title of this post. I’m sure more than a few of you bristled at the use of on-premise. Because, in today’s IT landscape we’re fighting a losing battle against a premise. When you refer to something as happening in a location, you say “on-premises”. If you say “on-premise”, you should be referring to an idea or concept. And yet, so many people in Silicon Valley say “on-premise” when referring to “on site” or “on location”. It’s grammatically wrong. But it sounds hip. It’s not the classical definition of the word and yet that word is slowly being redefined to mean what people are using it to mean. It literally happened with “literally”.

For those railing against the DevOps Washing that’s going on, ask yourself this question: Why? If the pure principles of DevOps are so much better and easier, why is everyone just slapping DevOps on existing teams or reforming other people into teams and running with the DevOps idea instead of following the rules as laid down by the sacred DevOps texts?

It could be that all organizations that are doing it this way are wrong. But are there more organizations doing it the proper way? Or is the lazy way more prevalent? I don’t know the answer, but given the number of products I see aimed at “the DevOps team” or the number of people that have given me feedback about how their organization’s DevOps teams display the same behaviors I talked about in my other blog post, I’d say there are more bad apples than purists out there.

So, what does this all mean for DevOps? Are we going to go on pointing and laughing at the DevOps-In-Name-Only crowd? Are we going to silently moan about how Real DevOps doesn’t happen and that we need to stay pure to the ideals? Or are we going to step back and realize that, just like every other technology or organizational shift that has ever occurred, nothing really gets implemented in its purest form? Instead of complaining that those not doing it the “proper” way are wrong, let’s examine why things get done the way they do and figure out how to fix it.

If businesses are implementing DevOps teams to execute the things they need done, find out why it has to be a dedicated team. Maybe they’re doing it wrong, or maybe they’ve stumbled across something that wasn’t included in the strictest definitions of DevOps. If people are giving work to those teams to accomplish and excluding other functional teams at the same time, don’t just wag your finger at them and tell them that’s not the “right way”. Find out what enabled that team to violate the ideas in the first place. Maybe the DevOps Team is responsible for all cloud deployments. Maybe they want some control over things instead of just a nebulous connection to an ideal.


Tom’s Take

DevOps in theory is a great thing. DevOps as presented in The Phoenix Project is a marvelous idea. But we all know that when theory meets reality, what we get is something different than we expected. It’s not unlike von Moltke’s famous quote, “No plan survives first contact with the enemy.” In theory, DevOps is pure and works like it should. But we’re seeing practice differing greatly from theory. The results are usually the same but the paths are radically different. And for the purists out there, if you don’t want DevOps to suffer the same fate as on-premise, you need to start asking yourself the same hard questions we are supposed to ask organizations as they start to deploy these ideas.

DevOps is a Silo

Silos are bad. We keep hearing how IT is too tribal and broken up into teams that only care about their swim lanes. The storage team doesn’t care about the network. The server teams don’t care about the storage team. The network team is a bunch of jerks that don’t like anyone. It’s a vicious cycle of mistrust and playground cliques.

Except for DevOps. The savior has finally arrived! DevOps is the silo-busting mentality that will allow us all to get with the program and get everything done right this time. The DevOps mentality doesn’t reinforce teams or silos. It focuses on the only pure thing left in the world – committing code. The way of the CI/CD warrior. But what if I told you that DevOps was just another silo?

Team Players

Before the pitchforks and torches come out, let’s examine why IT has been so tribal for so long. The silo mentality came about when we started getting more specialized with regards to infrastructure. Think about the original compute resources – mainframes. There weren’t any silos with mainframes because everyone pretty much had to know what they were doing with every part of the system. Everything was connected to the mainframe. The mainframe itself was the silo.

When we busted the mainframe apart and started down the road of client/server computing the hardware started becoming more specialized. Instead of one giant machine we had lots of little special machines everywhere. The more we deconstructed the mainframe, the more we needed to focus. The direct-attached storage became NAS and eventually SAN. The computer got bigger and bigger and eventually morphed into a virtualized hypervisor. The network exists to connect everything to the rest of the world, and as technology wore on the network became the transport for the infrastructure to talk to everything else.

Silos exist because you had to have specialized knowledge to operate your specialized infrastructure. Sure, there could be some cross-training at the lower levels of administration. But once you got into really complex topics like disk geometry optimization or route redistribution, the ability for a layperson to understand what was going on was shot. Each silo exists to reinforce its own infrastructure. Each silo has its norms and its schedules. The storage team will never lose data. The network always has to be available.

Even as these silos got crammed together and subsumed into new job roles, the ideas behind them stayed consistent. Some of the storage admin’s job roles combined with the virtualization team to be some kind of a hybrid. The networking team has been pushed to adopt more agile development methodologies like automation and orchestration. Through it all, the silos were coming down as people pushed the teams to embrace more software focused on the infrastructure. That is, until DevOps burst onto the scene.

OpSilo

The DevOps tribe has a mantra: “Move Fast. Break Things. Ship. Ship. SHIP!” Maybe not those exact words but something very similar. DevOps didn’t come from mainframes. It didn’t even come from the early days of client/server. DevOps grew out of a time when everything was blown apart and on the verge of being moved into the cloud. These new DevOperators didn’t think about infrastructure as a team or a tribe. Instead, it was an impediment to shipping code.

When you work in software, moving fast and breaking things works. Because you’re pushing the limits of what you can do. You’re focused on features. You want new shiny things. Stability can wait as long as the next code commit is right around the corner. Who cares about what you’ve been doing?

In order to have the best experience with Software X, please turn on Automatic Updates so we can push the code as fast as our commits will allow.

Sound familiar? Who cares about disk geometry or route reflectors. Make my stuff work! Your infrastructure supports all my awesome code. I write the stuff that pays your salary. This place would be out of business if it wasn’t for me!

Granted that’s a little extreme, but the mentality is the same. Infrastructure exists to be consumed. IT is there to support the mission of Moving Fast, Breaking Things, and Shipping. It’s almost like a tribal behavior. Everyone has the same objective – ALL THE COMMITS!

Move fast and break things is the exact opposite of the storage and networking teams. You really don’t want to be screaming along at 800Mph when deploying a new SAN or trying to get iBGP stood up. You want careful. Calm. Collected. You’re working with a whole system that’s built on a house of cards. Unlike DevOps, breaking a thing in a SAN or on the edge of a network could impact the entire system, not just one chat module.

That’s why networking and storage admins are so methodical. I harken back to some of my days in network engineering. When the network was running slow or the storage array was taxed, it took time to get data back. People were irritated but they got used to the idea of slowness. But if those systems ever went down, it was all-hands-on-deck panic! Contrast that with the mentality of the DevOps tribe. Who cares if it’s kind of broken right now? We need to ship the next feature or patch.

DevOps isn’t a silo buster. It’s just a different kind of tribal silo. The DevOps folks all have similar mentalities and view infrastructure in the same way. Cloud appeals to them because it minimizes infrastructure and gives them the tools they need to focus on developing. Cloud sprawl can easily happen when planning doesn’t occur. When specialized groups get together and talk about what they need, there is a reduction in consumed resources. Storage admins know how to get the most out of what they have. They don’t just spin up another bucket and keep deploying.


Tom’s Take

If you treat DevOps like a siloed tribe you’ll find their behavior is much easier to predict and work with. Don’t look at them as a cross-functional solution to all your problems. Even if you deploy all your assets to the cloud you’re going to need specialized teams to manage them once the infrastructure grows too big to manage by movement. Specialization isn’t the result of bad planning or tribalism. Instead, those specialized teams developed because of the need for deeper understanding. Just like DevOps developed out of a need to understand rapid deployment and fast-moving consumption of infrastructure. In time, the next “solution” to the DevOps problem will come along and we’ll find as well that it’s just another siloed team.

Atmosic and the Power of RF?

I recently talked to a company doing some very interesting things in the mobility space and I thought I’d take a stab at writing about them. Most of my mobility posts are about access points or controller software or me just complaining in general about the state of Wi-Fi 6. But this idea had me a little intrigued. And confused.

Bluetooth Moon Rising

Atmosic is a company that is focusing on low-power chips, especially for IoT applications. Most of their team came from Atheros, which you may recall powers a ton of the reference architectures used in wireless APs from many, many AP manufacturers that don’t make their own chips. Their team has the chops to make good wireless stuff, one would think.

Atmosic wants to make IoT devices that use Bluetooth Low Energy (BLE). So far, this is sounding pretty good to me. I’ve seen a lot of crazy awesome ideas for BLE, like location tracking indoors or on-demand digital signage. Sure, there are some tracking issues that go along with that but it’s mostly okay. BLE is what the industry has decided to standardize on for a ton of IoT functionality.

How does Atmosic want to change things in the BLE space? Well, those Atheros chipset guys started out by building a chip that uses 5-10 times less power than before. That’s a staggering number when you think about it. BLE beacons already don’t use a ton of power. They’re designed to be used in concert with APs or with standalone, battery-powered devices. The BLE beacons I’ve seen from Aruba are about the size of the AirPods case. And that battery can last for a couple of years.

If Atmosic really did build a chip that can power those beacons with even 5x less power usage, you’re looking at a huge increase in the lifespan of a beacon! Imagine being able to deploy these things everywhere and have them run for a decade. You could literally cover a stadium or a hotel with them for next to nothing. Even if you included the chip in a new AP, which Atmosic is partnering to do, you could effectively run the BLE side of things for free from a power budget perspective.

This is something that is pretty big news. So why did I suddenly see things start sliding off the rails?

Unlimited Power!

The next part of the Atmosic pitch came when they told me about the other part of their trinity of power savings. On their technology page, they tout the above-mentioned chipset along with the special on-demand wake feature that allows the chips to be put into a deep sleep mode that will only awaken when it receives a special packet designed to rouse the chip like a custom alarm clock.

That third thing, though. Power harvesting. Now we’re starting to get into the real weeds of Wi-Fi stuff. Essentially, Atmosic is saying they can power their low-power BLE beacon indefinitely by harvesting power from RF in the air. Yeah, that’s right. They’re literally pulling nanoamps of power from remote power sources. Evidently, their power system is more reliable because they use known sources like 900MHz for coverage as opposed to just trying to pull the power from whatever happens to be around.

At this point, you’re probably saying one of two things:

  1. This is crap and it will never work.
  2. This is the most amazing thing ever!

Right now I tend to fall on the side of the first one. Why? Because if they really did invent a way to pull power from thin air, someone really should cut them a check because they need to be building bigger, badder everythings! Imagine being able to power whatever you wanted without clunky batteries or power cords. It would be a revolution!

Sadly, the reality is that the Atmosic trinity pretty much requires all three parts to be so revolutionary. I talked to a couple of my friends in the wireless industry about this and Jonathan Davis (@SubNetwork) was about as skeptical as I was. Since he’s a real math wizard, he figured out that the amount of power being pulled in by the Atmosic chips through the air has to be pretty tiny. Like below nanoamps. And that’s not enough to run an active BLE beacon.
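
For a sense of scale, here’s a rough back-of-the-envelope free-space estimate using the Friis equation. The transmit power, distance, and antenna gains are assumptions I picked for illustration, not Atmosic’s numbers, but they show why the harvestable budget sits orders of magnitude below what an actively transmitting radio needs.

import math

# Back-of-the-envelope free-space (Friis) estimate of harvestable RF power.
# The transmit power, distance, and gains below are illustrative assumptions only.

def friis_received_power_w(p_tx_w: float, freq_hz: float, dist_m: float,
                           g_tx: float = 1.0, g_rx: float = 1.0) -> float:
    """Friis transmission equation: Pr = Pt * Gt * Gr * (lambda / (4 * pi * d))**2."""
    wavelength = 3e8 / freq_hz
    return p_tx_w * g_tx * g_rx * (wavelength / (4 * math.pi * dist_m)) ** 2

# Assume a 1 W (30 dBm) 900MHz source 10 meters away with unity-gain antennas.
p_rx = friis_received_power_w(p_tx_w=1.0, freq_hz=900e6, dist_m=10)
print(f"{p_rx * 1e6:.1f} microwatts available")  # roughly 7 uW before conversion losses

# An actively transmitting BLE radio draws on the order of milliwatts,
# so harvesting alone falls thousands of times short of running one full-time.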

Building a Lower Powered Mousetrap

That’s where the whole system comes into play. It takes a very low power chip with a custom wake sensor (read: Passive Beacon) in order to be able to run on the kinds of power that you can draw from RF waves. And this is where the utility of the whole thing starts breaking down for me. Sure, you could do something crazy like put this on a piece of paper and “hide” it in the service tag of a piece of equipment like a laptop. Then you have a BLE beacon that can track that device even if it’s powered down. But you still need a way to excite the BLE chip and make it wake up. And, at this point, if you’re doing passive Bluetooth, is the solution really any better than a passive RFID tag that has the same lifespan? And is a lot cheaper?

The other issue that I have with this solution is the proposed longevity. Forever is the tag on the Atmosic site. For. Ev. Er. Sounds like a great idea in theory, right? Deploy a device in your network that can run forever on free energy and you never have to replace the batteries. Okay, that’s great. How old is your iPhone? Your laptop? Better yet, how old is the oldest piece of enterprise tech that you have on your person right now? I’d wager it can’t be more than 6 years old at this point.

Enterprises get chided for having old technology all the time. Maybe that laptop is 6 years old and still running. Perhaps those servers should have been decommissioned a refresh cycle ago. Compared to the mayfly lifespan of an iPhone, your average piece of enterprise tech is pretty long in the tooth. But not all Enterprise tech is that outdated. Take a look at wireless access points, for example. If you are running the oldest 802.11ac access point made it’s still just barely five years old, the standard having been ratified in December 2013. Most enterprises have already refreshed their 11ac Wave 1 APs. If they haven’t, they’re just holding off long enough for 802.11ax to maybe get certified this year so they can push out hot new hardware.

So, with 5-6 years as the standard for “old” technology in the enterprise, what on earth are we going to do with beacons that are a decade old? With the low-power chipset you’re already looking at a 5-7 year lifespan on current battery technology if it really does deliver 5x power savings. Even current BLE beacons are designed with a short lifespan for a reason. Technology changes very fast. If you try and keep that device stuck to the wall or a laptop for too long, it’s going to be out of sync with the rest of the tech around it.

Imagine trying to hook up a Bluetooth 2.x device to a current iPhone. It will work because the standards are there but it’s going to be painful because newer devices offer so much more functionality. Trying to keep devices around forever for the sake of doing it isn’t practical. And if you’re going to try and counter the argument by saying IoT devices can be around for quite a while you’re not going to win there either. Most IoT devices that are embedded for long term use wouldn’t use wireless or Bluetooth in the first place. They would be hardwired to cut down on potential points of failure. Sure, you might include something like this in the system, but it’s going to be powered enough already to not need to harvest power through the air.


Tom’s Take

I think the Atmosic people have the right idea for a baseline but their stretch goal is a bit lofty and sci-fi for my tastes. Sure, the idea of being able to harvest unlimited power from RF to run devices without batteries for years is great in theory. But technology demands for both enterprise tech and consumer/enterprise IoT devices are going to drive people to use the lowest common denominator of simplicity. I think that Atmosic has a lot of upside with these new super efficient chips. But I doubt we’re going to see anyone sucking power out of thin air any time soon.

Managing Automation – Fighting Fear of Job Justification

Dear Employees

 

We have decided to implement automation in our environment because robots and programs are way better than people. We will need you to justify your job in the next week or we will fire you and make you work in a really crappy job that doesn’t involve computers while we light cigars with dollar bills.

 

Sincerely, Management

The above letter is the interpretation of the professional staff of your organization when you send out the following email:

We are going to implement some automation concepts next week. What are some things you wish you could automate in your job?

Interpretations differ as to the intent of automation. Management likes the idea of their engineering staff being fully tasked and working on valuable projects. They want to know their people are doing something productive. And the people that aren’t doing productive stuff should either be finding something to do or finding a new job.

Professional staff likes being fully tasked and productive too. They want to be involved in jobs and tasks that do something cool or justify their existence to management. If their job doesn’t do that they get worried they won’t have it any longer.

So, where is the disconnect?

You Do Exist (Sort of)

The problem with these interpretations comes down to the job itself. Humans can get very good at repetitive, easy jobs. Assembly line workers, quality testers, and even systems engineers are perfect examples of this. People love to do something over and over again that they are good at. They can be amazing when it comes to programming VLANs or typing out tweets for social media. And those are some pretty easy jobs!

Consistency is king when it comes to easy job tasks. When I can do it so well that I don’t have to think about things any more I’ve won. When it comes out the same way every time no matter how inattentive I am it’s even better. And if it’s a task that I can do at the same time or place every day or every week then I’m in heaven. Easy jobs that do themselves on a regular schedule are the key to being employed forever.

Automatic For The Programs

Where does that sound more familiar in today’s world? Why, automation of course! Automation is designed to make easy, repeatable jobs execute on a schedule or with a specific trigger. When that task can be done by a program that is always at work and never calls in sick or goes on vacation you can see the allure of it to management. You can also see the fear in the eyes of the professional that just found the perfect role full of easy jobs that can be scheduled on their calendar.

Hence the above interpretation of the automation email sample. People fear change. They fear automation taking away their jobs. Yet, the jobs that are perfect for automation are the kinds of things that shouldn’t be jobs in the first place. Professionals in a given discipline are much, much smarter than just doing something repetitively over and over again like VLAN modifications or IP addressing of interfaces. More importantly, automation provides consistency. Automation can be programmed to pull from the correct database or provide the correct configuration every time without worry of a transcription mistake or data entry in the wrong field.
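
As a trivial sketch of what that consistency looks like, imagine rendering switchport configuration from a single source of truth so the output is identical on every run. The inventory and config syntax below are invented for illustration and aren’t tied to any particular vendor or automation tool.

# Minimal sketch: render access-port VLAN config from one source of truth so
# the output is the same every time. Inventory and syntax are illustrative only.

INVENTORY = {
    "access-sw-01": [
        {"interface": "GigabitEthernet1/0/1", "vlan": 10, "desc": "Finance"},
        {"interface": "GigabitEthernet1/0/2", "vlan": 20, "desc": "Engineering"},
    ],
}

def render_config(switch: str) -> str:
    """Build the interface stanzas for a switch straight from the inventory."""
    lines = []
    for port in INVENTORY[switch]:
        lines += [
            f"interface {port['interface']}",
            f" description {port['desc']}",
            " switchport mode access",
            f" switchport access vlan {port['vlan']}",
            "!",
        ]
    return "\n".join(lines)

print(render_config("access-sw-01"))  # identical output every run, no fat-fingered VLAN IDs

Run it a hundred times and you get the same answer a hundred times, which is exactly the consistency a tired human doing the same change by hand can’t promise.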

People want these kinds of jobs because they afford two important things: justification of their existence and free time at work. The former ensures they get to have a paycheck. The latter gives them the chance to find some kind of joy in their job. Sure, you have some kind of repetitive task that you need to do every day like run a report or punch holes in a sheet of metal. But when you’re not doing that task you have the freedom to do fun stuff like learn about other parts of your job or just relax.

When you take away the job with automation, you take away the cover for the relaxation part. Now, you’ve upset the balance and forced people to find new things to do. And that means learning. Figuring out how to make tasks easy and repetitive again. And that’s not always possible. Hence the fear of automation and change.

Building A Better Path To Automation

How do we fix this mess? How can we motivate people to embrace automation? Well, it’s pretty simple:

  1. Help Your Team See The Need – If your teams think they’re going to lose their jobs because of automation, they’re not going to embrace it. You need to show them that not only are they not going to lose their jobs but how automation will make their jobs easier and better. Remember to frame your arguments along the lines of removing mistakes and not needing to worry about justifying your existence in a role. That should encourage everyone to look for new challenges to overcome.
  2. Show the Value – This goes with the first part somewhat, but more than showing the need for automation with mistake reduction or schedule easing, you also need to show value. If a person has never made a mistake or has built their schedule around repetitive tasks they are going to hate automation. Show them what they can do now that their roles don’t have to focus on the old stuff they did. Help them look at where they can provide additional value. Even if it starts off by monitoring the automation platform to make sure it’s executing correctly. Maybe the value they can provide is finding new things to automate!
  3. Embrace the Future – Automation allows people to learn how to do new things. They can focus on new skills or roles that help support the business in a better way. More automation means more complexity to understand but also a chance for people to shine in new roles. The right people will see a challenge as something to be overcome. Help them set new goals. Help them get where they want to be. You’ll be surprised how quickly they will get there with the right leadership.

Tom’s Take

Automation isn’t going to steal jobs. It will force people to examine their tasks and decide how important they really are. The people that were covering their basic roles and trying to skate by are going to leave no matter what. Even if your automation push fails, these marginal people are going to leave for greener pastures thanks to the examination of what they’re actually doing. Don’t let the pushback discourage you in the short term. Automation isn’t the goal. Automation is the tool to get you to the true goal of a smoother, more responsive team that accomplishes more and can reach higher goals.

Certifications Are About Support

You may have seen this week that VMware has announced they are removing the mandatory recertification requirement for their certification program. This is a huge step from VMware. The VCP, VCAP, and VCDX are huge certifications in the virtualization and server industry. VMware has always wanted their partners and support personnel to be up-to-date on the latest and greatest software. But, as I will explain, the move to remove the mandatory recertification requirement says more about the fact that certifications are less about selling and more about supporting.

The Paper Escalator

Recertification is a big money maker for companies. Sure, you’re spending a lot of money on things like tests and books. But those aren’t usually tied to the company offering the certification. Instead, the testing fees are given to the testing center, like Pearson, and the book fees go to the publisher.

The real money maker for companies is the first-party training. If the company developing the certification is also offering the training courses you can bet they’re raking in the cash. VMware has done this for years with the classroom requirement for the VCP. Cisco has also started doing this with their first-party CCIE training. Cisco’s example also shows how quality first-party content can drive out the third parties in the industry by not even suggesting to prospective candidates that third parties are another option for their classroom materials.

I’ve railed against the VCP classroom requirement before. I think forcing your candidates to take an in-person class as a requirement for certification is silly and feels like it’s designed to make money and not make good engineers. Thankfully, VMware seems to agree with me in the latest release of info. They’re allowing the upgrade path to be used for their recertification process, which doesn’t necessarily require attendance in a classroom offering. I’d argue that it’s important to do so, especially if you’re really out of date with the training. But not needing it for certification is a really nice touch.

Keeping the Lights On

The other big shift with this certification change from VMware is the tacit acknowledgement that people aren’t in any kind of rush to upgrade their software right after the newest version is released. Ask any system administrator out there and they’ll tell you to wait for a service pack before you upgrade anything. System admins for VMware are as cautious as anyone, if not moreso. Too often, new software updates break existing functionality or cause issues that can’t be fixed without a huge time investment.

How is this affected by certification? Well, if I spent all my time learning VMware 5.x and I got my VCP on it because my company was using it you can better believe that my skill set is based around VCP5. If my company doesn’t decide to upgrade to 6.x or even 7.x for several years, my VCP is still based on 5.x technology. It shouldn’t expire just because I never upgraded to 6.x. The skills that I have are focused on what I do, not what I’m studying. If my company finally does decide to move to 6.x, then I can study for and receive my VCP on that version. Not before.

Companies love to make sure their evangelists and resellers are all on the latest version of their certifications because they see certifications as a sales tool. People certified in a technology will pick that solution over any others because they are familiar with it. Likewise, the sales process benefits from knowledgeable sales people that understand the details behind your solution. It’s a win-win for both sides.

What this picture really ignores is the fact that a larger number of non-reseller professionals are actually using the certification as a study guide to support their organization. Perhaps they get certified as a way to get better support terms or a quicker response to support calls. Maybe they just learned so much about the product along the way that they want to show off what they’ve been doing. No matter what the reason, it’s very true that these folks are not in a sales role. They’re the support team keeping the lights on.

Support doesn’t care about upgrading at the drop of a hat. Instead, they are focused on keeping the existing stuff running as long as possible. Keeping users happy. Keeping executives happy. Keeping people from asking questions about availability or services. That’s not something that looks good on a bill of materials. But it’s what we all expect. Likewise, support isn’t focused on new things if the old things keep running. Certification, for them, is more about proving you know something instead of proving you can sell something.


Tom’s Take

I’ve had so many certifications that I don’t even remember them all. I got some of them because we needed it to sell a solution to a customer. I got others to prove I knew some esoteric command in a forgotten platform. But, no matter what else came up, I was certified on that platform. Windows 2000, NetWare 6.x, you name it. I was certified on that collection of software. I never rushed to get my certification upgraded because I knew what the reality of things really was. I got certified to keep the lights on for my customers. I got certified to help the people that believed in my skills. That’s the real value of a certification to me. Not sales. Just keeping things running another month.