
About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer and current organizer for Tech Field Day. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

Don’t Touch My Mustache, Aruba!


It’s been a year since Aruba Networks became Aruba, a Hewlett-Packard Enterprise Company. It’s been an interesting ride for everyone involved so far. There’s been some integration between the HPE Networking division and the Aruba teams. There have been presentations and messaging and lots of other fun stuff. But it all really comes down to the policy of non-interference.

Don’t Tread On Me

HPE has done an admirable job of keeping their hands off of Aruba. It sounds almost comical. How many companies have acquired a new piece and then done everything possible to integrate it into their existing core business? How many products have had their identity obliterated to fit in with the existing model number structure?

Aruba isn’t just a survivor. It’s come out of the other side of this acquisition healthy and happy and with a bigger piece of the pie. Dominic Orr didn’t just get to keep his company. Instead, he got all of HPE’s networking division in the deal! That works out very well for them. It allows Aruba to help integrate the HPE networking portfolio into their existing product lines.

Aruba had a switching portfolio before the acquisition. But that was just an afterthought. It was designed to meet the insane requirements of the new Gartner Wired and Wireless Magic Quadrant. It was a product pushed out to meet a marketing need. Now, with the collaboration of both HPE and Aruba, the combined business unit has succeeded in climbing to the top of the mystical polygon and assuming a leading role in the networking space.

Could you imagine how terrible it would have been if instead of taking this approach, HPE had instead insisted on integration of the product lines and renumbering of everything? What if they had insisted that Aruba engineers, who are experts in their wireless field, were to become junior to the HPE wireless teams? That’s the kind of disaster that would have led to the fall of HPE Networking sooner or later. When good people get alienated in an acquisition, they flee for the hills as fast as their feet will carry them. One look at the list of EMC and VMware departures will tell you the truth of that.

You’re Very Welcome

The other thing that makes it an interesting ride is the way that people have reacted to the results of the acquisition. I can remember seeing how folks like Eddie Forero (@HeyEddie) were livid and worried about how the whole mess was going to fall apart. Having spoken to Eddie this week about the outcome one year later, he seems to be much, much more positive than he was in the past. It’s a very refreshing change!

Goodwill is something that is very difficult to replace in the community. It takes ages to earn and seconds to destroy. Acquiring companies that don’t understand the DNA of the company they have acquired run the risk of alienating the users of that solution. It’s important to take stock of how you are addressing your user base and potential customers regularly after you bring a new business into the fold.

HPE has done a masterful job of keeping Aruba customers happy by allowing Aruba to keep their communities in place. Airheads is a perfect example. Aruba’s community is a vibrant place where people share knowledge and teach each other how to best utilize solutions. It’s the kind of place that makes people feel welcome. It would have been very easy for HPE to make Airheads conform to their corporate policies and use their platforms for different purposes, such as a renewed focus on community marketing efforts. Instead, we have been able to keep these resources available to all to keep a happy community all around.


Tom’s Take

The title above actually holds a double meaning. You might think it refers to keeping your hands off of something. But “don’t touch my mustache” is a mnemonic phrase to help people remember the Japanese phrase dō itashimashite, which means “You’re welcome.”

Aruba has continued to be a leader in the wireless community and is poised to make waves in the networking community once more because HPE has allowed it to grow through a hands-off policy. The Aruba customers and partners should be very welcome that things have turned out as they have. Given the graveyard of failed company acquisitions over the years, Aruba and HPE are a great story indeed.

Slacking Off

A Candlestick Phone (image courtesy of Wikipedia)

There’s a great piece today on how Slack is causing disruption in people’s work habits. Slack is a program that has dedicated itself to getting rid of email, yet we now find ourselves mired in Slack team after Slack team. I believe the real issue isn’t with Slack but instead with the way that our brains are wired to handle communication.

Interrupt Driven

People get interrupted all the time. It’s a fact of life if you work in business, not just IT. Even if you have your head down typing away at a keyboard and you’ve closed out all other forms of distraction, a pop up from an email or a ringing or vibrating phone will jar your concentration out of the groove and force your brain to deal with this new intruder into your solitude.

That’s evolution working against you. When we were hunters and gatherers our brain had to learn how to deal with external threats when we were focused on a task like stalking a mammoth or looking for sprouts on the forest floor. Our eyes even evolved to take advantage of this. Your peripheral vision will pick up movement first, followed by color, then finally it can discern the shape of an object. So when your email notifier slides out from the system tray or notification window it triggers your primitive need to address the situation.

In the modern world we don’t hunt mammoths or forage for shoots any longer. Instead, our survival instinct has been replaced by the need to answer communications as fast as possible. At first it was returning phone calls before the end of the day. Then it became answering emails expediently. That changed into sending an immediate email response that you had seen the email and were working on a response. Then came instant messaging for corporate environments and the idea of “presence”, which allows everyone to know what you’re doing and when you’re doing it. Which has led us to ever-presence – the idea that we’re never really not available.

Think about the last time you saw someone marked as unavailable in a chat window and you sent the message anyway. Perhaps you thought they would see the message the next time they logged in or returned to their terminal. Or perhaps you guessed that they had set their status as away to avoid distraction. Either way, your first thought was that this person wasn’t really gone and was still available.

Instant messaging programs like Slack bridge the gap between synchronous communications channels like phone calls and asynchronous channels like email. In the past, we could deal with phone calls because they required the attention of both parties involved. A single channel was opened and it was very apparent that you were holding a conversation, at least until the invention of call waiting. On the other hand, email is asynchronous by nature because we can put all of our thoughts down in a single message over the course of minutes or even hours and send it into the void. Reliable delivery ensures that it will make it to the destination but we don’t know when it will be read. We don’t know when the response will come or in what form. The receiving party may not even read your message!

The Need to Please

Think back to the last time you responded to an email. How often did you start your response with “Sorry for the delay” or some version of that phrase? In today’s society, we’ve become accustomed to instant responses to things. Amy Lewis (@CommsNinja) is famous for having an escalation process for reaching her:

  1. Text message
  2. Twitter DM
  3. Email
  4. Phone Call
  5. Anything else
  6. Voice mail

She prefers instant communication and rapid response. In a lot of cases, this is very crucial. If you need an answer to a question quickly there are ways to reach people for immediate reply. But the desire to have immediate response for all forms of communication is a bit much.

Our brains don’t help us in this matter. When we get an email or a communication from someone, we feel compelled to respond to it. It’s like a checkbox that needs to be checked. And so we will drop everything else to work on a reply even if it means we’re displeasing someone for a short time to please someone immediately.

Many of the time management systems that have been created to deal with massive email flows, such as GTD, are centered on the idea of dealing with things as they come in and pigeonholing them until they can be dealt with appropriately. By treating everything the same you disappoint everyone equally until everything can be evaluated. There are cutouts for high priority communications, but the methods themselves tell you to keep those exceptions small and rare so as not to disrupt the flow of things.

The key to having coherent and meaningful conversations with other people is the same online as it is in person. Rather than speaking before you think, you should take the time to consider your thoughts and respond with appropriately measured words. It’s easier to do this via email since there is built-in delay but it works just the same in instant message conversations as well. An extra minute of thought won’t make someone angry with you, but not taking that extra minute could make someone very cross with you down the road.


Tom’s Take

I agree with people that say Slack is great for small teams spread everywhere to help categorize thoughts and keep projects on track. It takes away the need for a lot of status update emails and digests of communications. It won’t entirely replace email for communications and it shouldn’t be seen that way. Instead, the important thing to realize about programs like Slack is that they will start pushing your response style more toward quick replies with little information. You will need to make a conscious decision to push back a bit to make measured responses to things with more information and less response for the sake of responding. When you do you’ll find that instant messaging tools augment your communications instead of complicating them.

Drowning in the Data of Things


If you saw the news coming out of Cisco Live Berlin, you probably noticed that Internet of Things (IoT) was in every other announcement. I wrote about the impact of the new Digital Ceiling initiative already, but I think that IoT is a bit deeper than that. The other thing that seems to go hand in hand with discussion of IoT is big data. And for most of us, that big data is going to be a big problem.

Seen And Not Heard

Internet of Things is about dumb devices getting smart. Think Flowers for Algernon. Only now, instead of them just being smarter they are also going to be very talkative too. The amount of data that these devices used to hold captive will be unleashed on something. We assume that the data is going to be sent to a central collection point or polled from the device by an API call or a program that is mining the data for another party. But do you know who isn’t going to be getting that data? Us.

IoT devices are going to be talking to providers and data collection systems and, in a lot of cases, each other. But they aren’t going to be talking directly to end users or IT staff. That’s because IoT is about making devices intelligent enough to start making their own decisions about things. Remember when SDN came out and everyone started talking about networks making determinations about forwarding paths and topology changes without human inputs? Remember David Meyer talking about network fragility?

Now imagine that’s not the network any more. Imagine it’s everything. Devices talking to other devices and making decisions without human inputs. IoT gives machines the ability to make a limited amount of decisions based on data inputs. Factory floor running a bit too hot for the milling machines to keep up? Talk to the environmental controls and tell it to lower the temperature by two degrees for the next four hours. Is the shelf in the fridge where the milk is stored getting close to the empty milk jug weight? Order more milk. Did a new movie come out on Netflix that meets your viewing guidelines? Add that movie to the queue and have the TV turn on when your phone enters the house geofence.

Think about those processes for moment. All of them are fairly easy conditional statements. If this, then do that. But conditional statements aren’t cut and dried. They require knowledge of constraints and combinations. And all that knowledge comes from data.
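The rule shape those examples share can be sketched in a few lines. Everything here, the device names, thresholds, and adjustments, is a hypothetical illustration and not any vendor’s actual API:

```python
# A minimal sketch of the "if this, then do that" rules an IoT device
# might evaluate. All names, thresholds, and actions are hypothetical.

def check_factory_floor(temp_f, mill_max_temp_f=85):
    """If the floor runs too hot for the milling machines,
    ask environmental controls to cool it for a few hours."""
    if temp_f > mill_max_temp_f:
        return ("lower_temperature", {"delta_f": 2, "hours": 4})
    return None

def check_fridge_shelf(shelf_weight_g, empty_jug_weight_g=120):
    """If the milk shelf weight is within 10% of an empty jug, reorder."""
    if shelf_weight_g <= empty_jug_weight_g * 1.1:
        return ("order_milk", {})
    return None

# Each rule is trivial on its own, but every one depends on sensor data
# (the inputs) plus constraints (the thresholds) before it can fire.
actions = [a for a in (check_factory_floor(88), check_fridge_shelf(125)) if a]
```

The conditionals themselves are easy; supplying the thresholds and a steady stream of trustworthy readings is where all the data the paragraph mentions comes in.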

More Data, More Problems

All of that data needs to be collected somehow. That means transport networks are going to be stressed now that there are ten times more devices chatting on them. And a good chunk of those devices, especially in the consumer space, are going to be wireless. Hope your wireless network is ready for that challenge. That data is going to be transported to some data sink somewhere. As much as we would like to hope that it’s a collector on our network, the odds are much better that it’s an off-site collector. That means your WAN is about to be stressed too.

How about storing that data? If you are lucky enough to have an onsite collection system you’d better start buying drives for it now. This is a huge amount of data. Nimble Storage has been collecting analytics data from their storage arrays for a while now. Every four hours they collect more data than there are stars in the Milky Way. Makes you wonder where they keep it. And how long are they going to keep that data? Just like the crap in your attic that you swear you’re going to get around to using one day, big data and analytics platforms will keep every shred of information you want to keep for as long as you want to have it taking up drive space.

And what about security? Yeah, that’s an even scarier thought. Realize that many of the breaches we’ve read about in the past months have been hackers having access to systems for extended periods of time and only getting caught after they have exfiltrated data from the system. Think about what might happen if a huge data sink is sitting around unprotected. Sure, terabytes worth of data may be noticed if someone tries to smuggle it out of the DLP device. But all it takes is a quick SQL query against the users tables for social security numbers, a program to transpose those numbers into letters to evade the DLP scanner, and you can just email the file to yourself. Script a change from letters back to numbers and you’ve got a gold mine that someone left unlocked and lying around. We may be concentrating on securing the data in flight right now, but even the best armored car does no good if you leave the bank vault door open.


Tom’s Take

This whole thing isn’t all rain clouds and doom and gloom. IoT and Big Data represent a huge challenge for modern systems planning. We have the ability to unlock insight from devices that couldn’t tell us their secrets before. But we have to know how deep that pool will be before we dive in. We have to understand what these devices represent before we connect them. We don’t want our thermostats DDoSing our home networks any more than we want the milling machines on the factory floor coming to life and trying to find Sarah Connor. But the challenges we have with transporting, storing, and securing the data from IoT devices are no different than trying to program on punch cards or figure out how to download emails from across the country. Technology will give us the key to solve those challenges. Assuming we can keep our head above water.


Will Cisco Shine On?


Cisco announced their new Digital Ceiling initiative today at Cisco Live Berlin. Here’s the marketing part:

And here’s the breakdown of protocols and stuff:

Funny enough, here’s a presentation from just three weeks ago at Networking Field Day 11 on a very similar subject:

Cisco is moving into Internet of Things (IoT) big time. They have at least learned that the consumer side of IoT isn’t a fun space to play in. With the growth of cloud connectivity and other things on that side of the market, Cisco knows that is an uphill battle not worth fighting. Seems they’ve learned from Linksys and Flip Video. Instead, they are tracking the industrial side of the house. That means trying to break into some networks that are very well put together today, even if they aren’t exactly Internet-enabled.

Digital Ceiling isn’t just about the PoE lighting that was announced today. It’s a framework that allows all other kinds of dumb devices to be configured and attached to networks that have intelligence built in. The Constrained Application Protocol (CoAP) is designed in such a way as to provide data about a great number of devices, not just lights. Yet lights are the launch “thing” for this line. And it could be lights out for Cisco.

A Light In The Dark

Cisco wants in on the possibility that PoE lighting will be a huge market. No other networking vendor that I know of is moving into the market, and no other building automation company has the manufacturing chops to try and pull off an entire connected infrastructure for lighting. But lighting isn’t something to take lightly (pun intended).

There’s a lot that goes into proper lighting planning. Locations of fixtures and power levels for devices aren’t accidents. It requires a lot of planning and preparation. Plan and prep means there are teams of architects and others that have formulas and other knowledge on where to put them. Those people don’t work on the networking team. Any changes to the lighting plan are going to require input from these folks to make sure the illumination patterns don’t change. It’s not exactly like changing a lightbulb.

The other thing that is going to cause problems is the electrician’s union. These guys are trained and certified to put in anything that has power running to it. They aren’t just going to step aside and let untrained networking people start pulling down light fixtures and put up something new. Finding out that there are new 60-watt LED lights in a building that they didn’t put up is going to cause concern and require lots of investigation to find out if it’s even legal in certain areas for non-union, non-certified employees to install things that are only done by electricians now.

The next item of concern is the fact that you now have two parallel networks running in the building. Because everyone that I’ve talked to about PoE Lighting and Digital Ceiling has had the same response: Not On My Network. The switching infrastructure may be the same, but the location of the closets is different. The requirements of the switches are different. And the air gap between the networks is crucial to avoid any attackers compromising your lighting infrastructure and using it as an on-ramp into causing problems for your production data network.

The last issue in my mind is the least technically challenging, but the most concerning from the standpoint of longevity of the product line – Where’s the value in PoE lighting? Every piece of collateral I’ve seen and every person I’ve heard talk about it comes back to the same points. According to the experts, it’s effectively the same cost to install intelligent PoE lighting as it is to stick with traditional offerings. But that “effective” word makes me think of things like Tesla’s “Effective Lease Payment”.

By saying “effective”, what Cisco is telling you is that the up-front cost of a Digital Ceiling deployment is likely to be expensive. That large initial number comes down by things like electricity cost savings and increased efficiencies or any number of other clever things we tell each other to pretend that it doesn’t cost lots of money to buy new things. It’s important to note that you should evaluate the cost of a Digital Ceiling deployment completely on its own before you start taking into account any kind of cost savings that come months or years from now.
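To see why the word “effective” matters, here is a toy comparison with entirely made-up numbers; nothing below reflects real Cisco or Tesla pricing. The point is that the up-front gap is real money today, while the savings only arrive over years:

```python
def effective_cost(upfront, annual_savings, years):
    """'Effective' cost: sticker price minus projected savings over the period."""
    return upfront - annual_savings * years

# Hypothetical figures: PoE lighting at $50,000 up front vs $30,000 for a
# traditional install, with a claimed $4,000/year in electricity savings
# spread over five years.
poe_upfront = 50_000
traditional_upfront = 30_000

poe_effective = effective_cost(poe_upfront, 4_000, 5)  # "effectively the same" as 30,000
upfront_gap = poe_upfront - traditional_upfront        # the $20,000 you pay today
```

The arithmetic shows how an “effective” price hides the timing: the $20,000 gap is a check written now, while the offsetting savings are a projection.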


Tom’s Take

I’m not sure where IoT is going. There’s a lot of learning that needs to happen before I feel totally comfortable talking about the pros and cons of having billions of devices connected and talking to each other. But in this time of baby steps toward solutions, I can honestly say that I’m not entirely sold on Digital Ceiling. It’s clever. It’s catchy. But it ultimately feels like Cisco is headed down a path that will lead to ruin. If they can get CoAP working on many other devices and start building frameworks and security around all these devices then there is a chance that they can create a lasting line of products that will help them capitalize on the potential of IoT. What worries me is that this foray into a new realm will be fraught with bad decisions and compromises and eventually we’ll fondly remember Digital Ceiling as yet another Cisco product that had potential and not much else.

The Myth of Chargeback


Cash register by the National Cash Register Co., Dayton, Ohio, United States, 1915.

Imagine a world where every aspect of a project gets charged correctly. Where the massive amount of compute time for a given project gets labeled into the proper department and billed correctly. Where resources can be allocated and associated to the projects that need them. It’s an exciting prospect, isn’t it? I’m sure that at least one person out there said “chargeback” when I started mentioning all these lofty ideas. I would have agreed with you before, but I don’t think that chargeback actually exists in today’s IT environment.

Taking Charge

The idea of chargeback is very alluring. It’s been on slide decks for the last few years as a huge benefit to the analytics capabilities in modern converged stacks. By collecting information about the usage of an application or project, you can charge the department using that resource. It’s a bold plan to change IT departments from cost centers to revenue generators.

IT is the red-headed stepchild of the organization. IT is necessary for business continuity and function. Nothing today can run without computers, networking, or phones. However, we aren’t a visible part of the business. Much like the plumbers and landscapers around the organization, IT’s job is to make things happen and not be seen. The only time users acknowledge IT is when something goes wrong.

That’s where chargeback comes into play. By charging each department for their usage, IT can seek to ferret out extraneous costs and reduce usage. Perhaps the goal is to end up a footnote in the weekly management meeting where Brian is given recognition for closing a $500,000 deal and IT gets a shout-out for figuring out marketing was using 45% more Exchange server space than the rest of the organization. Sounds exciting, doesn’t it?

In theory, chargeback is a wonderful way to keep departments honest. In practice, no one uses it. I’ve talked to several IT professionals about chargeback. About half of them chuckled when I mentioned it. Their collective experience can best be summarized as “They keep talking about doing that around here but no one’s actually figured it out yet.”

The rest have varying levels of implementation. The most advanced ones that I’ve spoken to use chargeback only for physical assets in a project. If Sales needs a new server and five new laptops for Project Hunter, then those assets are charged back correctly to the department. This keeps Sales from asking for more assets than they need and hoping that the costs can be buried in IT somewhere.

No one that I’ve spoken to is using chargeback for the applications and software in an organization. We can slice the pie as fine as we want for how to allocate assets that you can touch but when it comes to figuring out how to make Operations pay their fair share of the bill for the new CRM application we’re stuck. We can pull all the analytics all day long but we can’t seem to get them matched to the right usage.

Worse yet, politics plays a big role in chargeback. If a department head disagrees with the way their group is being characterized for IT usage, they can go to their superiors and talk about how critical their operation is to the business and how they need to be able to work without the restrictions of being billed for their usage. A memo goes out the next day and suddenly the department vanishes from the records with an admonishment to “let them do their jobs”.

Cloud Charges

The next thing that always comes up is public cloud. Chargeback proponents are waiting for widespread adoption of public cloud. That’s because the billing method for cloud is completely democratic. Everyone pays the price no matter what. If an AWS instance is running someone needs to pay for it. If those systems can be isolated to a specific application or department then the chargeback takes care of itself. Everyone is happy in the end. IT gets to avoid blame for not producing and the other departments get their resources.
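In principle the attribution arithmetic really is that simple. A rough sketch, assuming every instance carries a department tag and a known hourly rate; all tags, hours, and rates below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical usage records: (department tag, hours run, hourly rate in dollars).
usage = [
    ("marketing", 720, 0.10),  # one small instance running the whole month
    ("sales",     200, 0.20),
    ("marketing", 100, 0.20),  # a second, larger instance
]

def chargeback(records):
    """Sum instance cost per department tag -- the case where the bill
    'takes care of itself' because every resource is cleanly attributed."""
    totals = defaultdict(float)
    for dept, hours, rate in records:
        totals[dept] += hours * rate
    return dict(totals)

bills = chargeback(usage)
```

The hard part in practice isn’t this loop; it’s getting every instance tagged to the right department in the first place, which is exactly where the politics described above creep back in.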

Of course, the real problem comes when the bills start piling up. Cloud isn’t cheap. It exposes the dirty little secret that sunk-cost hardware has a purpose. When you bill based on CPU hour you’ll find that a lot of systems sit idle. Management will come unglued trying to figure out how cloud costs so much. The commercials and sales pitches said we would save money!

Then the politics start all over again. IT gets blamed because cloud was implemented wrong. No protesting will fix that. Then comes the rapid cost-cutting. Systems not in use get shut off. Databases lose data capture for down periods. People can’t access systems in off hours. Work falls off and the cloud project gets scrapped for the old, cheaper way.

Cloud is the model that chargeback should follow, but those usage numbers still need to be correctly attributed. Just pushing a set of usage statistics down without context will lead to finger pointing and scrambling for explanations. Instead, we need to provide context from the outset. Maybe Marketing used an abnormally high amount of IT resources last week. But did it have anything to do with the end of the quarter? Can we track that usage back to higher profits from sales? That context is critical to figuring out how usage statistics affect things overall.


Tom’s Take

Chargeback is the stick that we use to threaten organizations to shape up and fly right. We make plans to implement a process to track all the evil things that are hidden in a department and by the time the project is ready to kick off we find that costs are down and productivity is up. That becomes the new baseline and we go on about our day thinking about how chargeback would have let us catch it before it became a problem.

In reality, chargeback is a solution that will take time to implement and cost money and time to get right. We need data context and allocation. We need actionable information and the ability to coordinate across departments. We need to know where the charges are coming from and why, not just complaining about bills. And there can be no exceptions. That’s the only way to put chargeback in charge.


We Are Number Two!


In my old IT life I once took a meeting with a networking company. They were trying to sell me on their hardware and get me to partner with them as a reseller. They were telling me how they were the number two switching vendor in the world by port count. I thought that was a pretty bold claim, especially when I didn’t remember seeing their switches at any of my deployments. When I challenged this assertion, I was told, “Well, we’re really big in Europe.” Before I could stop my mouth from working, I sarcastically replied, “So is David Hasselhoff.” Needless to say, we didn’t take this vendor on as a partner.

I tell this story often when I go to conferences and it gets laughs. As I think more and more about it the thought dawns on me that I have never really met the third best networking vendor in the market. We all know who number one is right now. Cisco has a huge market share and even though it has eroded somewhat in the past few years they still have a comfortable lead on their competitors. The step down into the next tier of vendors is where the conversation starts getting murky.

Who’s Number Two???

If you’ve watched a professional sporting event in the last few years, you’ve probably seen an unknown player in close up followed by an oddly specific statistic. Like Bucky has the most home runs by a left handed batter born in July against pitchers that thought The Matrix should have won an Oscar. When these strange stats are quoted, the subject is almost always the best or second best. While most of these stats are quoted by color announcers trying to fill airtime, it speaks to a larger issue. Especially in networking.

No one wants the third best anything. John Chambers is famous for saying during every keynote, “If Cisco can’t be number one or number two in a market we won’t be in it.” That’s easy to say when you’re the best. But how about the market down from there? Everyone is trying to position themselves as the next best option in some way or another. Number two by port counts. Number two by ports sold (which is somehow a different metric). Number two by units shipped or amount sold or some other way of phrasing things slightly differently in order to be the viable alternative.

I don’t see this problem a lot in other areas. Servers have a clear first, second, and third order. Storage has a lot of tiering. Networking, on the other hand, doesn’t have a third best option. Yet there are way more than three companies doing networking. Brocade, HPE, Extreme, Dell, Cumulus Networks, and many more if you want to count wireless companies with switching gear for the purpose of getting into the Magic Quadrant. No one wants to be seen as the next best, next best thing.

How can we fix this? Well, for one thing we need impartial ranking. No more magical polygons and reports that take seventeen pages to say “It depends”. There are too many product categories that you can slice your solution into to be the best. Let’s get back to the easy things. Switches are campus or data center. Routers are campus or service provider. Hardware falls here or there. No unicorns. No single-product categories. If you’re the only product of your kind you are in the wrong place.


Tom’s Take

Once again, I think it’s time for a networking Consumer Reports type of publication. Or maybe something like Storage Review. We need someone to come in and tell vendors that they are the third or fourth best option out there by the following simple metrics. Yes, it’s very probable that said vendors would just ignore the ranking and continue building their own marketing bubble around the idea of being the second best switching platform for orange switches sold to school districts not named after presidents. Or perhaps finding out they are behind the others will spur people inside the company into actually getting better. It’s a faint hope, but hey. The third time’s a charm.

The Butcher’s Bill


Watching a real butcher work is akin to watching a surgeon. They are experts with their tools, which are cleavers and knives instead of scalpels and stitches. They know how to carve the best cut of meat from a formless lump. And they do it with the expert eye of a professional trained in their trade.

Butcher is a term that is often loaded with all manner of negative connotations. It makes readers think of indiscriminate slaughter and haphazard destruction. But the real truth is that a butcher requires time and training to cut as they do. There is nothing that a butcher does that isn’t calculated and careful.

Quick Cuts

Why all the discussion about butchers? Because you’re going to see a lot more comparisons in the future when people talk about the pending Dell/EMC acquisition. The real indiscriminate cutting has already started. EMC hid an undisclosed number of layoffs in a Dec. 31 press release. VMware is going to take a 5% hit in jobs, including the entire Workstation and Fusion teams.

It’s no secret that the deal is in trouble right now. Investors are cringing at some of the provisions. The Virtustream spin out was rescinded after backlash. The tracking stock created to creatively dodge some tax issues is now so low that it needs a ladder to tickle a snake’s belly. Every news day brings another challenge to the deal that is more likely to sink it than to save it.

In order to meet this rising tide of disillusionment, Dell and EMC are pulling out all the stops. Expect to see more ham-handed decisions in the future, like cashiering entire teams and divisions in order to get under some magical number that investors like and will be willing to support in order to make this mega merger happen. Given Michael Dell’s comments about investors during his run to make Dell a private company, I’m sure he probably has a very sour taste in his mouth thanks to all this.

Butchers work to make the best possible product from the raw materials given. There are no second chances. No do-overs. You have to get it right the first time. That very reason is why all this scrambling looks more like the throes of a desperate gambit instead of a sound merger strategy.

Prime Cuts

All companies that merge have duplicate jobs. It’s a fact of business. Much of the job overlap comes on the administrative side of the house. Legal, accounting, and management teams all have significant overlap no matter where you go. And while those teams are important for keeping the lights on and getting the bills paid, those positions represent redundancy that almost never gets trimmed away. Staff positions keep the machine moving. That means they stay.

Assuming that no one inside of either organization wants to cut staff positions, how can we approach something resembling more sane carving that accomplishes the same goals without leading to the hemorrhaging that will come from large-scale indiscriminate layoffs?

  1. Kill off needless products. While I’m sure this is an on-going process, there are some pretty easy targets for this one. Haven’t sold that SKU in two years? Gone. Wind down support and give a discounted upgrade to something you do support. Kill off SKUs that exist solely to win awards.
  2. Reduce products by collapsing product lines. You don’t need two entry-level products for iSCSI storage. Or five different enterprise-class arrays. Kill off the things that overlap or directly compete against each other. Who survives? The one that sells better. The one that has better tech. The one that costs less to support. If you’re going to pinch pennies in other places, you had better start doing it here too.
  3. Management reductions need to happen too. For all the talk of reducing engineering teams and creating synergy, it’s surprising how often managers escape the layoffs. They’re almost like professors with tenure. Well, it’s time for them to prove their worth too. If their department is gone, so are they. If they are an ineffective manager, pay them a severance and let them earn their role somewhere else all over again. And that goes double for the 500 CTOs that seem to have sprung up inside large organizations lately.

You’d think these things were obvious and easy to figure out. Yet these are the kinds of decisions that get overlooked during every merger.


Tom’s Take

Layoffs hurt lots of people. It’s never fun when your teammates and friends get sacked. But you can be smart about who goes and how best to make the new company survive and even thrive. Chopping away at the company with a machete is like a horror movie. People are going to scream and cry and you’ll be lucky to live through the end. Instead of taking that approach, be smart. Make the best cuts you can from what you’ve got. Find ways to package the parts no one might want with other parts that people find attractive. Do what you can to use as much as you can. Think like a professional butcher. Don’t act like an amateur one.

The Tortoise and the Austin Hare

Dell_Logo

Dell announced today the release of their newest network operating system, OS10 (note the lack of an X). This is an OS slated to build on the success Dell has had selling third-party solutions from vendors like Cumulus Networks and Big Switch. OS10’s base will be built on an unmodified Debian distro, with a “premium” feature set adding layer 2 and layer 3 functionality. The aim of having a fully open-source base OS in the networking space is lofty indeed, but the bigger question to me is “what happens to Cumulus?”

Storm Clouds

As of right this moment, before the release of Dell OS10, the only way to buy Linux on a Dell switch is to purchase it with Cumulus. In the coming months, Dell will begin to phase in OS10 as an option in parallel with Cumulus. This is especially attractive to large environments that are running custom-built networking today. If your enterprise is running Quagga or sFlow or some other software that has been tweaked to meet your unique needs, you don’t really need a ton of features wrapped in an OS with a CLI you will barely use.
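To make that concrete, a shop like this might run its routing straight from a stock Quagga configuration file on the switch, never touching a vendor CLI. Here's a minimal, hypothetical sketch; the ASN, router ID, and prefixes are invented for illustration:

```
! /etc/quagga/bgpd.conf -- hypothetical example of running BGP
! directly on a Linux-based switch OS, no vendor CLI required
router bgp 65001
 bgp router-id 10.0.0.1
 neighbor 10.0.0.2 remote-as 65002
 network 192.0.2.0/24
```

The point is that everything here is just a text file under /etc. The switch OS only needs to boot Linux and expose the ports; the rest is software you already know how to manage.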

So why introduce an OS that directly competes with your partners? Why upset that apple cart? It comes down to licenses. Every time someone buys a Dell data center switch, they have to pick an OS to run on it. You can’t just order naked hardware and install your own custom kernel and apps. You have to order some kind of software. When you look at the drop-down menu today you can choose from FTOS, Cumulus, or Big Switch. For the kinds of environments that are going to erase and reload anyway the choice is pointless. It boils down to the cheapest option. But what about those customers that choose Cumulus because it’s Linux?

Customers want Linux because they can customize it to their heart’s content. They need access to the switching hardware and other system components. So long as the interface is somewhat familiar, they don’t really care what engine is driving it. But every time a customer orders a switch today with Cumulus as the OS option, Dell has to pay Cumulus for that software license. It costs Dell money to ship someone else’s Linux on its own switch.

OS10 erases that fee. By ordering a base image that can only boot and address hardware, Dell puts a clean box in the hands of developers that are going to be hacking the system anyway. When the new feature sets are released later in the year that increase the functionality of OS10, you will likely see more customers beginning to experiment with running Linux development environments. You’ll also see Dell beginning to embrace a model that loads features on a switch as software modules instead of purpose-built appliances.

Silver Lining

Dell’s future is in Linux. Rebuilding their OS from the ground up on Linux only makes sense given industry trends. Junos and EOS, as well as OSes from upstarts like Pluribus and Big Switch, are all based on Linux or BSD. Reinventing the wheel makes very little sense there. But utilizing the Switch Abstraction Interface (SAI) developed for OpenCompute gives Dell an edge: the company can focus on northbound feature development while leaving the gory details of addressing hardware to the abstraction layer below.
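The division of labor behind that idea can be sketched in a few lines of Python. To be clear, this is not the real SAI (which is a C API defined in the OpenCompute headers); the class and method names below are invented purely to show the pattern of northbound features talking to an abstraction instead of to silicon:

```python
# Illustrative sketch only -- the real SAI is a C API from the
# OpenCompute project. These names are invented to show the
# division of labor, not to mirror the actual interface.
class SwitchAbstraction:
    """Northbound-facing interface; hides the ASIC underneath."""
    def create_route(self, prefix, next_hop):
        raise NotImplementedError

class VendorAsicDriver(SwitchAbstraction):
    """Each silicon vendor supplies a driver behind the same calls."""
    def __init__(self):
        self.routes = {}

    def create_route(self, prefix, next_hop):
        # A real driver would program the ASIC's forwarding tables here.
        self.routes[prefix] = next_hop
        return True

# Feature code in the NOS talks only to the abstraction, so it never
# needs to know which vendor's chip is underneath.
asic = VendorAsicDriver()
asic.create_route("192.0.2.0/24", "10.0.0.2")
print(asic.routes["192.0.2.0/24"])  # -> 10.0.0.2
```

Swap in a different driver and the feature code above it never changes, which is exactly the edge Dell gets from building northbound instead of down into the hardware.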

Dell isn’t going to cannibalize their Cumulus partnership immediately. There are still a large number of shops running Cumulus that are going to want support from their vendor of choice in the coming months. Also, there are a large number of Dell customers that aren’t ready to radically disaggregate hardware from software. Those customers will require some monitoring, as they are likely to buy the cheapest option as opposed to the best fit and wind up with a switch that will boot and do little else to solve network problems.

In the long term, Cumulus will continue to be a fit for Dell as long as OS10 isn’t ported to the Campus LAN. Once that occurs, you will likely see a distancing of these two partners as Dell embraces their own Linux OS options and Cumulus moves on to focus on using whitebox hardware instead of bundling themselves with existing vendors. Once the support contracts expire on the Cumulus systems supported by Dell, I would expect to see a professional services offering to help those users of Cumulus-on-Dell migrate to a “truly open and unmodified kernel”.


Tom’s Take

Dell is making strides in opening up their networking with Linux and open source components. Juniper has been doing it forever, and HP recently jumped into the fray with OpenSwitch. Making yourself open doesn’t solve your problems or conjure customers out of thin air. But it does give you a story to tell about your goals and your direction. Dell needs to keep their Cumulus partnerships going forward until they can achieve feature parity with the OS that currently runs on their data center switches. After that happens and the migration plans are in place, expect to see a bit of jousting between the two partners about which approach is best. Time will tell who wins that argument.

 

 

My Thoughts On The Death Of IP Telephony

A Candlestick Phone (image courtesy of Wikipedia)

Greg Ferro (@EtherealMind) posted a thought-provoking article about collaboration in his Human Infrastructure magazine (which you should be reading). He talks about the death of IP Telephony and the rise of asynchronous communications methods like Slack. He’s got a very interesting point of view. I just happen to disagree with a few of his assertions.

IP Telephony Is Only Mostly Dead

Greg’s stance that IP Telephony is dead is a bit pointed, to say the least. He is correct that the market isn’t growing. It is also true that a great number of new workers entering the workforce prefer to use their smartphones for communications, especially the asynchronous kind. However, desk phones are still going to be a huge part of corporate communications going forward.

IT shops have a stilted and bizarre world view. If you have a workforce that has to be mobile, whether it be for making service calls or going to customer sites for visits, you have a disproportionately large number of mobile users for sure. But what about organizations that don’t have large mobile populations? What about financial firms or law offices or hospitals? What about retail organizations? These businesses have specific needs for communications, especially with external customers and users.

Imagine if your pharmacy replaced its phone with a chat system. How about your doctor’s office throwing out its PBX and going to an email-only system? How would you feel about that? A couple of you might cheer because they finally “get it.” But a large number of people, especially more traditional non-technical folks, would switch providers or move their business elsewhere. That’s because some organizations rely on voice communications. For every millennial dumping their office phone to use a mobile device, there is still someone they need to call on the other end.

We’re not even talking about the important infrastructure that still needs a lot of specialized communications gear. Fax machines are still a huge part of healthcare and legal work. Interactive Voice Response (IVR) systems are still crucial to handle call volumes for things like support lines. These functions can’t be replaced on mobile devices easily. You can fake IVRs and call queuing with the right setup, but faxing things to a mobile device isn’t possible.

Yes, services do exist to capture fax information as a TIFF or JPG and email it to the destination. But for healthcare and legal, this breaks confidentiality clauses and other important legal structures. The area around secure faxing via email is still a bit murky, and most of the searches you can do for the topic revolve around companies trying to tell you that it’s acceptable and okay to use (as long as you use their product).

IP Telephony isn’t far removed from buggy whip manufacturers. The oft-cited example of a cottage industry has relevance here. At some point after the introduction of the automobile, buggy whip growth slowed and eventually halted. But they didn’t go away permanently. The market did contract and still exists to this day. It’s not as big as the 13,000-strong market it once was, but it does exist today to meet a need that people still have. Likewise, IP Telephony will still have solutions to meet needs for specific customers. Perhaps we’ll contract down to two or three providers at some point in the future, but it will never really go away.

I’ll Have My People IM Your People

Contacting people is a real exercise today. There are so many ways to reach someone that you have to spend time figuring out which one to use. Direct messages, text messages, phone calls, voice mail, email, and smoke signals are all valid forms of communication. It is true that a lot of these communications are moving toward asynchronous methods. But as mentioned above, a lot of customer-facing businesses are still voice-only.

Sales is one of these areas that is driven by sound. The best way to sell something to someone is to do it face-to-face. But a phone call is a close second. Sales works because you can use your voice to influence buyers. You can share their elation and allay their fears. You can be soothing or exciting or any range of emotion in between. That’s something you don’t get through the cold text of instant messaging or email.

It’s also much harder to ignore phone calls. Sure, you can send read receipts with emails, but these are rarely implemented and even more rarely used correctly in my experience. Phone calls alert people about intent. Even ignoring or delaying them means being sent to a voice mail box or to another phone in the department where your call can be dealt with. The synchronous nature of the communication means there has to be a connection with someone. You can’t just launch bytes of text into the ether and hope they get where they’re supposed to go.

It is true that these voice communications happen via mobile numbers more often than not in this day and age. But corporations still prefer those calls to go through some kind of enterprise voice system. Those systems can track communications and be audited. Records can be subpoenaed for legal reasons without needing to involve carriers.

It’s much easier for call centers to track productivity via phone logs than by watching who is on the phone. If you’ve ever worked in a corporate call center, you know there are metrics for everything you do on the phone: average call time, average wait time, amount of non-call time, and so on. Each of these metrics can be tracked via a desk phone with a headset; not so with a mobile phone and an app.


Tom’s Take

I live on my mobile phone. I send emails and social media updates. I talk in Slack and Skype and other instant messaging platforms. But I still get on the phone at least three times a week to talk to someone. Most of those calls take place on a conference bridge. That’s because people want to hear someone’s voice. It’s still comforting and important to listen to someone.

Doing away with IP Telephony sounds like an interesting strategy for small businesses and startups. It’s a cost-reduction method that has benefits in the short term. But as companies grow and change they will soon find that having a centralized voice system to control and manipulate calls is a necessity. Given the changes in voice technology in the last few years, I highly expect that “centralized” voice will eventually be a pay-per-seat cloud leased model with specific executives and support personnel using traditional phones while non-critical employees have no voice communications device or choose to use their personal mobile device.

IP Telephony isn’t dead. It’s not even dying. But it’s well past the age where it needs to consider retirement and a long and fulfilling life concentrating on specific people instead of trying to make everyone happy.

 

 

It’s Time For IPv6, Isn’t It?

 

I made a joke tweet the other day:

It did get lots of great interaction, but I feel like part of the joke was lost. Every one of the things on that list has been X in “This is the Year of X” for the last couple of years. Which is sad because I would really like IPv6 to be a big part of the year.

Ars Technica came out with a very good IPv6-focused article on January 3rd talking about the rise in adoption to 10% and how there is starting to be a shift in the way that people think about IPv6.

Old and Busted

One of the takeaways from the article that I found most interesting was a quote from Brian Carpenter of the University of Auckland about address structure. Most of the time when people complain about IPv6, they say that it’s stupid that IPv6 isn’t backwards compatible with IPv4. Carpenter has a slightly different take on it:

The fact that people don’t understand: the design flaw is in IPv4, which isn’t forwards compatible. IPv4 makes no allowance for anything that isn’t a 32 bit address. That means that, whatever the design of IPng, an IPv4-only node cannot communicate directly with an IPng-only node.

That’s a very interesting take on the problem that hadn’t occurred to me before. We’ve spent a lot of time trying to make IPv6 work with IPv4 in a way that doesn’t destroy things when the problem has nothing to do with IPv6 at all!

The real issue is that our aging IPv4 protocol just can’t be fixed to work with anything that isn’t IPv4. When you frame the argument in those terms you can start to realize why IPv4’s time is coming to an end. I’ve been told by people that moving to 128-bit addressing is overkill and that we need to just put another octet on the end of IPv4 and make them compatible so we can use things as they are for a bit longer.

Folks, the 5th octet plan would work exactly like IPv6 as far as IPv4 is concerned. The issue boils down to this: IPv4 is hard-coded to reject any address that isn’t exactly 32 bits in length. It doesn’t matter if your proposal is 33 bits or 256 bits; the result is the same: IPv4 won’t be able to talk to it directly. The only way to make IPv4 talk to any other protocol version would be to extend it. And the massive amount of effort that would take is why we have things like dual stack and translation gateways for IPv6. Every plan to make IPv4 work a little longer ends the same way: scrap it for something new.
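You can see that hard 32-bit limit in any IPv4 parser you have lying around. A quick Python sketch, using documentation example addresses:

```python
import socket

# An IPv4 address packs into exactly 4 bytes (32 bits) -- the
# header format has no room for anything longer.
v4 = socket.inet_pton(socket.AF_INET, "192.0.2.1")
print(len(v4))  # -> 4

# An IPv6 address needs 16 bytes (128 bits), so it can never fit
# into an IPv4 address field.
v6 = socket.inet_pton(socket.AF_INET6, "2001:db8::1")
print(len(v6))  # -> 16

# A hypothetical "5th octet" address is rejected out of hand,
# just like any other non-32-bit format would be.
try:
    socket.inet_pton(socket.AF_INET, "192.0.2.1.1")
except OSError:
    print("not a valid 32-bit IPv4 address")
```

Whether the extra bits come from a fifth octet or from IPv6’s 128-bit addresses, the IPv4 side of the conversation rejects them all the same.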

New Hotness

Fresh from our take on how IPv4 is a busted protocol for the purposes of future proofing, let’s take a look at what’s driving IPv6 right now. I got an email from my friend Eric Hileman, who runs a rather large network, asking me when he should consider his plans to transition to IPv6. My response was “wait for mobile users to force you there.”

Mobility is going to be the driving force behind IPv6 adoption. Don’t believe me? Grab the closest computing device to your body right now. I’d bet more than half of you reached for a phone or a tablet if you didn’t already have a smartwatch on your wrist. We are a society that is embracing mobile devices at an increasingly rapid rate.

Mobility is the new consumer compute. That means connectivity. Connectivity everywhere. My children don’t like not being able to consume media away from wireless access points. They would love to have a cellular device to allow them access to TV shows, movies, or even games. That generation is going to grow up to be the primary tech consumer in the next ten years.

In those intervening years, our tech infrastructure is going to balloon like never before. Smart devices will be everywhere. We are going to have terabytes of data from multiple sources flooding collectors to produce analysis that must be digested by some form of intelligence, whether organic or artificial. How do you think all that data is going to be transmitted? On a forty-year-old protocol with no concept of the future?

IPv6 has to become the network protocol that supports future infrastructure. Mobility is going to drive adoption, but the tools and software we build around mobility are going to push new infrastructure adoption as well. IPv6 is going to be a huge part of that. Devices that don’t support IPv6 are going to be just like the IPv4 protocol they do support – forever stuck in the past with no concept of how the world is different now.


Tom’s Take

It’s no secret I’m an IPv6 champion. Even my distaste for NAT has more to do with its misuse with regard to IPv6 than any dislike for it as a protocol. IPv6 is something that should have been recognized ten years ago as the future of network addressing. When you look at how fast other things around us transform, like mobile computing or music and video consumption, you can see that technology doesn’t wait for stalwarts to figure things out. If you don’t want to be stuck using IPv4 alongside your VCR, it’s time to start planning for how you’re going to get to IPv6.