Intel and the Network Arms Race


Networking is undergoing a huge transformation. Software is surely a huge driver, enabling technology to grow by leaps and bounds and add functionality. But the hardware underneath is growing just as much. We don’t seem to notice as much because the port speeds we deal with on a regular basis haven’t gotten much faster than the specs we read about years ago. But the chips behind the ports are where the real action is right now.

Fueling The Engines Of Forwarding

Intel has jumped into networking with both feet and is looking to land on someone. Their work on the Data Plane Development Kit (DPDK) is helping developers write forwarding code that is highly portable across CPU architectures. We used to deal with specific microprocessors in unique configurations. A good example is Dynamips.

Most everyone is familiar with this program or the projects it spawned, Dynagen and GNS3. Dynamips worked at first because it emulated the MIPS processor found in Cisco 7200 routers. It just happened that the software used the same code for those routers all the way up to the first releases of the 15.x train. Dynamips allowed for the emulation of Cisco router software, but it was very, very slow. It could barely process packets at all. And most of the advanced switching features didn’t work thanks to ASICs.

Running networking code on generic x86 processors doesn’t provide the kind of performance that you need in a network switching millions of packets per second. That’s why DPDK is helping developers accelerate their packet forwarding to approach the levels of custom ASICs. This means that a company could write software for a switch using Intel CPUs as the base of the system and expect to get good performance out of it.
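
To make that concrete, here is a minimal sketch of the poll-mode loop at the heart of a DPDK forwarder, following the general shape of DPDK’s sample applications. The port numbers are hypothetical and the mempool and port setup are omitted, so treat this as an illustration rather than a complete program:

    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define RX_PORT 0        /* hypothetical ingress port */
    #define TX_PORT 1        /* hypothetical egress port */
    #define BURST_SIZE 32    /* packets pulled per poll */

    int main(int argc, char **argv)
    {
        /* Initialize the DPDK Environment Abstraction Layer. */
        if (rte_eal_init(argc, argv) < 0)
            return EXIT_FAILURE;

        /* Mempool creation and port/queue setup omitted for brevity;
           see DPDK's basicfwd sample for the full boilerplate. */

        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {
            /* Busy-poll the NIC instead of waiting on interrupts.
               Bypassing the kernel stack is where DPDK finds its
               speed on generic x86. */
            const uint16_t nb_rx = rte_eth_rx_burst(RX_PORT, 0, bufs, BURST_SIZE);
            if (nb_rx == 0)
                continue;

            /* Forward the burst out the other port. */
            const uint16_t nb_tx = rte_eth_tx_burst(TX_PORT, 0, bufs, nb_rx);

            /* Free any packets the TX queue could not accept. */
            for (uint16_t i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
        }
    }

The point of the pattern is that nothing here touches the kernel on the hot path; the loop owns a core and polls the hardware directly.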

Not only can you write code that’s almost as good as the custom stuff network vendors are creating, but you can also have reasonable assurance that the code will be portable. Look at the pfSense project. It can run on some very basic hardware. But the same code can also run on a Xeon if you happen to have one of those lying around. That performance boost means a lot more packet switching and processing, with no modifications to the code needed. That’s a powerful way to make sure that your operating system doesn’t need radical modifications to work across a variety of platforms, from SMB and ROBO devices all the way to an enterprise core device.

Fighting The Good Fight

The other reason behind Intel’s drive to get DPDK to everyone is to fight off the advances of Broadcom. It used to be that the term merchant silicon meant using off-the-shelf parts instead of rolling your own chips. Now, it means “anything made by Broadcom that we bought instead of making”. Look at your favorite switching vendor and the odds are better than average that the chipset inside their most popular switches is a Broadcom Trident, Trident 2, or even a Tomahawk. Yes, even the Cisco Nexus 9000 runs on Broadcom.

Broadcom is working their way to the position of arms dealer to the networking world. It soon won’t matter what switch wins because they will all be the same. That’s part of the reason for the major differentiation in software recently. If you have the same engine powering all the switches, your performance is limited by that engine. You also have to find a way to make yourself stand out when everything on the market has the exact same packet forwarding specs.

Intel knows how powerful it is to become the arms dealer in a market. They own the desktop, laptop, and server market space. Their only real competition is AMD, and one could be forgiven for arguing that the only reason AMD hasn’t gone under yet is a combination of video card sales and Intel making sure they won’t get in trouble for having a monopoly. But Intel also knows what it feels like to miss the boat on a chip transition. Intel missed the mobile device market, which is now ruled by ARM and custom SoC manufacturing. Intel needs to pull off a win in the networking space with DPDK to ensure that the switches running in the data center tomorrow are powered by x86, not Broadcom.


Tom’s Take

Intel’s on the right track to make some gains in networking. Their new Xeon chips with lots and lots of cores can do parallel processing of workloads. Their contributions to CoreOS will help accelerate the adoption of containers, which are becoming a standard part of development. But the real value for Intel is helping developers create portable networking code that can be deployed on a variety of devices. That enables all kinds of new things to come, from system scaling to cloud deployment and beyond.

Video And The Death Of Dialog


I was reading a trivia article the other day about the excellent movie Sex, Lies, and Videotape when a comment by the director, Steven Soderbergh, caught my eye. The quote, from this article, talks about how people use video as a way to distance themselves from events. Soderbergh used it as a metaphor in a movie made in 1989. In today’s society, I think video is having this kind of impact on our careers and our discourse in a much bigger way.

Writing It Down In Pictures

People have become huge consumers of video. YouTube gets massive amounts of traffic. Devices have video recording capabilities built in. It’s not uncommon to see a GoPro camera attached to anything and everything and see people posting videos online of things that happen.

My son is a huge fan of videos of other people playing video games. He’ll watch hours of someone playing a game and narrating the experience. When I tell him that he’s capable of playing the game himself he just tells me, “It’s not as fun that way, Dad.” I, too, have noticed that a lot of things that would normally have been written down are narrated as videos today.

A great example of this is the Stuck in Traffic video blog series from J Wolfgang Goerlich (@JWGoerlich). These videos are great examples of things that would have been blog posts just a few years ago but are now narrated and posted to a channel for people to consume. This is also the way that podcasts have risen to dominate the attention of people looking to consume information. But video requires a bit more attention than audio-only discussions.

One of the big issues I see with videos is that they are not living, breathing documents. They exist as they are created with no way to modify the content short of destroying it and recreating it. If I write a blog post and make a factual error, it’s very easy to fix that issue. I can write a note about how I made a mistake and someone pointed it out. Or, in some cases, I can write a whole new post about the error and how I figured out what was wrong.

But video is different. I find all too often that people make factual errors in videos, either by misstating something or by being plain mistaken. Usually these errors aren’t caught in the process because the creator is unaware of the mistake while recording. But instead of being able to correct it with a follow-up or a postscript, most video producers are forced to cover the incorrect comment with a large annotation in the video window pointing out that they were wrong, forcing the viewer to pay attention to the annotation instead of the spoken or written word in the original video.

The inability to correct problems and create living documents is a huge problem for video creators. Errors can’t be easily fixed. On platforms like YouTube you can’t even upload a new video in place of the old one without destroying all of your views and comments. It makes mea culpas a huge pain. There’s no process to fix things unless you catch them before posting.

On Broadcast With No Mic

The other thing that bothers me the most about video is actually very similar to the reason why I hate keynotes. The whole process of broadcasting a message without soliciting feedback is irritating at best. With a blog post, you can have comments and discussions and even more posts about subjects that go on to create commentary. Videos are static. You can’t start watching one and then come back to it later like you can with a blog post. Of course videos can be paused, but the vast majority of video creators do their best to minimize the amount of dead air in a video.

It’s also very difficult to create discussion with videos. Instead of being able to address points one at a time and create dialog there, you are forced to address or refute points in series with no stopping. That makes it easy to overlook things, never realizing that you missed a great idea or that a simpler point elsewhere could have summed things up.

What’s missing is the ability to let conversations develop. If you think of Slack and email as the ultimate form of conversation, video is the ultimate form of one-sided discussion. Television, movies, and other video sources are designed to deliver content with no regard for feedback going the other direction. That is the key piece missing to make video something beyond a simple broadcast medium.


Tom’s Take

Video is a tool designed to get your views across with minimal input from viewers. I didn’t realize this until I heard my son “closing” a video with the standard “like, share, and subscribe” type of sign-off. There wasn’t any mention of leaving comments or creating a video reply. It was really at that point that I realized that video blogs and channels are the pinnacle of insulating us from the audience. All a creator needs to do is post videos and turn off comments, and they can almost guarantee that they can continue creating messages that people will hear but never be able to respond to.

Wireless As We Know It Is Dead


Congratulations! We have managed to slay the beast that is wireless. We’ve driven a stake through its heart and prevented it from destroying civilization. We’ve taken a nascent technology with potential and turned it into the same faceless corporate technology as the Ethernet that it replaced. Alarmist? Not hardly. Let’s take a look at how 802.11 managed to come to an inglorious end.

Maturing Or Growing Up

Wireless used to be the wild frontier of networking. Sure, those access points bridged to the traditional network and produced packets and frames like all the other equipment. But wireless was unregulated. It didn’t conform to the plans of the networking team. People could go buy a wireless access point and put it under their desk to make that shiny new laptop with 802.11b work without needing to be plugged in.

Wireless used to be about getting connectivity. It used to be about squirreling away secret gear in the hopes of getting a leg up on the poor schmuck in the next cube that had to stay chained to his six feet of network connectivity under the desk. That was before the professionals came in. They changed wireless. They put a suit on it.

Now, wireless isn’t about making my life easier. It’s about advancing the business. It’s about planning and preparation and enabling applications. It’s about buying lots of impressively-specced access points every three years to light up new wings of the building. It’s about surveying for coverage and resource management to make sure the signal is strong everywhere. Everyone has to play nice and understand the rules.

Wireless professionals are the worst of the lot. They used to deal in black magic and secret knowledge that made them the most valuable people on the planet. They alone knew the secrets of how spectrum worked or what co-channel interference was. That was before the dark times. Before people wanted to learn more about it. Now, we can teach people these concepts. How to use tools to fix problems. Why things must be laid out in certain ways to maximize usefulness. We’ve made everyone special.

Now, the business doesn’t want wizards with strange work habits and even stranger results. They want the same predictable group that they’ve gotten for the last decade with the network team. They want people to blame when their application is slow. They want the infrastructure to work full time in every little corner of the building. And when it doesn’t, they want to know whose head must roll for this affront!

The Establishment

Another thing that destroyed wireless was everyone’s attempt to make it mainstream. Gartner’s Wired and Wireless reports didn’t help. Neither did the push to create tools that make it easy to diagnose issues with a minimum of effort. Now, companies think that wireless is something that just happens. Something that doesn’t take planning to execute. Now, wireless professionals are fired or marginalized because it shouldn’t take that much money to configure something so simple, right?

Why do wireless people need professional development? The networking team gets by with reading those old dusty books. How much can wireless really change year to year? It just gets faster and more expensive. Why should you have to learn how to put up those little access points all over again?

Now that wireless is a part of the infrastructure like switches and routers, it’s time for it to be forgotten. Now the business needs to focus on some other technology, one that will likely be implemented incorrectly and doesn’t support the mission of the business. You know, the kinds of things we read about in industry trade magazines, the ones they use in sports stadiums or hospitals, that sound really awesome and can’t be all that expensive, right?


Tom’s Take

We killed wireless because we used it to do the job it was designed to do. We made it boring and useful and pervasive. As soon as a technology achieves that level of use it naturally becomes something unimportant. Which you will be quick to argue about until you realize that you’re probably reading this from a smartphone that is so commonplace you forget you’re using it.

Now we talk about the apps and technology we’re building on top of wireless. Mobility, location, and other things that are more appealing to people shelling out money to buy things. Buyers don’t want boring. They want expensive gadgets they can point to and loudly proclaim that they spent a lot for this bauble.

Wireless is a victim of its own success. We fought to make it a part of the mainstream and now that it is no one cares about it any more. Now that we take it for granted we must accept that it’s not a “thing” any more. It just is.

Thoughts On Encryption


The debate on encryption has heated up significantly in the last couple of months. Most of the recent discussion has revolved around a particular device in a specific case, but encryption is older than that. Modern encryption systems represent the culmination of centuries of work on keeping things from being seen.

Encryption As A Weapon

Did you know that twenty years ago the U.S. Government classified encryption as a munition? Data encryption was classified as a military asset and placed on the U.S. Munitions List as an auxiliary asset. The control of encryption as a military asset meant that exporting strong encryption to foreign countries was against the law. For a number of years the only thing that could be exported without fear of legal impact was regular old Data Encryption Standard (DES) methods. Even 3DES, which is theoretically much stronger but practically not much better than its older counterpart, was restricted for export to foreign countries.

While the rules around encryption export have been relaxed since the early 2000s, there are still some restrictions in place. Those rules apply to countries on U.S. Government watch lists as sponsors of terror or governments deemed “rogue” states. This is why you must attest that you don’t live in or do business with one of those named countries when you download any software that contains encryption protocols. The problem today is that almost every piece of software includes some form of encryption thanks to ubiquitous libraries like OpenSSL.

Even the father of Pretty Good Privacy (PGP) was forced to circumvent U.S. law to get PGP into the hands of the world. Phil Zimmermann took the novel approach of printing the entirety of PGP’s source code in book form and selling it far and wide. Since books are protected speech, no one could stop them from being sold. The only barrier to recreating PGP from the text was how fast one could type. Today, PGP is one of the most popular forms of encrypting written communications, such as emails.

Encryption As A Target

Today’s issues with encryption are rooted in the idea that it shouldn’t be available to people that would use it for “bad” reasons. However, instead of being able to draw a line around a place on a map and say “The people inside this line can’t get access to strong encryption”, we now live in a society where strong encryption is available on virtually any device to protect the growing amount of data we store there. Twenty years ago no one would have guessed that we could use a watch to pay for coffee, board an airplane, or communicate with loved ones.

All of that capability comes with a high information cost. Our devices need to know more and more about us in order to seem so smart. The amount of data contained in innocuous things causes no end of trouble should that data become public. Take the amount of data contained on the average boarding pass. That information is enough to know more about you than is publicly available in most places. All from one little piece of paper.

Keeping that information hidden from prying eyes is the mission of encryption. The spotlight right now is on the government and their predilection for looking at communications. Even the NSA once stated that strong encryption abroad would weaken the ability of their own technology to crack signals intelligence (SIGINT) communications. Instead, the NSA embarked on a policy of sniffing the data before it was ever encrypted by installing backdoors at ISPs and other points to grab the data in flight. Add in the recent vulnerabilities found in the key exchange process and you can see why robust encryption is critical to protecting data.

Weakening encryption so it can be easily overcome by brute force is asking for a huge Pandora’s box to be opened. Perhaps in the early nineties it was unthinkable for someone to command enough compute resources to overcome large-number theory. Today it’s not unheard of to control resources vast enough to reverse simple problems in a matter of hours or days instead of weeks or years. Every time a new vulnerability comes out that uses vast computing power to break that theory, it weakens us all.
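
Some hypothetical back-of-the-envelope arithmetic shows why key strength is the whole game. Assume an attacker who can test 10^12 keys per second, a made-up round number rather than a benchmark of any real system:

    2^56  ≈ 7.2 × 10^16 keys (single DES)  → ≈ 72,000 seconds, or about 20 hours
    2^128 ≈ 3.4 × 10^38 keys (modern AES)  → ≈ 3.4 × 10^26 seconds, or about 10^19 years

Deliberately weakened or shortened keys move ciphers from the second line toward the first.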


Tom’s Take

Encryption isn’t about one device. It’s not about one person. It’s not even about a group of people that live in a place with lines drawn on a map that believe a certain way. It’s about the need for people to protect their information from exposure and harm. It’s about the need to ensure that that information can’t be stolen or modified or rerouted. It’s not about setting precedents or looking good for people in the press.

Encryption comes down to one simple question: if you dropped your phone on the floor at DefCon or Black Hat, would you feel comfortable knowing it would take longer to break into than the average attacker would care to spend before moving on to an easier target or a more reliable method? If the answer to that question is “yes,” then perhaps you know which side of the encryption debate you’re on without even asking.

Don’t Touch My Mustache, Aruba!


It’s been a year since Aruba Networks became Aruba, a Hewlett-Packard Enterprise Company. It’s been an interesting ride for everyone involved so far. There’s been some integration between the HPE Networking division and the Aruba teams. There have been presentations and messaging and lots of other fun stuff. But it all really comes down to a policy of non-interference.

Don’t Tread On Me

HPE has done an admirable job of keeping their hands off of Aruba. It sounds almost comical. How many companies have acquired a new piece and then done everything possible to integrate it into their existing core business? How many products have had their identity obliterated to fit in with the existing model number structure?

Aruba isn’t just a survivor. It’s come out of the other side of this acquisition healthy and happy and with a bigger piece of the pie. Dominic Orr didn’t just get to keep his company. Instead, he got all of HPE’s networking division in the deal! That works out very well for them. It allows Aruba to help integrate the HPE networking portfolio into their existing product lines.

Aruba had a switching portfolio before the acquisition. But that was just an afterthought. It was designed to meet the insane requirements of the new Gartner Wired and Wireless Magic Quadrant. It was a product pushed out to meet a marketing need. Now, with the collaboration of both HPE and Aruba, the combined business unit has succeeded in climbing to the top of the mystical polygon and assuming a leading role in the networking space.

Could you imagine how terrible it would have been if instead of taking this approach, HPE had instead insisted on integration of the product lines and renumbering of everything? What if they had insisted that Aruba engineers, who are experts in their wireless field, were to become junior to the HPE wireless teams? That’s the kind of disaster that would have led to the fall of HPE Networking sooner or later. When good people get alienated in an acquisition, they flee for the hills as fast as their feet will carry them. One look at the list of EMC and VMware departures will tell you the truth of that.

You’re Very Welcome

The other thing that makes it an interesting ride is the way that people have reacted to the results of the acquisition. I can remember seeing how folks like Eddie Forero (@HeyEddie) were livid and worried about how the whole mess was going to fall apart. Having spoken to Eddie this week about the outcome one year later, he seems to be much, much more positive than he was in the past. It’s a very refreshing change!

Goodwill is something that is very difficult to replace in the community. It takes ages to earn and seconds to destroy. Acquiring companies that don’t understand the DNA of the company they have acquired run the risk of alienating the users of that solution. It’s important to take stock of how you are addressing your user base and potential customers regularly after you bring a new business into the fold.

HPE has done a masterful job of keeping Aruba customers happy by allowing Aruba to keep their communities in place. Airheads is a perfect example. Aruba’s community is a vibrant place where people share knowledge and teach each other how to best utilize solutions. It’s the kind of place that makes people feel welcome. It would have been very easy for HPE to make Airheads conform to their corporate policies and use their platforms for different purposes, such as a renewed focus on community marketing efforts. Instead, we have been able to keep these resources available to all to keep a happy community all around.


Tom’s Take

The title above actually holds a double meaning. You might think it refers to keeping your hands off of something. But “don’t touch my mustache” is also a mnemonic to help people remember the Japanese phrase dō itashimashite, which means “you’re welcome.”

Aruba has continued to be a leader in the wireless community and is poised to make waves in the networking community once more because HPE has allowed it to grow through a hands-off policy. Aruba customers and partners should be very glad that things have turned out as they have. Given the graveyard of failed company acquisitions over the years, Aruba and HPE are a great story indeed.

Slacking Off

A candlestick phone (image courtesy of Wikipedia)

There’s a great piece today on how Slack is causing disruption in people’s work habits. Slack is a program that has dedicated itself to getting rid of email, yet we now find ourselves mired in Slack team after Slack team. I believe the real issue isn’t with Slack but instead with the way that our brains are wired to handle communication.

Interrupt Driven

People get interrupted all the time. It’s a fact of life if you work in business, not just IT. Even if you have your head down typing away at a keyboard and you’ve closed out all other forms of distraction, a pop up from an email or a ringing or vibrating phone will jar your concentration out of the groove and force your brain to deal with this new intruder into your solitude.

That’s evolution working against you. When we were hunters and gatherers, our brains had to deal with external threats while we were focused on a task like stalking a mammoth or looking for sprouts on the forest floor. Our eyes even evolved to take advantage of this: your peripheral vision picks up movement first, followed by color, and only then can it discern the shape of an object. So when your email notifier slides out from the system tray or notification window, it triggers your primitive need to address the situation.

In the modern world we don’t hunt mammoths or forage for shoots any longer. Instead, our survival instinct has been replaced by the need to answer communications as fast as possible. At first it was returning phone calls before the end of the day. Then it became answering emails expediently. That changed into sending an immediate email response saying you had seen the email and were working on a reply. Then came instant messaging for corporate environments and the idea of “presence,” which lets everyone know what you’re doing and when you’re doing it. Which has led us to ever-presence: the idea that we’re never really not available.

Think about the last time you saw someone was marked unavailable in a chat window and you sent the message anyway. Perhaps you thought they would see the message the next time they logged in or returned to their terminal. Or perhaps you guessed that they had set their status as away to avoid distraction. Either way, the first thought you had was that this person wasn’t really gone and was available.

Instant messaging programs like Slack bridge the gap between synchronous communications channels like phone calls and asynchronous channels like email. In the past, we could deal with phone calls because they required the attention of both parties involved. A single channel was opened and it was very apparent that you were holding a conversation, at least until the invention of call waiting. On the other hand, email is asynchronous by nature because we can put all of our thoughts down in a single message over the course of minutes or even hours and send it into the void. Reliable delivery ensures that it will make it to the destination, but we don’t know when it will be read. We don’t know when the response will come or in what form. The receiving party may not even read your message!

The Need to Please

Think back to the last time you responded to an email. How often did you start your response with “Sorry for the delay” or some version of that phrase? In today’s society, we’ve become accustomed to instant responses to things. Amy Lewis (@CommsNinja) is famous for having an escalation process for reaching her:

  1. Text message
  2. Twitter DM
  3. Email
  4. Phone Call
  5. Anything else
  6. Voice mail

She prefers instant communication and rapid response. In a lot of cases, this is very crucial. If you need an answer to a question quickly there are ways to reach people for immediate reply. But the desire to have immediate response for all forms of communication is a bit much.

Our brains don’t help us in this matter. When we get an email or a communication from someone, we feel compelled to respond to it. It’s like a checkbox that needs to be checked. And so we will drop everything else to work on a reply even if it means we’re displeasing someone for a short time to please someone immediately.

Many of the time management systems that have been created to deal with massive email flows, such as GTD, are centered on the idea of dealing with things as they come in and pigeonholing them until they can be dealt with appropriately. By treating everything the same, you disappoint everyone equally until everything can be evaluated. There are cutouts for high-priority communications, but the methods themselves tell you to keep those exceptions small and rare so as not to disrupt the flow of things.

The key to having coherent and meaningful conversations with other people is the same online as it is in person. Rather than speaking before you think, you should take the time to consider your thoughts and respond with appropriately measured words. It’s easier to do this via email since there is built-in delay but it works just the same in instant message conversations as well. An extra minute of thought won’t make someone angry with you, but not taking that extra minute could make someone very cross with you down the road.


Tom’s Take

I agree with people who say Slack is great for small teams spread everywhere to help categorize thoughts and keep projects on track. It takes away the need for a lot of status update emails and digests of communications. It won’t entirely replace email for communications and it shouldn’t be seen that way. Instead, the important thing to realize about programs like Slack is that they will start pushing your response style toward quick replies with little information. You will need to make a conscious decision to push back a bit, making measured responses with more information instead of responding for the sake of responding. When you do, you’ll find that instant messaging tools augment your communications instead of complicating them.

Drowning in the Data of Things


If you saw the news coming out of Cisco Live Berlin, you probably noticed that Internet of Things (IoT) was in every other announcement. I wrote about the impact of the new Digital Ceiling initiative already, but I think that IoT is a bit deeper than that. The other thing that seems to go hand in hand with discussion of IoT is big data. And for most of us, that big data is going to be a big problem.

Seen And Not Heard

Internet of Things is about dumb devices getting smart. Think Flowers for Algernon. Only now, instead of them just being smarter they are also going to be very talkative too. The amount of data that these devices used to hold captive will be unleashed on something. We assume that the data is going to be sent to a central collection point or polled from the device by an API call or a program that is mining the data for another party. But do you know who isn’t going to be getting that data? Us.

IoT devices are going to be talking to providers and data collection systems and, in a lot of cases, each other. But they aren’t going to be talking directly to end users or IT staff. That’s because IoT is about making devices intelligent enough to start making their own decisions about things. Remember when SDN came out and everyone started talking about networks making determinations about forwarding paths and topology changes without human inputs? Remember David Meyer talking about network fragility?

Now imagine that’s not the network any more. Imagine it’s everything. Devices talking to other devices and making decisions without human inputs. IoT gives machines the ability to make a limited amount of decisions based on data inputs. Factory floor running a bit too hot for the milling machines to keep up? Talk to the environmental controls and tell it to lower the temperature by two degrees for the next four hours. Is the shelf in the fridge where the milk is stored getting close to the empty milk jug weight? Order more milk. Did a new movie come out on Netflix that meets your viewing guidelines? Add that movie to the queue and have the TV turn on when your phone enters the house geofence.

Think about those processes for moment. All of them are fairly easy conditional statements. If this, then do that. But conditional statements aren’t cut and dried. They require knowledge of constraints and combinations. And all that knowledge comes from data.
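
As a sketch of what that looks like in code, here is the factory floor rule from above. Every name, threshold, and API call below is a hypothetical stand-in; the point is that the rule itself is trivial, and the constraint data feeding it is the hard-won part:

    #include <stdio.h>

    /* Hypothetical sensor readings and constraint data for one rule. */
    struct floor_status {
        double temp_c;           /* current shop floor temperature */
        double mill_max_temp_c;  /* max temp the milling machines tolerate */
    };

    /* Stand-in for a real building-automation API call. */
    static void set_hvac_offset(double degrees_c, int hours)
    {
        printf("HVAC: offset %.1f C for %d hours\n", degrees_c, hours);
    }

    /* The easy part: if this, then do that. The hard part is the
       constraint (mill_max_temp_c) that makes the rule safe, and
       that knowledge comes from data. */
    static void check_floor(const struct floor_status *s)
    {
        if (s->temp_c > s->mill_max_temp_c)
            set_hvac_offset(-2.0, 4);
    }

    int main(void)
    {
        struct floor_status s = { .temp_c = 31.5, .mill_max_temp_c = 30.0 };
        check_floor(&s);
        return 0;
    }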

More Data, More Problems

All of that data needs to be collected somehow. That means transport networks are going to be stressed now that there are ten times more devices chatting on them. And a good chunk of those devices, especially in the consumer space, are going to be wireless. Hope your wireless network is ready for that challenge. That data is going to be transported to some data sink somewhere. As much as we would like to hope that it’s a collector on our network, the odds are much better that it’s an off-site collector. That means your WAN is about to be stressed too.

How about storing that data? If you are lucky enough to have an onsite collection system, you’d better start buying drives for it now. This is a huge amount of data. Nimble Storage has been collecting analytics data from their storage arrays for a while now. Every four hours they collect more data points than there are stars in the Milky Way. Makes you wonder where they keep it all. And how long are they going to keep it? Just like the crap in your attic that you swear you’re going to get around to using one day, big data and analytics platforms will keep every shred of information you want to keep for as long as you want it taking up drive space.

And what about security? Yeah, that’s an even scarier thought. Realize that many of the breaches we’ve read about in the past months have involved hackers having access to systems for extended periods of time and only getting caught after they have exfiltrated data from the system. Think about what might happen if a huge data sink is sitting around unprotected. Sure, terabytes worth of data may be noticed if someone tries to smuggle it out past the DLP device. But all it takes is a quick SQL query against the users table for social security numbers, a program to transpose those numbers into letters to evade the DLP scanner, and you can just email the file to yourself. Script a change from letters back to numbers and you’ve got a gold mine that someone left unlocked and lying around. We may be concentrating on securing the data in flight right now, but even the best armored car does no good if you leave the bank vault door open.
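
To see just how trivial that kind of evasion is, here is the digit-to-letter trick as a toy C function. The input is the famously invalid specimen SSN from the old Woolworth wallet cards; a DLP regex hunting for digit patterns never fires on the transposed output:

    #include <stdio.h>

    /* Toy illustration: map each digit to a letter (0->A ... 9->J).
       A scanner looking for SSN-shaped digit strings matches nothing;
       reverse the map on the far side and the data is intact. */
    static void transpose(const char *in, char *out)
    {
        while (*in) {
            *out++ = (*in >= '0' && *in <= '9') ? (char)('A' + (*in - '0')) : *in;
            in++;
        }
        *out = '\0';
    }

    int main(void)
    {
        char hidden[32];
        transpose("078-05-1120", hidden);  /* the famous specimen SSN */
        printf("%s\n", hidden);            /* prints AHI-AF-BBCA */
        return 0;
    }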


Tom’s Take

This whole thing isn’t all rain clouds and doom and gloom. IoT and big data represent a huge challenge for modern systems planning. We have the ability to unlock insight from devices that couldn’t tell us their secrets before. But we have to know how deep that pool will be before we dive in. We have to understand what these devices represent before we connect them. We don’t want our thermostats DDoSing our home networks any more than we want the milling machines on the factory floor coming to life and trying to find Sarah Connor. But the challenges we have with transporting, storing, and securing the data from IoT devices are no different than figuring out how to program on punch cards or how to download email from across the country. Technology will give us the key to solve those challenges. Assuming we can keep our heads above water.


Will Cisco Shine On?


Cisco announced their new Digital Ceiling initiative today at Cisco Live Berlin. Here’s the marketing part:

And here’s the breakdown of protocols and stuff:

Funny enough, here’s a presentation from just three weeks ago at Networking Field Day 11 on a very similar subject:

Cisco is moving into the Internet of Things (IoT) big time. They have at least learned that the consumer side of IoT isn’t a fun space to play in. With the growth of cloud connectivity and other things on that side of the market, Cisco knows that’s an uphill battle not worth fighting. It seems they’ve learned from Linksys and Flip Video. Instead, they are targeting the industrial side of the house. That means trying to break into some networks that are very well put together today, even if they aren’t exactly Internet-enabled.

Digital Ceiling isn’t just about the PoE lighting that was announced today. It’s a framework that allows all other kinds of dumb devices to be configured and attached to networks that have intelligence built in. The Constrained Application Protocol (CoAP) is designed to provide data about a great number of devices, not just lights. Yet lights are the launch “thing” for this line. And it could be lights out for Cisco.
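
For a sense of why CoAP suits constrained devices, the entire fixed header is four bytes. Here is a sketch of hand-framing a confirmable GET for a hypothetical /light resource per the RFC 7252 layout; a real deployment would use a CoAP library rather than building bytes by hand:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Build a minimal CoAP confirmable GET for the path "/light".
       RFC 7252: 4-byte fixed header, then options; no payload here. */
    static size_t coap_get_light(uint8_t *buf, uint16_t msg_id)
    {
        size_t n = 0;
        buf[n++] = 0x40;              /* Ver=1, Type=0 (CON), TKL=0 */
        buf[n++] = 0x01;              /* Code 0.01 = GET */
        buf[n++] = msg_id >> 8;       /* Message ID, network order */
        buf[n++] = msg_id & 0xFF;
        buf[n++] = 0xB5;              /* Option: delta=11 (Uri-Path), len=5 */
        memcpy(&buf[n], "light", 5);  /* the path segment itself */
        n += 5;
        return n;                     /* 10 bytes on the wire */
    }

    int main(void)
    {
        uint8_t pkt[16];
        size_t len = coap_get_light(pkt, 0x1234);
        for (size_t i = 0; i < len; i++)
            printf("%02X ", pkt[i]);
        printf("(%zu bytes)\n", len);
        return 0;
    }

Ten bytes to poll a light fixture is the whole point: the protocol was built for devices with kilobytes of RAM, not megabytes.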

A Light In The Dark

Cisco wants in on the possibility that PoE lighting will be a huge market. No other networking vendor that I know of is moving into the market; the other players are building automation companies with the manufacturing chops to try and pull off an entire connected infrastructure for lighting. But lighting isn’t something to take lightly (pun intended).

There’s a lot that goes into proper lighting planning. Locations of fixtures and power levels for devices aren’t accidents. They require a lot of planning and preparation. That planning and prep means there are teams of architects and others with formulas and specialized knowledge of where fixtures should go. Those people don’t work on the networking team. Any changes to the lighting plan are going to require input from these folks to make sure the illumination patterns don’t change. It’s not exactly like changing a lightbulb.

The other thing that is going to cause problems is the electricians’ union. These folks are trained and certified to install anything that has power running to it. They aren’t just going to step aside and let untrained networking people start pulling down light fixtures and putting up something new. Finding out that there are new 60-watt LED lights in a building that they didn’t put up is going to cause concern and require lots of investigation into whether it’s even legal in certain areas for non-union, non-certified employees to install things that only electricians are allowed to install today.

The next item of concern is the fact that you now have two parallel networks running in the building. Because everyone that I’ve talked to about PoE Lighting and Digital Ceiling has had the same response: Not On My Network. The switching infrastructure may be the same, but the location of the closets is different. The requirements of the switches are different. And the air gap between the networks is crucial to avoid any attackers compromising your lighting infrastructure and using it as an on-ramp into causing problems for your production data network.

The last issue in my mind is the least technically challenging, but the most concerning from the standpoint of longevity of the product line – Where’s the value in PoE lighting? Every piece of collateral I’ve seen and every person I’ve heard talk about it comes back to the same points. According to the experts, it’s effectively the same cost to install intelligent PoE lighting as it is to stick with traditional offerings. But that “effective” word makes me think of things like Tesla’s “Effective Lease Payment”.

By saying “effective,” what Cisco is telling you is that the up-front cost of a Digital Ceiling deployment is likely to be expensive. That large initial number comes down through things like electricity cost savings and increased efficiencies, or any one of a number of clever things that we tell each other to pretend that it doesn’t cost lots of money to buy new things. It’s important to note that you should evaluate the cost of a Digital Ceiling deployment completely on its own before you start factoring in any kind of cost savings that come months or years from now.


Tom’s Take

I’m not sure where IoT is going. There’s a lot of learning that needs to happen before I feel totally comfortable talking about the pros and cons of having billions of devices connected and talking to each other. But in this time of baby steps toward solutions, I can honestly say that I’m not entirely sold on Digital Ceiling. It’s clever. It’s catchy. But it ultimately feels like Cisco is headed down a path that will lead to ruin. If they can get CoAP working on many other devices and start building frameworks and security around all these devices then there is a chance that they can create a lasting line of products that will help them capitalize on the potential of IoT. What worries me is that this foray into a new realm will be fraught with bad decisions and compromises and eventually we’ll fondly remember Digital Ceiling as yet another Cisco product that had potential and not much else.

The Myth of Chargeback


Cash register by the National Cash Register Co., Dayton, Ohio, United States, 1915.

Imagine a world where every aspect of a project gets charged correctly. Where the massive amount of compute time for a given project gets attributed to the proper department and billed correctly. Where resources can be allocated and associated with the projects that need them. It’s an exciting prospect, isn’t it? I’m sure that at least one person out there said “chargeback” when I started mentioning all these lofty ideas. I would have agreed with you before, but I don’t think that chargeback actually exists in today’s IT environment.

Taking Charge

The idea of chargeback is very alluring. It’s been on slide decks for the last few years as a huge benefit to the analytics capabilities in modern converged stacks. By collecting information about the usage of an application or project, you can charge the department using that resource. It’s a bold plan to change IT departments from cost centers to revenue generators.

IT is the red-headed stepchild of the organization. IT is necessary for business continuity and function. Nothing today can run without computers, networking, or phones. However, we aren’t a visible part of the business. Much like the plumbers and landscapers around the organization, IT’s job is to make things happen and not be seen. The only time users acknowledge IT is when something goes wrong.

That’s where chargeback comes into play. By charging each department for their usage, IT can seek to ferret out extraneous costs and reduce usage. Perhaps the goal is to end up a footnote in the weekly management meeting where Brian is given recognition for closing a $500,000 deal and IT gets a shout-out for figuring out marketing was using 45% more Exchange server space than the rest of the organization. Sounds exciting, doesn’t it?

In theory, chargeback is a wonderful way to keep departments honest. In practice, no one uses it. I’ve talked to several IT professionals about chargeback. About half of them chuckled when I mentioned it. Their collective experience can best be summarized as “They keep talking about doing that around here but no one’s actually figured it out yet.”

The rest have varying levels of implementation. The most advanced ones that I’ve spoken to use chargeback only for physical assets in a project. If Sales needs a new server and five new laptops for Project Hunter, then those assets are charged back correctly to the department. This keeps Sales from asking for more assets than they need and hoping that the costs can be buried in IT somewhere.

No one that I’ve spoken to is using chargeback for the applications and software in an organization. We can slice the pie as finely as we want when allocating assets that you can touch, but when it comes to figuring out how to make Operations pay their fair share of the bill for the new CRM application, we’re stuck. We can pull analytics all day long but we can’t seem to get them matched to the right usage.

Worse yet, politics plays a big role in chargeback. If a department head disagrees with the way their group is being characterized for IT usage, they can go to their superiors and talk about how critical their operation is to the business and how they need to be able to work without the restrictions of being billed for their usage. A memo goes out the next day and suddenly the department vanishes from the records with an admonishment to “let them do their jobs”.

Cloud Charges

The next thing that always comes up is public cloud. Chargeback proponents are waiting for widespread adoption of public cloud. That’s because the billing method for cloud is completely democratic. Everyone pays the price no matter what. If an AWS instance is running, someone needs to pay for it. If those systems can be isolated to a specific application or department, then the chargeback takes care of itself. Everyone is happy in the end. IT gets to avoid blame for not producing and the other departments get their resources.
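
When the tagging is honest, the math really is that mechanical. Here is a sketch of summing per-department cloud charges from tagged instance-hours, with all departments, hours, and rates made up for illustration:

    #include <stdio.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical usage records: instance-hours tagged by department. */
    struct usage { const char *dept; double hours; double rate_per_hour; };

    int main(void)
    {
        struct usage bill[] = {
            { "Sales",      720.0, 0.10 },   /* one instance, all month */
            { "Marketing", 1440.0, 0.10 },   /* two instances, all month */
            { "Sales",      200.0, 0.45 },   /* a bigger box, part time */
        };
        const char *depts[] = { "Sales", "Marketing" };

        /* Sum charges per department: the whole chargeback "algorithm." */
        for (size_t d = 0; d < sizeof(depts) / sizeof(depts[0]); d++) {
            double total = 0.0;
            for (size_t i = 0; i < sizeof(bill) / sizeof(bill[0]); i++)
                if (strcmp(bill[i].dept, depts[d]) == 0)
                    total += bill[i].hours * bill[i].rate_per_hour;
            printf("%-10s $%.2f\n", depts[d], total);
        }
        return 0;
    }

The code is trivial; the organizational work is making sure every instance actually carries an accurate department tag, because anything untagged lands right back in IT’s column.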

Of course, the real problem comes when the bills start piling up. Cloud isn’t cheap. It exposes the dirty little secret that sunk-cost hardware has a purpose. When you bill by the CPU hour, you’ll find that a lot of systems sit idle. Management will come unglued trying to figure out why cloud costs so much. The commercials and sales pitches said we would save money!

Then the politics start all over again. IT gets blamed because cloud was implemented wrong. No protesting will fix that. Then come the rapid cost-cutting measures: shutting off systems not in use, databases losing data capture during down periods, people unable to access systems in off hours. Work falls off and the cloud project gets scrapped for the old, cheaper way.

Cloud is the model that chargeback should follow. But those numbers need to be correctly attributed. Just pushing a set of usage statistics down without context will lead to finger-pointing and scrambling for explanations. Instead, we need to provide context from the outset. Maybe Marketing used an abnormally high amount of IT resources last week. But did it have anything to do with the end of the quarter? Can we track that usage back to higher profits from sales? That context is critical to figuring out how usage statistics affect things overall.


Tom’s Take

Chargeback is the stick that we use to threaten organizations to shape up and fly right. We make plans to implement a process to track all the evil things hidden in a department, and by the time the project is ready to kick off we find that costs are down and productivity is up. That becomes the new baseline, and we go on about our day thinking about how chargeback would have let us catch it before it became a problem.

In reality, chargeback is a solution that will take time to implement and cost money and time to get right. We need data context and allocation. We need actionable information and the ability to coordinate across departments. We need to know where the charges are coming from and why, not just complain about bills. And there can be no exceptions. That’s the only way to put chargeback in charge.


We Are Number Two!


In my old IT life I once took a meeting with a networking company. They were trying to sell me on their hardware and get me to partner with them as a reseller. They were telling me how they were the number two switching vendor in the world by port count. I thought that was a pretty bold claim, especially when I didn’t remember seeing their switches at any of my deployments. When I challenged this assertion, I was told, “Well, we’re really big in Europe.” Before I could stop my mouth from working, I sarcastically replied, “So is David Hasselhoff.” Needless to say, we didn’t take this vendor on as a partner.

I tell this story often when I go to conferences and it gets laughs. As I think more and more about it the thought dawns on me that I have never really met the third best networking vendor in the market. We all know who number one is right now. Cisco has a huge market share and even though it has eroded somewhat in the past few years they still have a comfortable lead on their competitors. The step down into the next tier of vendors is where the conversation starts getting murky.

Who’s Number Two???

If you’ve watched a professional sporting event in the last few years, you’ve probably seen an unknown player in close up followed by an oddly specific statistic. Like Bucky has the most home runs by a left handed batter born in July against pitchers that thought The Matrix should have won an Oscar. When these strange stats are quoted, the subject is almost always the best or second best. While most of these stats are quoted by color announcers trying to fill airtime, it speaks to a larger issue. Especially in networking.

No one wants the third best anything. John Chambers is famous for saying during every keynote, “If Cisco can’t be number one or number two in a market, we won’t be in it.” That’s easy to say when you’re the best. But how about the market one step down from there? Everyone is trying to position themselves as the next best option in some way or another. Number two by port count. Number two by ports sold (which is somehow a different metric). Number two by units shipped or amount sold or some other slightly different phrasing, all in order to be seen as the viable alternative.

I don’t see this problem a lot in other areas. Servers have a clear first, second, and third order. Storage has a lot of tiering. Networking, on the other hand, doesn’t have a third best option. Yet there are way more than three companies doing networking. Brocade, HPE, Extreme, Dell, Cumulus Networks, and many more if you want to count wireless companies with switching gear for the purpose of getting into the Magic Quadrant. No one wants to be seen as the next best, next best thing.

How can we fix this? Well, for one thing we need impartial ranking. No more magical polygons and reports that take seventeen pages to say “It depends”. There are too many product categories that you can slice your solution into to be the best. Let’s get back to the easy things. Switches are campus or data center. Routers are campus or service provider. Hardware falls here or there. No unicorns. No single-product categories. If you’re the only product of your kind you are in the wrong place.


Tom’s Take

Once again, I think it’s time for a networking Consumer Reports type of publication. Or maybe something like Storage Review. We need someone to come in and tell vendors that they are the third or fourth best option out there by a few simple, transparent metrics. Yes, it’s very probable that said vendors would just ignore the ranking and continue building their own marketing bubble around the idea of being the second best switching platform for orange switches sold to school districts not named after presidents. Or perhaps finding out they are behind the others will spur people inside the company into actually getting better. It’s a faint hope, but hey. The third time’s a charm.