HP Is Buying Aruba. Who’s Next?


Sometimes all it takes is a little push. Bloomberg reported yesterday that HP is in talks to buy Aruba Networks for their wireless expertise. The deal is contingent upon some other things, and the article made sure to throw up disclaimers that it could still fall through before next week. But the people I've talked to (who are not authorized to comment and wouldn't know the official answer anyway) have all said this is a done deal. We'll likely hear the final official confirmation on Monday afternoon, ahead of Aruba's big Atmosphere (née Airheads) conference.

R&D Through M&A

This is a shot in the arm for HP. Their Colubris-based AP lineup has been sorely lacking in current-generation wireless technology, let alone next-gen potential. The featured 802.11ac APs on their networking site are OEMed directly from Aruba. They've been hoping to play the OEM game for a while and see where the chips fall. Buying Aruba gives them second place in the wireless market behind Cisco overnight. It also fixes the most glaring issue with Colubris – R&D. HP hasn't really been developing their wireless portfolio. Some had even thought it was gone for good. This immediately puts them back in the conversation.

More importantly for HP, this acquisition cuts many of their competitors' wireless plans off at the knees. Dell, Juniper, Brocade, Alcatel-Lucent, and many others OEM from Aruba or have a deep partnership agreement. By wrapping up the entirety of Aruba's business, HP has dealt a blow to the single-source vendors playing in the wireless market. And this is going to lead to some big changes relatively soon.

The Startup Buzz

Dell is perhaps the most impacted by this announcement. A very large portion of their wireless portfolio was Aruba. They sold APs, controllers, and even ClearPass through their channels (with the names filed off, of course). Now they are back to square one. How are they going to handle the most recent deals? What are their support options?

A little thought exercise with my friend Josh Williams (@JSW_EdTech) turned up a few possibilities:

  1. Dell forces HP to buy out all the support contracts for Dell/Aruba customers. That makes sense for Dell, but it will turn a lot of customers against them, especially when HP lets those customers know the reasons why.
  2. Dell agrees to release the developments they’ve done on the platform to HP in return for HP taking the support business. Quiet and clean. Which is why it likely won’t happen.
  3. Dell pays HP an exorbitant amount of money to take the support contracts. This gives HP the capital to take on all those new support contracts and gives Dell an exit to rebuild. This is probably what HP wants, but could end up sinking the deal.

Dell got burned, plain and simple. They likely could have purchased Aruba months ago and solidified the relationship. Instead, they are now looking for a new partner. However, I don't think they are going to get burned again. Rather than shopping for a friend, they are going to be shopping for an acquisition. My money has always been on Aerohive. They have an existing relationship. The Aerohive controller-less cloud model fits Dell's new strategies. And they would be a much cheaper pickup than Aruba. There is precedent for Dell skipping the big name and picking up a smaller company that's a better fit. It's a hard pill to swallow, but it gives Dell the chance to move forward with a lasting relationship.

Softwarely Defined

Brocade is a line-of-business partner of Aruba. They've only recently gotten involved, since Motorola shut down their WLAN business. That's a good thing for them: it means they can exit from their position without being significantly affected. It does leave them with a quandary of where to go next.

The first choice would be to go back to the Motorola relationship, now in the form of Zebra Technologies. Zebra inherited quite a large portion of the WLAN space from Motorola, but they’ve been keeping rather quiet about it. Are they angling to be more of a support organization for existing installs? Or are they waiting for a big splash announcement to get back in the game? Partnering with Brocade would give them that announcement given the elevated profile Brocade has today.

Brocade's other option would be to go down the SDN road. The plan for a while has been to embrace SDN, OpenFlow, and all things software defined. The natural target for this would be Meru Networks. Meru has also been embracing SDN of late, and they had a nice event last year showcasing their advances. Brocade could bolster that SDN knowledge while picking up a good wireless company to augment their enterprise business.

Permission To Retire

The odd company out is Juniper. I’ve heard that they were involved at first in trying to acquire Aruba, but when you’re betting against HP’s pockets you will lose in the long run. Their other problem is Elliott Management, everyone’s new favorite “activist investor”.

Elliott has made no secret that they see Juniper's value in the service provider market. As far back as last year, Elliott has been pushing Juniper to carve off ancillary businesses, including security, enterprise, and wireless. Juniper has already officially ended sales of its Trapeze-based products. Why would Elliott let them buy another wireless company so soon after getting rid of the last one? Even as successful as Aruba is, Elliott would see it as another distraction. And when someone that active is calling the shots, you can't go against them, lest you end up unemployed.

This is the end for Juniper’s wireless aspirations. That’s not a bad thing, necessarily. This gives them the impetus needed to focus on the service provider market. It also gives them a smaller enterprise switching portfolio to package up and sell off should that pound of flesh be necessary to sate Elliott as well. Time will tell.

Everyone Else

Any other companies with Aruba relationships are either dipping their toes in the wireless waters or don't care enough to worry about the impact. It will be an easy matter for companies like Alcatel-Lucent to go out and find a new OEM partner, likely someone like Extreme Networks or Ruckus. Those companies are making great technology and will be happy to supply the APs that customers need. Showing off their technology will also give them great in-roads with customers that might not have been on their radar before.


Tom’s Take

It's going to be an exciting time in the wireless space. HP's acquisition is going to start the dominoes falling for other companies to buy into the wireless space as well. When the dust settles, there will be new number twos and number threes in the market. It also clears the middle of the space for up-and-comers to grow. Cisco is going to stay number one for a while, and HP will be number two when this deal closes. But until we see who gets purchased and who partners with whom, it's tough to say who the clear winners will be. Make sure you've got your popcorn ready, though. Because this isn't over yet. Not by a long shot.

 

Cumulus Networks Could Be The New Microsoft


When I was at HP Discover last December, I noticed a few people running around wearing Cumulus Networks shirts. That had me a bit curious, as Cumulus isn't usually on the best of terms with traditional networking vendors unless there's a partnership involved. After some digging, I found out that HP would soon be announcing a "britebox" branded whitebox switch running Cumulus Linux. I wrote a post vaguely hinting at this in as much detail as I dared leak out.

No surprise, then, that HP has formally announced their partnership with Cumulus. This is a great win for HP in the long run, as it gives customers the option to work with an up-and-coming network operating system (NOS) alongside HP support and hardware. Note that the article mentions a hardware manufacturing deal with Accton, but I wouldn't be at all surprised to learn that Accton had already been making a large portion of HP's switching line. Just a different sticker on this box.

Written Once, Runs Everywhere

The real winner here is Cumulus. They have partnered with Dell and HP to bring their NOS to some very popular traditional network vendor hardware. Given that they continue to push Cumulus Linux on whitebox hardware as well, they are positioning themselves the same way Microsoft did back in the 1980s when the IBM PC clone market really started to take off.

Microsoft's master stroke wasn't building an empire around a GUI. It was creating software that ran on almost every variation of device on the market. That common platform provided consistency for programmers the world over. You never had to worry about what OS was running on an IBM PC clone. You could be almost certain it was MS-DOS. In fact, that commonality of platform is what enabled Microsoft to build their GUI on top. While DOS was eventually phased out in favor of the Windows NT kernel, the legacy of DOS still remains on the command line.

Hardware comes and goes every year, even with device vendors that are very tied to their hardware, like Apple. Look at the hardware differences between the first iPhone and the latest iPhone 6 Plus. They are almost totally alien to each other. Then look at the operating system running on each of them. They are remarkably similar, which is especially amazing given the eight-year gap between them. That consistency of experience has allowed app developers to be comfortable writing apps that will work for more than one generation of hardware.

Bash Brothers

Cumulus is positioning themselves to do something very similar. They are creating a universal NOS interface for switches. Rather than pinning their hopes on the aging Cisco IOS CLI (and avoiding a potential lawsuit in the process), Cumulus has decided to go with Bash. Bash is nearly universal for those who work on Linux, and if you're an old-school UNIX admin it doesn't take long to adapt to it either. That common platform means you have a legion of trained engineers and architects who already know how to use your system.

More importantly, you have a legion of people who know how to write software to extend your system. You can create Bash scripts and programs to do almost anything. Cumulus even created ifupdown2 to help network admins simplify network interface administration. If you can extend the interface of a networking device with relative ease, you've started down the path to nearly unlimited expansion.
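As a tiny illustration of what that opens up, here is a minimal sketch that assumes nothing beyond the standard Linux sysfs tree. It is not Cumulus-specific code or any vendor API; it simply walks /sys/class/net and reports every interface and its link state. The same script runs unmodified on a laptop or on a Linux-based switch, which is exactly the point.

```python
#!/usr/bin/env python3
"""Minimal sketch: list every network interface and its link state using only
the standard Linux sysfs tree. Nothing here is vendor-specific; that's the
point: a Linux-based NOS can be scripted with ordinary Linux knowledge."""

import os

SYSFS_NET = "/sys/class/net"  # standard location for network devices on Linux


def interface_states():
    """Return a {name: operstate} mapping for every interface the kernel sees."""
    states = {}
    for name in sorted(os.listdir(SYSFS_NET)):
        try:
            with open(os.path.join(SYSFS_NET, name, "operstate")) as f:
                states[name] = f.read().strip()
        except OSError:
            states[name] = "unknown"
    return states


if __name__ == "__main__":
    for iface, state in interface_states().items():
        print(f"{iface:12s} {state}")
```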

Think about the number of appliances you use every day that you never know are running Linux. I said previously that Linux won the server war because it is everywhere now and yet you don’t know it’s Linux. In the same way, I can see Cumulus negotiating to get the software offered as an option for both whitebox and britebox switches in the future. Once that happens, you can start to see the implications. If developers are writing apps and programs to extend Cumulus Linux and not the traditional switch OS, consumers will choose the more extensible option if everything else is equal. That means more demand for Cumulus. Which pours more resources into development. Which is how MS-DOS took over the world and led to Windows domination, while OS/2 died a quiet, protracted death.


Tom’s Take

When I first tweeted my thoughts about Cumulus Networks and their potential rise like the folks in Redmond, there was a lot of pushback. People told me to think of them more like Red Hat instead of Microsoft. While their business model does indeed track more closely with Red Hat, I think much of this pushback comes from the negative connotations we have with Windows. Microsoft has essentially been the only game in the x86 market for a very long time. People forget what it was like to run BeOS or early versions of Slackware. Microsoft had almost total domination outside the hobby market.

Cumulus doesn’t have to unseat Cisco to win. They don’t even have to displace the second or third place vendor. By signing deals with as many people as possible to bring Cumulus Linux to the masses, they will win in the long run by being the foundation for where networking will be going in the future.

Editor Note:  A previous version of this article incorrectly stated that Cumulus created ifupdown, when in fact they created ifupdown2.  Thanks to Matt Stone (@BigMStone) and Todd Craw (@ToddMCraw) for pointing this out to me.

Hypermyopia In The World Of Networking


The more debate I hear over protocols and product positioning in the networking market today, the more I realize that networking has a very big problem with myopia when it comes to building products. Sometimes that’s good. But when you can’t even see the trees for the bark, let alone the forest, then it’s time to reassess what’s going on.

Way Too Close

Sit down in a bar in Silicon Valley and you’ll hear all kinds of debates about which protocols you should be using in your startup’s project. OpenFlow has its favorite backers. Others say things like Stateless Transport Tunneling (STT) are the way to go. Still others have backed a new favorite draft protocol that’s being fast-tracked at the IETF meetings. The debates go on and on. It ends up looking a lot like this famous video.

But what does this have to do with the product? In the end, do the users really care which transport protocol you used? Is the forwarding table population mechanism of critical importance to them? Or are they more concerned with how the system works? How easy it is to install? How effective it is at letting them do their jobs?

The hypermyopia problem makes the architecture and engineering people on these projects focus on the wrong set of issues. They think that an elegant and perfect solution to a simple technical problem will be the panacea that sells their product by the truckload. They focus on such minute sets of challenges that they often block out the real problems their product is going to face.

Think back to IBM in the early days of the Internet. Does anyone remember Blue Lightning? How about the even older MCA Bus? I bet if I said OS/2 I’d get someone’s attention. These were all products that IBM put out that were superior to their counterparts in many ways. Faster speeds, better software architecture, and even revolutionary ideas about peripheral connection. And yet all of them failed miserably in one way or another. Was it because they weren’t technically complete? Or was it because IBM had a notorious problem with marketing and execution when it came to non-mainframe computing?

Take A Step Back

Every writer in technology uses Apple as a comparison company at some point. In this case, you should take a look at their simplicity. What protocol does FaceTime use? Is it SIP? Or H.264? Does it even matter? FaceTime works. Users like that it works. They don’t want to worry about traversing firewalls or having supernodes available. They don’t want to fiddle with settings and tweak timers to make a video call work.

Enterprise customers are very similar. Think about WAN technologies for a moment. Entire careers have been built around finding easy ways to connect multiple sites together. We debate Frame Relay versus ATM. Should we use MPLS? What routing protocol should we use? The debates go on and on. Yet the customer wants connectivity, plain and simple.

At the recent Networking Field Day 9, two companies that specialize in software-defined WAN (SD-WAN) had a chance to present. VeloCloud and CloudGenix both showcased their methods for creating WANs with very little need for user configuration. The delegates were impressed that the companies' technologies "just worked". No tuning timers. No titanic arguments about MPLS. Just simple wizards and easy configuration.

That’s what enterprise technology should be. It shouldn’t involve a need to get so close to the technology that you lose the big picture. It shouldn’t be a series of debates about which small technology choice to make. It should just work. Users that spend less time implementing technology spend more time using it. They spend more time happy with it. And they’re more likely to buy from you again.


Tom’s Take

If I hear one more person arguing the merits of their favorite technology again, I may throw up. Every time someone comes to me and tells me that I should bet on their horse in the race because it is better or faster or more elegant, I want to grab them by the shoulders and shake some sense into them. People don't buy complicated things. People hate elegant but overly difficult systems. They just want things to work at the end of the day. They want to put as little thought into a system as they can and still maximize the return they get from it. If product managers spent the next iteration of design focusing on ease of use instead of picking the perfect tunneling protocol, I think they would see a huge return on their investment in the long run. And that's something you can see no matter how close you are.

 

The Packet Flow Duality


Quantum physics is a funny thing. It seeks to explain the physical world by breaking everything down into the most basic unit possible. That works for a lot of the observable universe. But when it comes to light, the simple picture breaks down. Thanks to experiments and observations, most scientists understand that light isn't just a wave, and it's not just a collection of particles either. It's both. This concept is fundamental to understanding how light behaves. But can it also explain how data behaves?

Moving Things Around

We tend to think about data as a series of discrete data units being pushed along a path. While these units might be frames, packets, or datagrams depending on the layer of the OSI model that you are operating at, the result is still the same. A single unit is evaluated for transmission. A brilliant post from Greg Ferro (@EtherealMind) sums up the forwarding thusly:

  • Frames being forwarded by MAC address lookup occur at layer 2 (switching)
  • Packets being forwarded by IP address lookup occur at layer 3 (routing)
  • Data being forwarded at higher levels is a stream of packets (flow forwarding)

It's simple when you think about it. But what makes it a much deeper idea is that lookup at layers 2 and 3 requires a lot more processing. Each packet must be evaluated to be properly forwarded. The forwarding device doesn't assume that the destination is the same for a group of similar packets. Each one must be evaluated to ensure it arrives at the proper location. By focusing on the discrete nature of the data, we are forced to expend a significant amount of energy making sense of it. As anyone who studied basic packet switching can tell you, several tricks were invented to speed up this process. Anyone remember store-and-forward versus cut-through switching?

Flows behave differently. They contain state. They have information that helps devices make intelligent forwarding decisions. Those decisions don't have to be limited by destination MAC or IP addresses. They can be based on labels or VLANs or other pieces of identifying information, anything an application uses to talk to another device, like a DNS entry. That allows us to make a single forwarding decision per flow and implement it quickly and efficiently. Think about a stateful firewall. It works because the information for a given packet stream (or flow) can be programmed into the device. The firewall no longer makes a policy decision for every individual packet; it evaluates the flow once and applies that decision to the rest of the packets in it.

Consequently, stateful firewalls also give us a peek at how flows are processed. Rather than a CAM table or an ARP table, we have a group of rules and policies. Those policies can say "given a group of packets in a flow matching these characteristics, execute the following actions". That's a far cry from trying to figure out where each packet goes.
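To make the contrast concrete, here is a toy sketch. Nothing in it is a real device's data structure or API: the per-packet path consults a table for every single frame, while the flow path evaluates a policy once on the first packet and reuses the installed decision for every later packet in the same conversation.

```python
"""Toy contrast between per-packet lookup and flow-based forwarding.
The tables and the policy here are illustrative only, not any vendor's API."""

from typing import Dict, NamedTuple


class FlowKey(NamedTuple):
    src_ip: str
    dst_ip: str
    proto: str
    src_port: int
    dst_port: int


# Per-packet model: every frame hits the lookup table individually.
mac_table = {"aa:bb:cc:dd:ee:01": "port1", "aa:bb:cc:dd:ee:02": "port2"}


def forward_frame(dst_mac: str) -> str:
    return mac_table.get(dst_mac, "flood")  # a full lookup for every frame


# Flow model: the first packet of a conversation installs a decision;
# every later packet in the same flow simply reuses it.
flow_table: Dict[FlowKey, str] = {}


def forward_in_flow(key: FlowKey) -> str:
    action = flow_table.get(key)
    if action is None:                       # flow "miss": evaluate policy once
        action = "forward" if key.dst_port in (80, 443) else "drop"
        flow_table[key] = action             # install state for the whole flow
    return action


if __name__ == "__main__":
    print(forward_frame("aa:bb:cc:dd:ee:02"))               # lookup per frame
    key = FlowKey("10.0.0.5", "192.0.2.10", "tcp", 49152, 443)
    print(forward_in_flow(key))                              # first packet: policy evaluated
    print(forward_in_flow(key))                              # later packets: decision reused
```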

It’s All About Scale

A single drop of water is discrete. Just like a single data packet, it represents an atomic unit of water. On its own, a single drop of water does little good. It's only when those drops start to come together that their usefulness becomes apparent. Think of a river or a firehose. Those groups of droplets have a vector. They can be directed somewhere to accomplish something, like putting out a fire or cutting a channel across the land.

Flows should be the atomic unit that we base our networking decisions upon. Flows don't require complex processing on a per-unit basis. Flows carry additional information above and beyond a 48-bit MAC address or an IP table entry. Flows can be manipulated and programmed. They can have policies applied. Flows can scale to great heights. Packets and frames are forever hampered by the per-unit work necessary to deliver them to the proper locations.

Data is simultaneously a packet and a flow. We can't separate the two. What we can do is change our frame of reference for operations. Just like the experiments with light, we choose one aspect of the duality to work with until the other aspect is needed. Light can be treated like a wave the majority of the time; it's only when something like the photoelectric effect shows up that our reference must change. In the same way, data should be treated like a flow in the majority of cases. Only when the very basics of packet/frame/datagram forwarding come into play should we abandon our flow focus and treat the data as a group of discrete packets.


Tom’s Take

The idea of data flows isn't new. Neither is treating flows as the primary unit of forwarding; that's what OpenFlow has been doing for quite a while now. What makes this exciting is when people with new networking ideas start using the flow as the atomic unit for decisions. When you remove the need to do packet-by-packet forwarding and instead focus on the flow, you gain a huge insight into the world around the packet. It's not much of a stretch to think that the future of networking isn't as concerned with the switching of frames or the routing of packets. Instead, it's the forwarding of a flow of packets that will be exciting to watch. As long as you remember that data can be both packet and flow, you will have taken your first step into a larger world of understanding.

 

NBase-ing Your Wireless Decisions


Copper is heavy. I'm not talking about its atomic weight of 63 or the fact that bundles of it can sag ceiling joists. I'm talking about the fact that copper has inertia. It's difficult to install and even more difficult to replace. Significant expense is incurred when people want to run new lines through a building. I never really understood how expensive a proposition that was until I went to work for a company that ran copper lines.

Out of Mind, Out of Sight

According to a presentation from Peter Jones of Cisco that we saw at Tech Field Day Extra at Cisco Live Milan, Category 5e and 6 UTP cabling still has a significant install base in today's organizations. That makes sense when you consider that 5e and 6 are the minimum for gigabit Ethernet. Once we hit the 1Gig mark, desktop bandwidth never really increased. Ten gigabit Ethernet over UTP is never going to take off outside the data center. The current limitations of 10Gig over Cat 6 make it impractical for desktop connectivity. With a practical limit of around 50 meters, you almost have to be on top of the IDF closet to get the best speeds.

There’s another reason why desktop connectivity stalled at 1Gig. Very little data today gets transferred back and forth between desktops across the network. With the exception of some video editing or graphics work, most data is edited in place today. Rather than bringing all the data down to a desktop to make changes or edits, the data is kept in a cloud environment or on servers with ample fast storage space. The desktop computer is merely a portal to the environment instead of the massive editing workstation of the past. If you even still have a desktop at all.

The vast majority of users today don't care how fast the wire coming out of the wall is. They care more about the speed of the wireless in the building. The shift to mobile computing (laptops, tablets, and even phones) has spurred people to spend as little time as possible anchored to a desk. Even those that want to use large monitors or docking stations with lots of peripherals prefer to connect via wireless so they can grab things and go to meetings or off-site jobs.

Ethernet has gone from a "must have" to an infrastructure service supporting wireless access points. Where one user in the past could have been comfortable with a single gigabit cable, that same cable now supports tens of users via an access point. With sub-gigabit technologies like 802.11n and 802.11ac Wave 1, the need for faster uplinks is moot. Users will hit overhead caps in the protocol long before they bump into the theoretical limit of a single copper run. But with 802.11ac Wave 2 quickly coming up on the horizon and even faster technologies being cooked up, faster copper uplinks are no longer a pipe dream.

All Your NBase

Peter Jones is the chairman of the NBASE-T Alliance. The purpose of this group is to decide on a standard for 2.5 gigabit Ethernet. Why such an odd number? Long story short: it comes from splitting a 10 gigabit PHY into fourths and delivering that along a single Cat 5e/6 run. That means it can be used with existing cable plants. It means we can deliver more power along the wire to an access point that can't run on 802.3af power and needs 802.3at (or more). It means we don't have to rip and replace cable plants today and incur double the costs for new technology.
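For the arithmetic-minded, the back-of-the-envelope version of that "odd number" story is below. The quarter-rate figure comes straight from the split described above; the 5 Gb/s line is the companion rate the NBASE-T work also targets, aimed at Cat 6.

```python
# Back-of-the-envelope: where the "odd" 2.5 number comes from.
# Take the full 10GBASE-T PHY rate and run it at a fraction of the signaling
# speed so that installed Cat 5e/6 can carry it over a full-length run.
TEN_GBASE_T = 10.0  # Gb/s

print(TEN_GBASE_T / 4)  # 2.5 Gb/s: the rate aimed at existing Cat 5e
print(TEN_GBASE_T / 2)  # 5.0 Gb/s: the companion rate aimed at Cat 6
```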

NBASE-T represents a good solution. By changing modulations and pumping Cat 5e and 6 to their limits, we can forestall a cable plant armageddon. IT departments don't want to hear that more cables are needed. They've spent the past five years in a tug-of-war between the people saying you need 3–4 drops per user and the faction claiming that wireless is going to change all that. The wireless faction won that argument, as this video from last year's Aruba Airheads conference shows. The idea of a totally wireless office building used to be a fantasy. Now it's being done by a few and strongly considered by many more.

NBASE-T isn't a final solution. The driver for 2.5Gig Ethernet isn't going to outlast the current generation of technology. Beyond 802.11ac, wireless will jump toward 10 gigabit speeds to support primary connectivity for bandwidth-hungry users. Copper cabling will need to be updated to support this, as fiber can't deliver power and is much too fragile for some of the deployment scenarios I've seen. NBASE-T will get us to the exhaustion point of our current cable plants. When the successor to 802.11ac is finally ratified and enters general production, it will be time for IT departments to rip out their old cable infrastructure and replace it with fewer runs designed to support wireless deployments, not wired users.


Tom’s Take

Peter's talk at Tech Field Day Extra was enlightening. I'm not a fan of the proposed 25Gig Ethernet spec; I don't see the need it's addressing. I can see the need for 2.5Gig, on the other hand. I just don't see it as the long-term future. If we can keep the cable plant going for just a couple more years, we can spend that money on better wireless coverage for our users until the next wave is ready to take us to 10Gig and beyond. Users know what 1Gig connectivity feels like, especially if they are forced down to 100Mbps or below. NBASE-T gives them the ability to keep those fast speeds with 802.11ac Wave 2. Adopting this technology has benefits for the foreseeable future. At least until it's time to move to the next big thing.

More Bang For Your Budget With Whitebox


As whitebox switching starts coming to the forefront of the next buying cycle for enterprises, decision makers are naturally wondering about the advantages of buying cheaper hardware. Is a whitebox switch going to provide more value for me than buying something from an established vendor? Where are the real savings? Is whitebox really for me? One of the answers to this puzzle comes not from the savings in whitebox purchases, but the capability inherent in rapid deployment.

Ten Thousand Spoons

When users are looking at the acquisition cost advantages of buying whitebox switches, they typically don’t see what they would like to see. Ridiculously cheap hardware isn’t the norm. Instead, you see a switch that can be bought for a decent discount. That does take into account that most vendors will give substantial one-time discounts to customers to entice them into more lucrative options like advanced support or professional services.

The purchasing advantage of whitebox doesn’t just come from reduced costs. It comes from additional unit purchases. Purchasing budgets don’t typically spell out that you are allowed to buy ten switches and three firewalls. They more often state that you are allowed to spend a certain dollar amount on devices of a specific type. Savvy shoppers will find deals or discounts to get more for their dollar. The real world of purchasing budgets means that every dollar will be spent, lest the available dollars get reduced next year.

With whitebox, that purchasing power translates into additional units for the same budget amount. If I could buy three switches from Vendor X or five switches from Whitebox Vendor Y, ceteris paribus I would buy the whitebox switches. If the purpose of the purchase was to connect 144 ports, then three 48-port switches cover the need, and I have two extra switches lying around. Which does seem a bit wasteful.

However, the option of having spares on the shelf becomes very appealing. Networks are supposed to be built in a way that minimizes or eliminates downtime due to failure. The network must continue to run if a switch dies. But what happens to the dead switch? In most cases today, the switch must be sent in for warranty replacement. Service contracts with large networking vendors give you the option of 4-hour, overnight, or next-business-day replacements. These vendors will even cross-ship you the part. But you are still down a switch in the meantime. If the other half of the redundant pair goes down, you are going to be dead in the water.

With an extra whitebox switch on the shelf you can have a ready replacement. Just slip it into place and let your orchestration and provisioning software do the rest. While the replacement is shipping, you still have redundancy. It also saves you from needing to buy a hugely expensive (and wildly profitable) advanced support contract.

All You Need Is A Knife

Suppose for a moment that we do have these switches sitting around on a shelf doing nothing but waiting for the inevitable failure in the network. From a cost perspective, it’s neutral. I spent the same budget either way, so an unutilized switch is costing me nothing. However, what if I could do something with that switch?

The real advantage of whitebox in this scenario comes from the ability to use non-switching OSes on the hardware. Think for a moment about something like a network packet monitor. In the past, we’ve needed to download specialized software and slip a probing device into the network just for the purposes of packet collection. What if that could be done by a switch? What if the same hardware that is forwarding packets through the network could also be used to monitor them as well?

Imagine creating an operating system that installs through something like ONIE for the sole purpose of being a network tap. Now, instead of specialized hardware for that job, you only need to grab one of the switches you have lying on the shelf and repurpose it as a sensor. And when it's served that purpose, you put it back on the shelf and wait until there is a failure before pushing it into production as a replacement. With Chef or Puppet, you could even have the switch boot into a sensor identity for a few days and then provision it back into a data forwarding switch afterwards. No need for messy, complicated software images or clever hacks.
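To show how small that decision logic really is, here is a hypothetical sketch of the role-selection step such an orchestration run might perform when a shelf switch boots. Every hostname, role name, and image path in it is invented for illustration; it isn't Chef or Puppet code, just the idea expressed in plain Python.

```python
"""Hypothetical sketch of the role-selection step an orchestration tool could
run when a spare whitebox switch boots. Hostnames, roles, and image paths are
made up for illustration; this is the decision logic, not real Chef/Puppet code."""

from datetime import date

# Made-up inventory: what each spare box should be doing, and for how long.
INVENTORY = {
    "spare-sw-01": {"role": "packet-sensor", "until": date(2015, 3, 15)},
    "spare-sw-02": {"role": "shelf-spare", "until": None},
}

# Made-up mapping of roles to the image the box should boot (e.g. via ONIE).
ROLE_IMAGES = {
    "packet-sensor": "images/sensor-nos.bin",       # temporary tap/monitor duty
    "leaf-switch":   "images/forwarding-nos.bin",   # normal data forwarding role
    "shelf-spare":   None,                          # stay on the shelf as a spare
}


def select_role(hostname: str, today: date) -> str:
    """Honor a temporary role assignment until it expires, then revert."""
    entry = INVENTORY.get(hostname, {"role": "shelf-spare", "until": None})
    expires = entry["until"]
    if expires is not None and today > expires:
        return "leaf-switch"  # temporary duty is over: go back to forwarding packets
    return entry["role"]


if __name__ == "__main__":
    role = select_role("spare-sw-01", date.today())
    print(role, "->", ROLE_IMAGES[role])
```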

Now, extend those ideas beyond sensors. Think about generic hardware that could be repurposed for any function. A switch could boot up as an inline firewall. That firewall could be repurposed into a load balancer for the end of the quarter. It could then become a passive IDS during an attack. All without moving. The only limitation is the imagination of the people writing code for the device. It may never top the performance of a device built purely for a given function, but the flexibility of having a device that can serve multiple functions without massive reconfiguration would win out in the long run for many applications. Flexibility matters more than overwhelming performance.


Tom’s Take

Whitebox is still finding its purpose in the enterprise. It's been embraced by the webscale players, but the value to the enterprise isn't found in massive capabilities like that. Instead, the extra purchasing power of getting more units for the same dollar amount leads to reduced support contract costs and even new functionality from the hardware lying around, which can be made to do so many other things. Who could have imagined that a simple switch could be made to do the job of so many purpose-built devices in the data center? Isn't it ironic, don't you think?

 

Making Your Wireless Guest Friendly


During the recent Virtualization Field Day 4, I was located at a vendor building and jumped on their guest wireless network. There are a few things that I need to get accomplished before the magic happens at a Tech Field Day event, so I’m always on the guest network quickly. It’s only after I take care of a few website related items that I settle down into a routine of catching up on email and other items. That’s when I discovered that this particular location blocked access to IMAP on their guest network. My mail client stalled out when trying to fetch messages and clear my outbox. I could log into Gmail just fine and send and receive while I was on-site. But my workflow depends on my mail client. That made me think about guest WiFi and usability.

Be Our (Limited) Guest

Guest WiFi is a huge deal for visitors to an office. We live in a society where ever-present connectivity is necessary. Email notifications, social media updates, and the ability to look up necessary information instantly have pervaded our lives. For those of us fortunate enough to still have an unlimited cellular data plan, our connectivity craving can be satisfied by good 3G/LTE coverage. But for those devices lacking a cellular modem, or the bandwidth to exercise it, we're forced to rely on good old 802.11a/b/g/n/ac to get online.

Most companies have moved toward a model of providing guest connectivity for visitors. This is a far cry from years ago, when snaking an Ethernet cable across the conference room was necessary. I can still remember the "best practice" of disabling the passthrough port on a conference room IP phone to prevent people from piggybacking onto it. Our formerly restrictive connectivity model has improved drastically. But while we can get connected, there are still some things that we limit through software.

Guest network restrictions are nothing new. Many guest networks block malicious traffic or traffic generally deemed "unwanted" in a corporate environment, such as BitTorrent or other peer-to-peer file sharing protocols. Other companies take this a step further and start filtering out bandwidth consumers and the sites associated with them, such as streaming Internet radio and streaming video sites like YouTube and Vimeo. It's not crucial to work (unless you need your cat videos), and most people just accept it and move on.

The third category happens, for the most part, at large companies or institutions. Protocols that might provide covert communications channels are blocked. IMAP is a good example. The popular thought is that by blocking access to mail clients, guests cannot exfiltrate data through that communications channel. Forcing users onto webmail gives the organization an extra line of defense through web filters and data loss prevention (DLP) devices that constantly look for data leakage. Other protocols in this category are IPsec and SSL VPN connections. In these restrictive environments, any VPN use is generally blocked and discouraged.

Overstaying Your Welcome

Should companies police guest wireless networks for things like mail and VPN clients? That depends on what you think a guest wireless network is for. For people like me, guest wireless is critical to the operation of my business. I need access to websites and email and occasionally things like SSH. I can only accomplish my job if I have connectivity. My preference would be a guest network as open as possible to my needs.

Companies, on the other hand, generally look at guest wireless connectivity as a convenience provided to guests. It’s more like the phone in the lobby by the reception desk. In most cases, that phone has very restricted dialing options. In some companies, it can only dial internal extensions or a central switchboard. In others, it has some capability to dial local numbers. Almost no one gives that phone the ability to dial long distance or international calls. To the company, giving wireless connectivity to guests serves the purpose of giving them web browsing access. Anything more is unnecessary, right?

It's a classic standoff. How do we give users the connectivity they need while protecting the network? Some companies create a completely isolated guest network with no access to the inside and route all guest traffic through it. That's almost a requirement to avoid unnecessary regulatory issues. Others use a separate WAN connection so the guest network can't cause congestion on the company's primary connection.

The answers to this conundrum aren't going to come easily. But regardless, users need to know what works and what doesn't, and companies need to be protected against guest users doing things they aren't supposed to. How can we meet in the middle?

A Heaping Helping of Our Hospitality

The answer lies in the hospitality industry, specifically in those organizations that offer tiered access for their customers. Most hotels will give you the option of a free or reduced-rate connection that is rate limited or has blocks in place. You can upgrade to the premium tier and unlock a higher speed cap and access to things like VPN connections or even public addresses for things like video conferencing. It's a two-tier plan that works well for the users.

Corporate wireless should follow the same plan. Users can be notified that their basic connectivity includes web browsing and other essential items, perhaps at a rate limit to protect the corporate network. For those users (like me) that need faster network speeds or uncommon protocols like IMAP, you could set up a "premium" guest network that has more restrictive terms of use and perhaps gathers more information about the user before allowing them onto the network. This is also a good solution for vendors or contractors that need access to more of the network than a simple guest solution can afford them. They can use the premium tier with more restrictions and the knowledge that they will be contacted in the event of data exfiltration. You could even monitor this connection more stringently to ensure nothing malicious is going on.
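As a sketch of how simple that split is to express, here is one illustrative way to model the two tiers as policy data. The tier names, the port choices, and the rate limits are examples only, not a recommendation for any particular product.

```python
"""Illustrative two-tier guest policy. Tier names, port lists, and rate limits
are examples; the point is that the basic/premium split is easy to express."""

GUEST_TIERS = {
    # Basic: DNS plus web browsing, rate limited.
    "basic":   {"tcp_ports": {53, 80, 443},
                "rate_limit_mbps": 5},
    # Premium: adds mail clients (IMAP/IMAPS, submission) and IPsec VPN,
    # in exchange for more registration up front and closer monitoring.
    "premium": {"tcp_ports": {53, 80, 443, 143, 587, 993},
                "udp_ports": {53, 500, 4500},   # DNS plus IPsec IKE and NAT-T
                "rate_limit_mbps": 50},
}


def allowed(tier: str, proto: str, dst_port: int) -> bool:
    """Return True if the guest tier permits traffic to this protocol and port."""
    ports = GUEST_TIERS[tier].get(f"{proto}_ports", set())
    return dst_port in ports


if __name__ == "__main__":
    print(allowed("basic", "tcp", 993))    # IMAPS on the basic tier -> False
    print(allowed("premium", "tcp", 993))  # IMAPS on the premium tier -> True
```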


Tom’s Take

Guest wireless access is always going to be an exercise in balance. You need to give your guests all the access you can without giving them the keys to the kingdom. Companies providing guest access need to adopt a tiered model like that of the hospitality industry to provide the connectivity power users need while still offering something that works for the majority of visitors. At the very least, companies need to tell users on the splash page or captive portal which services are disabled. That's the best way to let your guests know what's in store for them.