Are We Living In A Culture Of Beta?

Cisco released a new wireless LAN controller last week, the 5760.  Blake and Sam have a great discussion about it over at the NSA Show.  It’s the next generation of connection speeds and AP support.  It also runs a new version of the WLAN controller code that unifies development with the IOS code team.  That last point generated a bit of conversation between wireless rock stars Scott Stapleton (@scottpstapleton) and George Stefanick (@wirelesssguru) earlier this week.  In particular, a couple of tweets stood out to me:

Overall, the number of features missing from this new IOS-style code release is bordering on the point of concern.  I understand that porting code to a new development base is never easy.  Being a fan of video games, I’ve had to endure the pain of watching features be removed because they needed to be recoded the “right way” in a new code base instead of being hacked together.  Cisco isn’t the only culprit in this whole mess.  Software quality has been going downhill for quite a while now.

Our culture is living in a perpetual state of beta testing.  There’s a lot of blame to go around on this.  We as consumers and users want cutting-edge technology.  We’re willing to sacrifice things like stability or usability for a little peek at future awesomeness.  Companies are rushing to be first to market with new technologies.  Being the first at anything is an edge when it comes to marketing and, now, patent litigation.  Producers just want to ship stuff.  They don’t really care if it’s finished or not.

Stability can be patched.  Bugs can be coded out in the next release.  What’s important is that we hit our release date.  Who cares if it’s an unrealistic, arbitrary day on the calendar picked by the marketing launch team?  We have to be ready; otherwise, Vendor B will have their widget out and our investors will get mad and sell off the stock!  The users will all leave us for the Next Big Thing and we’ll go out of business!!!

Okay, maybe not every conversation goes like that, but you can see the reasoning behind it.

Google is probably the worst offender of the bunch here.  How long was GMail in beta?  As it turns out…five years.  I think they probably worked out most of the bugs of getting electronic communications from one location to another after the first nine months or so.   Why keep it in beta for so long?  I think it was a combination of laziness and legality.  Google didn’t really want to support GMail beyond cursory forum discussion or basic troubleshooting steps.  By keeping it “beta” for so long, they could always fall back to the excuse that it wasn’t quite finished so it wasn’t supposed to be in production.  That also protected them from the early adopters that moved their entire enterprise mail system into GMail.  If you lost messages it wasn’t a big deal to Google.  After all, it’s still in beta, right?  Google’s reasoning for finally dropping the beta tag after five years was that it didn’t fit the enterprise model that Google was going after.  Turns out that the risk analysts really didn’t like having all their critical communication infrastructure running through a project with a “beta” tag on it, even if GMail had ceased being beta years before.

Software companies thrive off of getting code into consumers’ hands because we’ve effectively become an unpaid quality assurance (QA) platform for them.  Apple beta code for iOS gets leaked onto the web hours after it’s posted to the developer site.  There’s even a cottage industry of sites that will upload your device’s UDID to a developer account so you can use the beta code.  You actually pay money to someone for the right to use code that will be released for free in a few months’ time.  In essence, you are paying money for a free product in order to find out how broken it is.  Silly, isn’t it?  Think about Microsoft.  They’ve started offering free Developer Preview versions of new Windows releases to the public.  In previous iterations, the hardy beta testers of yore would get a free license for the new version as a way of saying thanks for enduring a long string of incremental builds and constant reloading of the OS only to hit a work-stopping bug that erased your critical data.  Nowadays, MS releases those buggy builds with a new name and people happily download them and use them on their hardware with no promise of any compensation.  Who cares if it breaks things?  People will complain about it and it will get fixed.  No fuss, no muss.  How many times have you heard someone say “Don’t install a new version of Windows until the first service pack comes out”?  It’s become such a huge deal that MS never even released a second service pack for Windows 7, just an update rollup.  Even Cisco’s flagship NX-OS on the Nexus 7000 series switches has been accused of being a beta in progress by bloggers such as Greg Ferro (@etherealmind) in this Network Computing article (comment replies).  If the core of our data center is running on buggy, unreliable code, what hope have we for the desktop OS or mobile platform?

That’s not to say that every company rushes products out the door.  Two of the most stalwart defenders of full, proper releases are Blizzard and Valve.  Blizzard is notorious for letting release dates slip in order to ensure code quality.  Diablo 2 was delayed several times between the original projected date of December 1998 and its eventual release in 2000 and went on to become one of the best-selling computer games of all time.  Missing an unrealistic milestone didn’t hurt them one bit.  Valve has one of the most famous release strategies in recent memory.  Every time someone asks founder Gabe Newell when Valve will release their next big title, his response is almost always the same – “When it’s done.”  Their apparent hesitance to ship unfinished software hasn’t run them out of business yet.  By most accounts, they are one of the most respected and successful software companies out there.  Just goes to show that you don’t have to be a slave to a release date to make it big.

Tom’s Take

The culture of beta is something I’m all too familiar with.  My iDevices run beta code most of the time.  My laptop runs developer preview software quite often.  I’m always clamoring for the newest nightly build or engineering special.  I’ve mellowed a bit over the years as my needs have gone from bleeding edge functionality to rock solid stability.  I still jump the gun from time to time and break things in the name of being the first kid on my block to play with something new.  However, I often find that when the final stable release comes out to much fanfare in the press, I’m disappointed.  After all, I’ve already been using this stuff for months.  All you did was make it stable?  Therein lies the rub in the whole process.  I’ve survived months of buggy builds, bad battery life, and driver incompatibility only to see the software finally pushed out the door and hear my mom or my wife complain that it changed the fonts on an application or the maps look funny now.  I want to scream and shout and complain that my pain was more than you could imagine.  That’s when I usually realize what’s really going on.  I’m an unpaid employee fixing problems that should never even be in the build in the first place.  I’ve joked before about software release names, but it’s sadly more true than funny.  We spend too much time troubleshooting prerelease software.  Sometimes the trouble is of our own doing.  Other times it’s because the company has outsourced or fired their whole QA department.  In the end, my productivity is wasted fixing problems I should never see.  All because our culture now seems to care more about how shiny something is and less about how well it works.

BYOD vs MDM – Who Pays The Bill?

There’s a lot of talk going around right now about the trend of people bringing in their own laptops, tablets, and other devices to access data and do their jobs.  While most of you (including me) call this Bring Your Own Device (BYoD), I’ve been hearing a lot of talk recently about a different aspect of controlling mobile devices.  Many of my customers have been asking me about Mobile Device Management (MDM).  MDM is getting mixed into a lot of conversations about controlling the BYoD explosion.

Mobile Device Management (MDM) refers to the process of controlling the capabilities of a device via a centralized control point, whether it be in the cloud or on premises.  MDM can restrict functions of a device, such as the camera or the ability to install applications.  It can also restrict which data can be downloaded and saved onto a device.  MDM also allows device managers to remotely lock the device in the event that it is lost or even remotely wipe the device should recovery be impossible.  Vendors are now pushing MDM as a big component of their mobility offerings.  Every week, it seems like some new vendor is pushing their MDM offering, whether it be a managed service software company, a wireless access point vendor, or even a dedicated MDM provider.  MDM is being pushed as the solution to all your mobility pain points.  There’s one issue, though.

MDM is a very intrusive solution for mobile devices.  A good analogy might be the rules you have for your kids at home.  There are many things they are and aren’t allowed to do.  If they break the rules, there are consequences and possible punishments.  Your kids have to follow your rules if they live under your roof.  Such is the way for MDM as well.  Most MDM vendors that I’ve spoken to in the last three months take varying degrees of intrusion into the devices.  One Windows Mobile provider started their deployment process with a total device wipe before loading an approved image onto the mobile device.  Others require you to trust specific certificates or enroll in special services.  If you run Apple’s iOS and designate the device as a managed device in iOS 6 to get access to certain new features like the global proxy setting, you’ll end up wiping the device before you can manage it.  Services like MobileIron can even give administrators the ability to read any information on the device, regardless of whether it’s personal or not.

That level of integration into a device is just too much for many people bringing their personal devices into a work environment.  They just want to be able to check their email from their phone.  They don’t want a sneaky admin reading their text messages or wiping their entire phone because of a misconfigured policy setting or a mistakenly reported device loss.  Could you imagine losing all your pictures or your bank account info because Exchange had a hiccup?  And what about MDM policies pushed down to disable your camera due to company policy, or to disable your ability to make in-app purchases from your app repository of choice?  How about setting a global proxy server so you are restricted from browsing questionable material from the comfort of your own home?  If you’re like me, any of those choices make me cringe a little.

That’s why BYoD policies are important.  They function more like having your neighbor’s children over at your house.  While you may have rules for your own children, the neighbor’s kids are just visitors.  You can’t really punish them like you’d punish your own kids.  Instead, you make what rules you can to prevent them from doing things they aren’t supposed to do.  In many cases, you can send the neighbor’s kids to a room with your own kids to limit the damage they can cause.  This is very much in line with the way we treat devices with BYoD settings.  We try to authenticate users to ensure they are supposed to be accessing data on our network.  We place data behind access lists that try to determine location or device type.  We use the network as the tool to limit access to data as opposed to intruding on the device.

Both BYoD and MDM are needed in a corporate environment to some degree. The key to figuring out which needs to be applied where can be boiled down to one easy question:

Who paid for your device?

If the user bought their device, you need to be exploring BYoD policies as your primary method of securing the network and enabling access.  Unless you have a very clearly defined policy in place for device access, you can’t just assume you have the right to disable half a user’s device functions and then wipe it whenever you feel the need.  Instead, you need to focus your efforts on setting up rules that they should follow and containing their access to your data with access lists and user authentication.  On the other hand, if the company paid for your tablet, then MDM is the likely solution in mind.  Since the device belongs to the corporation, they are well within their rights to do what they would like with it.  Use it just like you would a corporate laptop or an issued Blackberry instead of a personal iPhone.  Don’t be shocked if it gets wiped or random features get turned off due to company policy.
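
To make the “access lists and user authentication” part concrete, here’s a minimal sketch of the sort of IOS ACL I mean.  The addresses and services are hypothetical; the point is to fence personal devices away from internal resources on the network side rather than reaching onto the device itself:

ip access-list extended BYOD-CONTAIN
 remark Personal devices may resolve DNS and reach webmail
 permit udp 192.168.50.0 0.0.0.255 any eq 53
 permit tcp 192.168.50.0 0.0.0.255 host 10.1.1.25 eq 443
 remark ...but nothing else in the internal server ranges
 deny   ip 192.168.50.0 0.0.0.255 10.1.0.0 0.0.255.255
 permit ip 192.168.50.0 0.0.0.255 any
!
interface Vlan50
 description BYoD user VLAN
 ip access-group BYOD-CONTAIN in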

Tom’s Take

When it’s time to decide how best to manage your devices, make sure to pull out all those old credit card receipts.  If you want to enable MDM on all your corporate phones and tablets, be sure to check out http://enterpriseios.com/ for a list of all the features supported by a given MDM provider for both iOS and other OSes like Android or Blackberry.  If you didn’t get the bill for that tablet, then you probably want to get in touch with your wireless or network vendor to start exploring the options available for things like 802.1X authentication or captive portal access.  In particular, I like some of the solutions available from Aerohive and Aruba’s ClearPass.  You’re going to want both MDM and BYoD policies in your environment to be sure your devices are as useful as possible while still being safe and protecting your network.  Just remember to back it all up with a very clear, detailed written use policy to ensure there aren’t any legal ramifications down the road from a wiped device or a lost phone causing a network penetration.  That’s one bill you can do without.
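
If you’re starting down the 802.1X road, the skeleton of the switch-side setup on Cisco IOS looks roughly like this.  A sketch only: the RADIUS server address and key are placeholders, and the exact authentication command family varies by platform and IOS version:

aaa new-model
aaa authentication dot1x default group radius
radius-server host 10.1.1.50 key MySharedSecret
dot1x system-auth-control
!
interface GigabitEthernet0/10
 switchport mode access
 authentication port-control auto
 dot1x pae authenticator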

Cisco To Buy Meraki?

If you’re in the tech industry, it never seems like there’s any downtime. That was the case today all thanks to my friend Greg Ferro (@etherealmind). I was having breakfast when this suddenly scrolled up on my Twitter feed:

After I finished spitting out my coffee, I started searching for confirmation or indication to the contrary. Stephen Foskett (@SFoskett) provided it a few minutes later by finding the following link:

http://blogs.cisco.com/news/cisco-announces-intent-to-acquire-meraki/

EDIT: As noted in the comments below, Brandon Bennett (@brandonrbennett) found a copy of the page in Google’s Webcache. The company in the linked page says “Madras”, but the rest of the info is all about Meraki. I’m thinking Madras is just a placeholder.

For the moment, I’m going to assume that this is a legitimate link that will really point to something soon.  I doubt Cisco makes a habit of creating “Cisco announces intent to acquire X Company” pages just in case, like this famous Dana Carvey SNL video.  In that case, the biggest question now becomes…

Why Meraki?

I’ll admit, I was shaking my head for a bit on this one.  Cisco doesn’t buy companies for hardware technology.  They’ve got R&D labs that can replicate pretty much anything under the sun given enough time.  Cisco instead usually purchases for innovative software platforms.  They originally bought Airespace for the controller architecture and management software that eventually became WCS.  The silicon isn’t as important, since Cisco makes their own.

Meraki doesn’t really make anything innovative on the hardware front.  Their APs use reference designs.  Their switch and firewall offerings are also pretty standard fare with basic 10/100/1000 connectivity and are likely based on Broadcom reference designs as well.  What exactly draws in a large buyer like Cisco?  What is unique among all those products?

Cisco’s Got Its Head In The Clouds

The single thing that is common across the whole Meraki line is the software.  I talked a bit about it in my Wireless Field Day 2 post on Meraki.  Their single management platform allows them to manage switches, firewalls, and wireless in one application.  You can see all the critical information that your switches are pumping out and program them accordingly.  The demo I saw at WFD2 isolated a hungry user downloading too much data, using a combination of user identification and an ACL pushed down to that user, limiting their bandwidth for certain kinds of traffic without totally locking that person out of the network.  That’s the kind of thing that Cisco is looking for.

With the announcement of onePK, Cisco really wants to show off what they can do when they start plugging APIs into their switches and routers.  But simply opening an API doesn’t do anything.  You’ve got to have some kind of software program to collect data from the API and then push instructions back down to it to accomplish a goal.  And if you can move that control point out to the cloud, you’ve got a recipe for the marketing people to salivate over.  Until now, I thought that would be some kind of application born out of the Cisco Prime family.

If the Meraki acquisition comes to fruition, Meraki’s platform will likely be rebranded as a member of the Cisco Prime family and used for this purpose.  It will likely be positioned initially toward SMB and medium enterprise customers.  In fact, I’ve got three or four use cases for this management software on Cisco hardware today with my customers.  It would do a great job of replacing some of the terrible management platforms I’ve seen in the past, like Cisco Configuration Assistant (CCA) and the unmentioned product Cisco was pitching as a hands-off way to manage sub-50-node networks.  By allowing the Meraki management software to capture data from Cisco devices, you can have a proven portal to manage your switches and APs.  Add in the ability to manage other SMB devices, such as a UC 500 or a small 800-series router, and you’ve got a smooth package you can sell to your customers for a yearly fee.  Ah ha!  Recurring, cloud-based income!  That’s just icing on the cake.

EDIT: 6:48 CST – Confirmed by a Cisco press release, as well as by TechCrunch and CRN.


Tom’s Take

Ruckus just had their IPO.  It was time for a shakeup in the upstart wireless market.  Meraki was the target that most people had in mind.  I’d been asked by several traditional networking vendors recently who I thought was going to be the next wireless company to be acquired, and every time my money landed on Meraki.  They have a good software platform that helps them manage inexpensive devices.  All their engineering goes into the software.  By moving beyond pure wireless products, they’ve raised their profile with their competitors.  I never seriously expected Meraki to dethrone Cisco or Brocade with their switch offerings.  Instead, I saw the Meraki switches and firewalls as an add-on offering to complement their wireless deployments.  You could have a whole small office running Meraki wireless, wired, and security deployments.  Getting the ability to manage all those devices easily from one web-based application must have appealed to someone at Cisco M&A.  I remember from my last visit to the Meraki offices that their name is an untranslatable word from Greek that means “to do something with intense passion.”  It also can mean “to have a place at the table.”  It does appear that Meraki found a place at a very big table indeed.

Death to 2.4GHz!

This week brought the annual announcement of the Apple iPhone refresh.  There were a lot of interesting technologies discussed around the newest entry from the Cupertino Fruit and Mobile Company, but one of the most exciting came in the form of the wireless chip.  The original iPhone and the iPhone 3G were 802.11b/g devices only.  Starting with the iPhone 3GS, Apple upgraded that chip to 802.11b/g/n.  With the announcement of the new iPhone 5, Apple has finally provided an 802.11a/n radio as well, matching the 5GHz capability of the iPad.  This means that all of Apple’s current devices can support 5GHz wireless access points.  Along with the latest Android devices that offer similar support, I think the time has come to make a radical, yet needed, design decision in our wireless networks.

It’s time to abandon 2.4GHz and concentrate on 5GHz.

Matthew Gast from Aerohive had a great blog post along these same lines.  As Matthew explains, the 2.4GHz spectrum is awash with interference sources from every angle.  Microwave ovens, cordless telephones, and wireless video cameras are only part of the problem.  There are only three non-overlapping channels in 2.4GHz.  That means you’ve got a 33% chance of interfering with surrounding devices.  If you’ve got one of those fancy consumer devices that can do channel aggregation at 2.4GHz, the available channels decrease even further.  Add in the fact that most people now carry devices capable of acting as access points, such as MiFi hotspots or the built-in hotspot features in tablets and smartphones, and you can see how the 2.4GHz spectrum is a crowded place indeed.  On the other hand, 5GHz has twenty-three non-overlapping channels available.  That’s more than enough to satisfy the denser AP deployments required to provide similar coverage patterns while at the same time providing for high-speed throughput with channel aggregation.

There are a number of devices that are 2.4GHz only and will continue to be that way.  Low-power devices are one of the biggest categories, as Matthew pointed out.  2.4GHz radios just draw less power.  Older legacy devices are also not going to be upgraded anytime soon.  That means that we can’t just go around shutting off our 2.4GHz SSIDs and leaving all those devices out in the cold.  What it does mean is that we need to start focusing on the future of wireless.  I’m going to treat my 2.4GHz SSIDs just like a guest access VLAN.  It’s there, but it’s not going to get much support.  I’m going to enable band steering to push the 5GHz-capable clients to the better access method.  For everyone else that can only get on at 2.4GHz, you get what you get.  With more room to grow, I can enable wide channels and let my clients pull all the data they can stand from 5GHz.  When the rest of the world gets ready to deploy 802.11ac devices and APs, I’ll already have experience designing for that band.  My 2.4GHz network will live on much the same way my 802.11b clients lived on and my Novell clients persisted.  They’ll keep churning until they are forced to move, either by failure or total obsolescence.
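
On the AireOS-based Cisco controllers I work with, band steering is exposed as “band select.”  A minimal sketch of turning it on for a WLAN (WLAN ID 1 is an assumption, and the WLAN has to be disabled while you change it):

(Cisco Controller) >config wlan disable 1
(Cisco Controller) >config wlan band-select allow enable 1
(Cisco Controller) >config wlan enable 1
(Cisco Controller) >show band-select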


Tom’s Take

Yes, it’s a hard choice to make right this moment to say that I’m leaving 2.4GHz to the wolves and moving to 5GHz.  It’s a lot like making the decision between ripping the band-aid off or pulling it slowly.  Either way, there will be pain.  The question becomes whether you want the pain all up front or spread out over time.  By making a conscious decision to start focusing your efforts on 5GHz, you get the pain out of the way.  Fighting for spectrum and positioning around kitchens and water pipes all fall away.  Coverage takes care of itself.  Neat new technology like 40MHz channels is simple, relatively speaking.  Let the 2.4GHz clients have their network.  I’m going to concentrate my efforts on where we’re headed, not where we’ve been.

No Bridge Too Far – A Quick Wireless Bridge Configuration

I constantly find myself configuring wireless bridges between sites.  It’s a cheaper alternative to using a fiber or copper connection, even if it is a bit problematic at times.  However, I never seem to have the right configuration, either because it was barely working in the first place or I delete it from my email before saving it.  Now, thanks to the magic of my blog, I’m putting this here as much for my edification as everyone else’s.  Feel free to use it if you’d like.

dot11 ssid BRIDGE-NATIVE
 vlan 1
 authentication open
 authentication key-management wpa
 wpa-psk ascii 0 security
!
dot11 ssid BRIDGE44
 vlan 44
 authentication open
 authentication key-management wpa
 wpa-psk ascii 0 security
!
interface Dot11Radio0
 encryption vlan 1 mode ciphers tkip
 encryption vlan 44 mode ciphers tkip
 ssid BRIDGE-NATIVE
!
interface Dot11Radio0.1
 encapsulation dot1Q 1 native
 no ip route-cache
 bridge-group 1
 bridge-group 1 spanning-disabled
!
interface Dot11Radio0.44
 encapsulation dot1Q 44
 no ip route-cache
 bridge-group 44
 bridge-group 44 spanning-disabled
!
interface FastEthernet0.1
 encapsulation dot1Q 1 native
 no ip route-cache
 bridge-group 1
 bridge-group 1 spanning-disabled
!
interface FastEthernet0.44
 encapsulation dot1Q 44
 no ip route-cache
 bridge-group 44
 bridge-group 44 spanning-disabled
The configuration above allows you to pass traffic on multiple VLANs in case you want to put a phone or other device on the other side of the link.  Just make sure to turn the switch port connected to the bridge into a trunk so all the information will pass correctly.  As always, if you see an issue with my configuration or you have a cleaner, better way of doing things, don’t hesitate to leave a comment.  I’m always open to a better way of getting things done.
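
For reference, the switch side would look something like this on a Catalyst, assuming the two VLANs above and a FastEthernet port (adjust interface and VLAN numbers to match your environment):

interface FastEthernet0/1
 description Trunk to wireless bridge
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk native vlan 1
 switchport trunk allowed vlan 1,44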

The Rush To 802.11ac

Things seem to be moving quickly in the wireless world.  In January, many presenters at Wireless Field Day 2 were discussing the future of client-side wireless.  It all revolves around 802.11ac, the specification that leads to gigabit-speed wireless connections.  A 5 GHz-only specification, 802.11ac uses wider channels and more spatial streams to create more throughput for client devices.  Whereas a single spatial stream in 802.11n tops out at about 150 Mbit of bandwidth on a 40 MHz channel, a single spatial stream in 802.11ac can deliver almost three times that at 433 Mbits across an 80 MHz channel.  The numbers just keep climbing from there when you include the ability to use 160 MHz wide channels and up to 8 spatial streams.  No wonder people are salivating over the idea of this new generation of wireless.  So much so that vendors have already started coming out with hardware to support 802.11ac even before the specification is fully ratified.  After Broadcom announced support for 802.11ac in forthcoming chipsets, Buffalo announced an 802.11ac home router and media station.  Cisco has even gotten in on the 802.11ac game with a new module designed to extend the 3600 series AP to support 802.11ac.  The times, they are a-changing.  However, for those of you that like to live on the bleeding edge of technology, there are some things to keep in mind as you race out to Best Buy to make your Wi-Fi super fast.
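
If you’re wondering where those numbers come from, here’s the back-of-the-envelope PHY math, assuming the short 400 ns guard interval and the top coding rate in each case:

$$\text{data rate} = \frac{N_{\text{data subcarriers}} \times \text{bits per subcarrier} \times \text{coding rate}}{\text{symbol time}}$$

For 802.11n at 40 MHz with 64-QAM, that’s $(108 \times 6 \times 5/6) / 3.6\,\mu s = 150$ Mbit/s per spatial stream.  For 802.11ac at 80 MHz with 256-QAM, it’s $(234 \times 8 \times 5/6) / 3.6\,\mu s \approx 433$ Mbit/s.  Multiply by the number of spatial streams and you get the headline rates on the box.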

1.  There are no 802.11ac client devices today.  All the horsepower in the world on the infrastructure side of things won’t matter one whit if the clients connecting to the devices don’t have support to interface with these blazing fast speeds.  It wasn’t that long ago at Wireless Field Day 1 that Cisco was telling us the 3×3:3 support in an AP was a bunch of hooey because most laptops on the market didn’t have support for three spatial streams.  Of course, a year later, almost all consumer laptops have the requisite support.  Coincidentally, now so do the newest Cisco APs.  And that’s just the laptop side of things.  Look around right now at the majority of devices that are using the wireless in your house or at the office or even at Starbucks.  Odds are good that those devices are phones or tablets of some kind.  Those present huge issues for 802.11ac adoption.  Take the iPhone and iPad as examples.  The iPhone is a 2.4 GHz-only device.  With 802.11ac being a 5GHz-only protocol, your iPhone won’t even be able to talk to your 802.11ac router.  You’re going to need to keep 2.4 GHz connectivity.  Odds are good that Apple and other phone manufacturers will eventually build 5 GHz radio support into their hardware, but that’s unlikely to happen until there is a way to shrink the size of the antennas and provide better battery life for the throughput.  Even the iPad, which supports 5 GHz radios, had to make some battery life concessions.  It’s a 1×1:1 device, so even on 802.11n it can never use more than 150 Mbits of bandwidth.  It can’t even support the use of 40 MHz channels on 802.11n.  If these devices don’t see hardware changes in the future, it’s going to be difficult for 802.11ac to see much adoption when the majority of “post PC” BYOD devices won’t come close to supporting the new standard.

2.  802.11ac throughput is nice.  In theory.  When you look at the high-end specs for 802.11ac, the amount of data that can be transferred is downright sexy.  However, the same can be said for 802.11n.  There is support for up to four spatial streams in 802.11n, although I’ve never seen APs that support more than three.  Hitting the top end of 802.11n capabilities requires you to use 40 MHz channels, reducing the number of available channels by 50%.  That’s not a huge concern in the 5 GHz range today due to a larger number of non-overlapping channels.  However, 802.11ac is going to create bigger issues.  The default channel bandwidth in 802.11ac is 80 MHz, which cuts the number of available channels in half again, leaving us a quarter of what we started with.  If you want to really cook on 802.11ac at those sexy data rates, you’re going to need to enable 160 MHz-wide channels.  Now we’re looking at something like 3 or 4 non-overlapping 160MHz channels instead of the 23 that are available by default in the 5GHz range.  Sounds pretty crowded to me.  You also need to realize that 5 GHz doesn’t penetrate walls as well as 2.4 GHz.  A typical 5 GHz deployment today has to have APs clustered a lot closer together to provide the same coverage as a 2.4 GHz setup.  Add in the fact that those wanting to pump out the data are going to want to be as close to the AP as possible to get the best signal and throughput, and you’ve got a real interesting scenario unfolding.  There’s a real possibility of a logjam of frequencies in a given area.  A lot of this can be sorted out by doing proper site surveys and planning.  Hope you’ve been doing them all along, because you are definitely going to have to do them before you order your first 802.11ac AP.

3.  802.11ac isn’t a final standard yet.  All that wonderful gigabit wireless goodness is a nice idea, but right now it’s not much more than just that – an idea.  We’re still technically working from the 0.1 draft spec approved in January of 2011.  The finalization of the draft specification isn’t expected to happen until sometime at the end of 2012, and the 802.11 Working Group isn’t expected to give their approval until 2013.  Even after that, it’s going to take quite a while for the devices to really permeate the market.  The 802.11ac devices you can buy today might not even support the final standard.  This is exactly what happened before with 802.11n.  I can remember going into my local office supply company and seeing those new fast Wi-Fi devices based on the 802.11n Draft 1 spec.  The device in question was from Belkin.  Of course, a lot changed from Draft 1 to Draft 2, so those Belkin Draft 1 devices only really work with the Belkin cards that were sold to go with the routers.  When the Draft 2.0 and finalized 802.11n devices came out later, you basically had to repurchase everything to make it work correctly.  This bit more than a couple of people that I know, and in some cases I got to remind them of that fact when I was pulling out the old Draft 1 equipment and replacing it.  I can see this same thing happening with the current crop of SOHO devices from companies like Buffalo.  There may not be a whole lot that changes between the 802.11ac 0.1 spec and the final draft, but if there is, you are going to have to toss out your bleeding-edge gear and buy something new.  Larger enterprise vendors like Cisco are more than happy to offer you upgrade protection for your 802.11ac modules if you buy them now to be on the cutting edge.  Of course, because you are buying enterprise-grade APs, you’re going to be paying more up front for them.  Either way, being an early adopter of 802.11ac is going to cost you in the long run, either from buying upgrade-protected devices or from buying a whole new kit when the final 802.11ac specs are released.

Tom’s Take

The only two sure things in information technology are death (of products) and upgrades.  Just like the plain-jane 802.11 spec and the hopefully-departed 802.11b spec, everything eventually gets old and needs to be replaced by something newer and better.  When you think about going from the amazement, just a few years ago, of being able to use a computer without being connected to the network at a staggering 11 Mbits to the ability on the near horizon to pump out more than 1 Gbit of traffic over the airwaves, it is almost mind boggling.  However, with any new standard coming out, you need to be careful that you don’t jump on it at the wrong time.  Everyone wants to be the first person on the block to get something new.  Whether it be the first generation iPhone or the newest Intel octo-core chips, there are always going to be those people that want to be on the upward slope of the early adopter curve.  On the other hand, there are people like me that want to take a little extra time to be sure that what is coming out is going to work.  I’m one of those people that tends to wait a few days after a new software release before installing it to make sure the bugs are worked out.  I tend to wait to upgrade to a new version of Windows until the first service pack is on the horizon.  When it comes to 802.11ac, I’m going to wait out this first round of products.  I don’t have anything that can take advantage of all that extra speed and I likely won’t have anything in the near future.  I don’t want to go out and spend money today that I’m likely going to have to go back and spend again later.  Add in the fact that I really prefer my devices to be based on some sort of standard protocol that other things can interoperate with, and you can see that 802.11ac 0.1 is going to be a “pass” in my book.  I know that there are great things on the horizon and I can see them getting closer all the time.  It just takes a little patience to get there.  And if I can learn to have a little patience, then so can everyone else.

IPv6 Wireless Support – The Broadcast Problem

When I was at Wireless Field Day 2, my standard question to all the vendors concerned IPv6 support.  Since I’m a huge proponent of IPv6 and the Internet will be arriving at IPv6 rather soon, I wanted to know what kind of plans the various wireless companies had for their particular flavor of access devices.  Most of the answers were the same: it’s coming…soon.  The generic response of “soon” usually means that there isn’t much demand for it.  It could also mean that there are some tricky technical challenges.  My first thought was about the operating system kernels being run on these access points.  Since most APs run some flavor of BSD/Linux, kernel space can be at a premium.  Based on my own experiments trying to load DD-WRT on Linksys wireless routers, I know that the meager amount of memory on these little things can really restrict the feature sets available to network rock stars.  So it was that I went on thinking about this until I had a chance conversation with Matthew Gast (@MatthewSGast) from Aerohive.  Matthew is the chair for the IEEE 802.11 committee.  Yes, that means he’s in charge of running the ship for all the little sub-letters that drive wireless standards.  I’d say he’s somewhat familiar with wireless.  I spent some time at a party one night talking to him about the challenges of shoehorning IPv6 support into a wireless AP.  His answers were rather enlightening and may have caused one of my brain cells to explode.

Matthew started things off by telling me about wireless keys.  If you pick up Matthew’s book 802.11 Wireless Networks: The Definitive Guide, you can flip over to page 465 to read about the use of keys in wireless.  Keys are used to ensure that all the traffic flying around in the air between clients stays separated.  That’s a rather important thing when you consider how much data gets pushed around via wireless.  However, the frames that carry those keys are limited in the amount of space they have to carry key information.  So some time ago, the architects of 802.11 took a shortcut.  Rather than duplicating key information over and over again for every possible scenario, they decided to make the broadcast key for each wireless client identical.  This saved space in the packet headers and allowed the AP to send broadcasts to all clients connected to the AP.  They relied on the higher-layer mechanisms inherent in ARP and layer 3 broadcast control to prune away unnecessary traffic.  Typically, clients will not respond to a broadcast for a different subnet than the one they are attached to.  The major side effect is that clients may hear broadcasts for VLANs of which they are not a member.  For the most part, this hasn’t been a very big deal.  That is, until IPv6 came about.

Recall, if you will, that IPv6 uses multicast mechanisms to propagate neighbor discovery messages and router advertisements (RAs).  In particular, these RAs tell the IPv6-enabled clients about available routers that can be used to exit the local network.  Multicast is a purely layer 3 construct.  At layer 2 (and below), multicasts turn into broadcasts.  This is the mechanism that ensures that non-layer 3 aware devices can receive the traffic.  Now, think about the issue above.  Broadcast keys are all the same for clients no matter which VLAN they may be attached to.  Multicast RAs get converted to broadcasts at layer 2.  Starting to see a problem yet?

Let’s say that we have 3 VLANs in a site: VLAN 21, VLAN 42, and VLAN 63.  We are a member of VLAN 63, but we use the same SSID for all 3 VLANs.  If we turn on IPv6 for each of these three VLANs, we now have 3 different routers sending out the RAs that hosts use for SLAAC addressing.  If these multicast packets are converted into broadcast packets for the SSID, all three VLANs are going to see the same broadcast.  The VLAN information is inconsequential to the broadcast key on the AP.  We’re going to see the RAs for the routers in VLAN 21 and VLAN 42 on top of the one in VLAN 63.  All of these are going to get installed as valid exit points off the local network.  The end system may even assign itself a SLAAC address using a router from a different VLAN.  According to the end system, it heard about all of these networks, so they must all be valid, right?  The system doesn’t know that it won’t have a layer 2 path to them.  Worse yet, if one of those RAs has the best metric for getting off the local LAN, it’s going to be the preferred exit point.  The end system will be unable to communicate with the exit point.  Bummer.

How do we fix this problem?  Well, the current thinking revolves around suppressing the broadcasts at layer 2.  Cisco does this by default in their wireless controllers.  The WLAN controller acts as a DHCP relay and provides proxy ARP while ignoring all other broadcast traffic.  That’s great for preventing the problem from happening right now.  But what happens when the problem grows in the future and we can no longer simply ignore these multicast/broadcast packets?  Thankfully, Matthew had the answer for that as well.  In 802.11ac, the new specification for gigabit-speed wireless, they’ve overhauled all the old key mechanisms.  No longer will the broadcast key be shared among all clients on the same AP.  Here’s hoping that we can get some 802.11ac clients and APs out there and supported when the time comes to flip the big switch to IPv6.
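
For the curious, here’s roughly where that knob lives on a Cisco AireOS controller.  Broadcast forwarding is off by default, and you can verify or toggle it from the CLI (a sketch from memory; exact syntax varies a bit by release):

(Cisco Controller) >show network summary
(Cisco Controller) >config network broadcast disable
(Cisco Controller) >config network multicast global disable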


I’d like to thank Matthew Gast for his help in creating this blog post and pointing out the problems inherent in broadcast key caching.  I’d also like to thank Andrew von Nagy (@revolutionwifi) for translating Matthew’s discussion into terms a non-wireless guy like me can understand.

Ruckus – Wireless Field Day 2

Our final presenters for Wireless Field Day 2 came from Ruckus Wireless.  I had heard some interesting things about Ruckus and wanted to dig a little deeper into their technology.  We arrived at the Ruckus offices and met up with GT Hill again, fresh from his appearance at the Wireless Mobility Symposium the previous Wednesday.  We also met David Callisch, the vice president of marketing for Ruckus.  Our conference room for the presentation was a little cramped, but it was packed to the gills with Ruckus technology and snacks of all kinds (including M&Ms and Jelly Belly jellybeans).  They even had Diet Dr. Pepper!  They also live their gimmick to the fullest, as all the snacks were served in Ruckus dog bowls and there were “Beware of Dog!” signs posted copiously throughout the office.

We kicked off with a quick chat with Selina Lo, president and CEO.  She welcomed us and  gave us a little info about Ruckus.  Afterwards, David Callisch gave us the whole background of Ruckus and where their previous designs and implementations had focused.  Ruckus seems to cater mostly to the carrier spaces, especially in challenging RF environments like large cities or very dense deployments.  One of the nice side effects of this focus is that all the improvements in their technology from the carrier side filter down into the enterprise line of access points as well.  That’s a great thing for those of us that don’t necessarily play in the large deployment space but want to enjoy the fruits of those labors.

Next, GT said that he had a special treat for us.  He brought in one of the founders of Ruckus, Victor Shtrom.  I could try to do this video justice, but I would fail:

If that didn’t make your brain explode, go back and watch it again.  Victor has probably forgotten more about antenna design and waveform modulation than I’ll ever know.  His dissection of issues encountered with beamforming and signal modulation had the same effect as my conversations with Matthew Gast the night before.  Hence, I’m now running a few brain cells short due to explosion from awesome knowledge.  This is what Tech Field Day is about.  Access to the nerd knobs and the people that tweak them.  I highly recommend watching that video until you understand what makes the Ruckus AP antenna and software design so different.

After Victor’s 45 minutes of melting my brain, GT got back up to show us one of Ruckus’s cool little secrets, ChannelFly.  According to GT, ChannelFly leverages the BeamFlex technology and software algorithms to perform a channel analysis of the surrounding RF environment.  We’ve always been told as wireless professionals that in the 2.4 GHz spectrum, channels 1, 6, and 11 are the targets for non-overlapping signals.  The problem comes in the real world, when every AP out there is on those three channels.  What happens when we need to increase the AP density or retrofit APs into an existing design?  Co-channel interference becomes a real issue.  This is where the ChannelFly technology comes in.  The Ruckus AP sits in the middle of all this interference.  And it listens.  ChannelFly usually takes about 24-48 hours to really dial in to the RF environment.  Afterwards, it takes all the RF data it has compiled and sets itself to the most appropriate channel to provide the highest throughput.  It does this for all channels in 2.4 GHz, not just the magic three.  The added side benefit is that the Ruckus APs can coexist with the current AP deployment without interference.  That’s because the best channel with the highest throughput usually just happens to be the one with the least amount of interference in the RF environment.  As I put it during the presentation, “ChannelFly makes everyone happy by being selfish.”

Toward the end, we got a quick presentation on 802.11u from David Stiff and Wilson So.  David was a presenter at WFD1, albeit with a different organization.  This time, he strayed from spectrum analysis and gave us some highlights of 802.11u.  This technology is often referred to as “mobile hotspot”.  It gives users the ability to join their phones to a WLAN in public areas using carrier authentication.  Think about your iDevice when you go into Starbucks.  Thanks to the agreements that Apple has in place with Starbucks, your iDevice has free access to the Wi-Fi at any one of their locations.  When you walk in the front door, you are instantly connected.  It’s a cool way to ensure that you’re using the Wi-Fi whenever possible.  Now, with 802.11u, extend that idea to virtually any carrier device.  Think about walking into a sports arena or a bank and getting instant Wi-Fi access from your carrier.  Your phone’s SIM card authenticates you against the APs in the area and tells the carrier to offload your data plan onto the wireless network instead of the cellular network.  Do you think carriers are excited about conserving spectrum while simultaneously giving their customers high-speed data access?  I’m sure they’re falling all over themselves to get this technology.  Unlike last year, we got a live demonstration from Wilson So of 802.11u in action.  The mobile phone authenticated via encrypted SIM and joined an AP cleverly hidden in a cardboard box.  Not the flashiest demo out there, but when you think about what it takes to get the technology to the point where it not only works, but works reliably enough to demo in front of the Dragon’s Den of wireless audiences, that’s a pretty impressive demo indeed.

After our 802.11u discussion, we got a tour of the facilities from Steven Martin, vice president of engineering.  He showed us some very interesting test chambers that Ruckus uses to isolate sources of interference and provide a good reference for the antenna and software to work from.  They can also introduce interference sources in the test chambers to measure how the BeamFlex technology adapts to different environments.  Very cool stuff.

Ruckus’s Oprah Moment consisted of a Ruckus 7962 AP, a ZoneDirector management controller, and a couple of stuffed puppies. My kids especially like the big black lab stuffed pet.  My little dog, on the other hand, isn’t as fond of it.

If you’d like to see more from Ruckus, you can head over to their website at http://www.ruckuswireless.com/.  You can also follow them on Twitter as @RuckusWireless.

Tom’s Take

Ruckus is definitely the most interesting dog in the fight when it comes to RF technology.  They have a unique perspective on creating value by addressing things that other vendors don’t bother with.  They’ve got the technical talent and the rock stars to make a big splash, and their name comes up often when discussing new and innovative wireless technology.  I think that by addressing the layer 1 RF issues, they’ve carved an interesting niche away from the wireless industry as a whole.  Niches aren’t a bad thing in the least.  They can either provide you a safe shelter to weather a storm, or they can give you a nice base from which to take the industry by storm.  Only time will tell what’s in store for the big dogs at Ruckus.

Wireless Field Day 2 Disclaimer

Ruckus was a sponsor of Wireless Field Day 2.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Wireless Field Day 2.  In addition, they provided me a Ruckus 7962 AP, a ZoneDirector management controller, and a couple of stuffed puppies.  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

HP – Wireless Field Day 2

The penultimate presentation at Wireless Field Day 2 was from HP.  Their wireless unit had presented at Wireless Field Day 1 and had a 2-hour slot at WFD2.  We arrived at the soon-to-be demolished HP Executive Briefing center in Cupertino and paid our final respects to the Dirty Chai Machine:

First off, I want you to read their presentation from WFD1.  Go ahead, I’ll wait.  Back?  Good.  For starters, the wireless in the EBC wasn’t working for everyone.  Normally, I’d have just plugged in the provided 15-foot Ethernet cord, but as I was running on my new Macbook Air, I was sans Ethernet for the first time.  We finally got the Internet going by forgoing the redirect to the captive portal and just going there ourselves, so I wasn’t overly concerned.  Rob Haviland then got us started with an overview of HP’s wireless product line:

With all due respect to Rob, I think he kind of missed the mark here.  I’ve been told by many people that Rob is a very bright guy from the 3Com/H3C acquisition and did a great job getting technical at Interop.  However, I think the presentation here for HP Wireless was aimed at the CxO level and not at the wireless nerds.  As you watch the video, you’ll hear Rocky Gregory chime in just a bit into the presentation that talking to us about the importance of a wireless site survey is a bit like preaching to the choir.  We do this stuff all day, every day in our own jobs.  We not only know the importance of things like this, we evangelize it to people as well.  It reminded me a bit of the WFD1 Cisco presentation on CleanAir that Jennifer Huber had given several times to her customers.  In fact, I even asked during the presentation if these “new” access points Rob was talking about were different from the ones we saw previously.  With one exception, they weren’t.  The new AP is the 466-R, an outdoor version of the MSM466.  It’s a ruggedized AP designed to be hung almost anywhere, and it even includes a heater!  Of course, if you want the heater to work, you need to be sure to provide 802.3at power or an external power supply.  Unlike the Cisco Aironet bridges that I’m familiar with implementing, the MSM466-R uses an RJ-45 connection to hook it into the network as opposed to the coax-to-power-injector method.  I’m not entirely sure I’m comfortable running a Cat-5 cable out of my building and plugging it directly into the AP.  I’d much rather see some kind of midspan device sitting inline to provide a handoff.  That’s just me, though.  The MSM466-R also weighs about a third of what comparable outdoor APs weigh, according to Jennifer, who has put some of these in for her customers.  We also spent some time talking about advanced features like band steering your clients away from 2.4 GHz to 5 GHz and the impact that can have on latency in voice calls.  According to HP, it appears to take 200 msec for a client to be steered toward the 5 GHz radio on an AP, which can cause hiccups and delay in the voice call.  Sam Clements wondered if the values for those timers were configurable at all, but according to HP they are not.  This could be a huge impact for clients on VoIP calls on a laptop roaming across a wireless campus.  I think I’m going to have to spend a little more time digging into this.

After a 10-minute break, we jumped into the new controller that HP is offering, the MSM720 mobility controller.  This unit is marketed toward the lower end of the product line and is targeted at deployments of less than 40 APs.  In fact, 40 is the most it will hold.  There is a premium version of the MSM720 that doesn’t hold any more APs but does turn on some additional capabilities like high availability and real-time location services.  This generated a big discussion about licensing models and the additional costs customers have to absorb just to unlock significant features.  I work in a vertical where people are very price-sensitive.  But I also understand that many of the features that we use to market products to people evaporate when you start reducing the “licensed features”.  I’d rather see the most commonly requested features bundled into a single “base” license and then negotiate price points after we’ve agreed on features.  That is a much easier sell than demonstrating all the cool things a product can do, only to have to explain to the customer after the fact, “Well, there is this other license you need…”.  All companies are guilty of this kind of transgression, so I’m not just singling out HP here.  They just happened to be at the watershed moment for our outpouring of distaste over licensing.  The MSM720 is a fine product for the small to medium business that wants the centralized control capability of a controller without breaking the bank.  I’m just not sure how many of them I would end up selling in the long run.

HP’s Oprah Moment was a 2.4 GHz wireless mouse with micro receiver and a pen and paper set.

If you’d like to learn more about HP Wireless, you can check out their website at http://www.hp.com/united-states/wireless/index.html.  You can also follow along with all of their network updates on Twitter as @HP_Networking.

Tom’s Take

This may have been the hardest Tech Field Day review I’ve written.  I feel that HP missed an opportunity here to help show us what makes them so different in wireless.  We got a short overview of technologies we’re already familiar with and two new products targeted at very specific market segments.  The most technical part of our discussion was a block diagram of the AP layout.  There wasn’t any new technology from HP apart from a ruggedized AP.  No talk of Hotspot 2.0 or 802.11ac gigabit wireless.  In retrospect, after getting to hear from people like Matthew Gast and Victor Shtrom, it was a bit of a letdown.  I feel like this was a canned presentation designed to be pitched to decision makers and not technical people.  We want nerd knobs and excruciating detail.  From what I’ve heard of Rob Haviland, he can give that kind of presentation.  So, was this a case of being ill-prepared?  Or of missing the target audience?  I’m also wondering if the recent upper-level concerns inside of HP have caused distraction for the various business units.  The networking people shouldn’t have really been affected by the PSG and Autonomy dealings, but who knows at this point.  Is the Mark Hurd R&D decision finally starting to trickle in?  Maybe HP believes that their current AP lineup is rock solid and will keep on trucking for the foreseeable future?  Right now, I don’t have answers to these questions and I don’t know where to find them.  Until I do find those answers, though, I’m going to keep a wary eye on HP Wireless.  They’ve shown in the past that they have the capability to impress and innovate.  Now they have to prove it to me again.

Wireless Field Day 2 Disclaimer

HP was a sponsor of Wireless Field Day 2.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Wireless Field Day 2.  In addition, they provided me with a 2.4 GHz wireless mouse with micro receiver and a pen and paper set.  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Aruba – Wireless Field Day 2

Day 2 of Wireless Field Day 2 kicked off with a double (4-hour) session at Aruba Networks.  I’ve worked with Aruba a little bit in the past, but my experience with them isn’t as extensive as my experience with HP or Cisco.  I’m pretty sure that I’m going to see a lot of them in the future, so I was excited to get to pick the brains of some of their brightest stars.

After raiding the continental breakfast table at the Aruba Executive Briefing Center, we were welcomed by Ozer Dondurmacioglu (@ozwifi), the Product Marketing Manager for Aruba.  He gave us a quick overview of the layout of the room, with the all-important Wi-Fi instructions and directions to the bathroom.  We were then greeted by Keerti Melkote, one of the founders of Aruba and the current Chief Strategy Officer.  Here’s a link to his 1-hour talk about the shift of the market to a primarily Wi-Fi driven environment:

Of course, he’s spot on with a lot of these dissections of the current wireless landscape.  I’ve seen many of my customers moving away from using cables as the primary network connection method to being more free to move around.  Wireless has gone from a cool thing to have in the conference room to a necessity of doing business, as I’m constantly reminded when the wireless around here doesn’t work.  One of the other things that I’m pleased to see Aruba “getting” is that security in the wireless realm is integral to the medium.  With all of these bits flying around over our heads, trying to bolt on security after the fact is only going to lead to disaster.  By ensuring that security is part and parcel from the very beginning, Aruba is taking a big step toward ensuring end-to-end security is integrated.

After the first presenter, Ozer treated us to an interactive game of “How Big Of An Airhead Are You?”  Named after the Aruba Airheads community site, this little trivia game was a great way to poke some fun at people while at the same time keeping us interacting during the long session.  It doesn’t hurt that the prize for getting the questions right was an Aruba Instant AP-135.  We all had a good laugh or two and moved on to the second presenter.

Next, we were treated to a discussion about BYOD from a couple of the AirWave product managers, Carlos Gomez and Cameron Esdaile.  These two Aussie gents gave us a great talk about the need for things like self-service captive portal registration for wireless connectivity, as well as the ability to push settings to devices to restrict access to resources.  A lot of the development around BYOD restrictions and control seems to be aimed at iOS devices from the Cupertino Fruit, Computer, and Tablet Company.  I don’t know if this speaks to the popularity of those devices or the ease with which the Mobile Device Management (MDM) APIs are available.  In fact, the majority of the time I ask about having a similar feature set on Android, the response is usually “Soon…”.  I’m waiting for the day when Android reaches parity with that other mobile device OS.  Another round of HBOAAAY followed and more AP-135s were handed out.

The final session was centered around the Aruba Instant AP itself.  I was a little curious about the reasoning.  Why concentrate on something designed for such a small deployment base?  Thankfully, Pradeep Iyer was ready to bring the good stuff and showed me why Aruba Instant is such an interesting technology.  It turns out that a lot of thought went into the development of Aruba Instant, from the ability to connect to a setup SSID after unboxing so no cables are needed, to the design of the GUI for management and configuration.  I’m going to take a moment to talk about this because I think people are finally starting to realize that running your GUI in Java or Flash is a “bad thing”.  The Aruba Instant GUI is coded entirely in HTML5.  That means it can be rendered on any modern browser, including Mobile Safari.  The boxes containing information in the GUI also dynamically adjust to fit screen width without scroll bars, because according to Pradeep “scrollbars are evil” (he’s right).  They also do some ingenious things like making the default language of the GUI dependent on the system language of the laptop that launched it.  Strikingly brilliant in hindsight, I think.  The graphs on the pages are also drawn with a logarithmic scale, so you don’t have random high spikes making the rest of your graph about .01 mm tall.  Great thinking there as well.

Blake Krone from the NSA Show podcast must have gotten bored with our GUI love, because he swung the conversation toward radio frequency (RF).  At the forefront of the conversation was the ability of Aruba APs to do in-band spectrum analysis with their Atheros chipsets.  Historically, APs couldn’t serve clients and do spectrum analysis at the same time.  Cisco’s solution to this problem was to buy Cognio and integrate their spectrum analysis chips into the 3500/3600 APs as CleanAir.  Aruba says that they can now do the same thing without a dedicated chip in their APs.  This does run counter to what I (and many others) have always been told, so it will be interesting to see how this feature works out.  RF discussions are always interesting because the technology they are based on changes so rapidly that having a similar talk even just six months ago would have resulted in vastly different answers.  After the final presentation, we heard from Ozer one last time and were given an Aruba RAP-2WG, a small AP the size of a deck of cards.  This one functions more like a business card for Aruba.  Since it requires an Aruba controller to operate, this one is attached to a development controller at Aruba’s headquarters.  When you hook it up, it generates an SSID that you join.  When you try to go to the web, the request is redirected to an Aruba splash page that tells you all about the Aruba wireless offerings.  You can still do some web surfing and Internet access from it, but you can’t reconfigure it unless you have an Aruba wireless controller.  A pretty neat idea, and it definitely beats all the USB drives I seem to collect at trade shows.

If you’d like to learn more about Aruba, you can check out their website at http://www.arubanetworks.com.  You can also follow them on Twitter as @ArubaNetworks.  You can also head over to their Airheads Community site and interact with lots of Aruba users, customers, and employees.  You can find the Airheads at http://community.arubanetworks.com.

Tom’s Take

Aruba has some interesting products that seem to be transitioning to new user-friendly GUI designs, from the Instant AP and controller UIs to the way AmigoPod can ease BYOD setup.  I think that their attention to the little details that we all see when we manage networks and seem to complain about (but never bother to give feedback to fix) will help them win over those looking to move up from a consumer-grade wireless vendor or make a jump from another enterprise solution.  It became clear to me during this presentation that Aruba is firmly in the number two slot when it comes to challenging for the crown of wireless.  The question is whether or not they can make gains on Cisco while the rest of the pack catches up to them.

Wireless Field Day 2 Disclaimer

Aruba was a sponsor of Wireless Field Day 2.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Wireless Field Day 2.  In addition, they provided me with an Aruba Instant AP-135 access point, an Aruba RAP-2WG access point, an Aruba polo shirt, and an Aruba pen.  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.