Accelerating E-Rate


Right after I left my job with a VAR focused on K-12 education and the federal E-Rate program, a funny thing happened.  The president gave a speech about the need for schools to get higher speed links to the Internet in order to take advantage of new technology shifts like cloud computing.  He called for the FCC and the Universal Service Administrative Company (USAC) to overhaul the E-Rate program and fix deficiencies that have cropped up over the last few years.  In the last couple of weeks, the FCC released a fact sheet outlining some of the proposed changes.  It was like a breath of fresh air.

Getting Up To Speed

The largest shift in E-Rate funding in the last two years has been in applying for faster Internet circuits.  Schools are realizing that it's cheaper to host servers offsite, either with software vendors or in clouds like AWS, than it is to apply for funding that may never come and buy equipment that will be outdated before it ships.  The limiting factor has been the schools' Internet connections.  Many of them are running serial T-1 circuits even today, because those circuits are cheap and easy to install.  Enterprising ISPs have even started bundling several T-1 links with multilink PPP to squeeze more aggregate bandwidth out of existing copper.
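
To put some numbers on that, here's a quick back-of-the-envelope sketch in Python.  The 3% multilink PPP overhead figure is my own assumption for illustration; real overhead varies by implementation.

```python
# Back-of-the-envelope math for bonding T-1 circuits with multilink PPP.
# The T-1 line rate is the standard 1.544 Mbps; the overhead figure is an
# assumption for illustration.

T1_MBPS = 1.544          # raw T-1 line rate in Mbps
MLPPP_OVERHEAD = 0.03    # assumed ~3% framing/sequencing overhead

def bundle_throughput(num_t1s: int) -> float:
    """Approximate usable throughput of an MLPPP bundle of T-1s, in Mbps."""
    return num_t1s * T1_MBPS * (1 - MLPPP_OVERHEAD)

if __name__ == "__main__":
    for n in (1, 4, 8, 16):
        print(f"{n:2d} x T-1 = {bundle_throughput(n):6.2f} Mbps")
    # Even a 16-link bundle tops out around 24 Mbps, a fraction of what a
    # single strand of fiber can deliver.
```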

Fiber is the future of connectivity for schools.  By running buried fiber to a school district, the ISP can gradually increase the circuit bandwidth as a school's needs increase.  For many schools around the country, those needs could include online testing mandates, flipped classrooms, and even remote learning via technologies like Telepresence.  Fiber runs from ISPs aren't cheap, though.  They are so expensive right now that the majority of the current year's E-Rate funding is going to go to faster ISP connections under Priority 1.  That leaves precious little money left over to fund Priority 2 equipment.  A former customer of mine spent their Priority 1 money on a 10Gbit Internet circuit and then couldn't afford a router to connect to it because there was no Priority 2 money left.

The proposed E-Rate changes will hopefully fix some of those issues.  The changes call for simplification of the rules regarding deployments, which should drive new fiber construction.  I'm hoping this means they will do away with the "dark fiber" rule that has been in place for so many years.  Previously, you could only run fiber between sites if it was lit on both ends and in use.  This discouraged the use of spare, or dark, fiber because it couldn't be claimed under E-Rate if it wasn't passing traffic, and it has led to a large number of ISP-owned circuits being used for managed WAN connections.  A few schools that were on the cutting edge years ago managed to get dedicated point-to-point fiber runs.  In addition, the order calls for prioritizing funding for fiber deployments that will drive higher speeds and long-term efficiency.  This should let schools stop running multimode fiber simply because it is cheap and instead give preferential treatment to single-mode fiber that can carry gigabit and 10-gigabit speeds over long distances.  It should also be helpful to VARs that are poised to replace aging multimode fiber plants.

Classroom Mobility

WAN circuits aren't the only technology that will benefit from these E-Rate changes.  The order calls for a focus on ensuring that schools and libraries gain access to high speed wireless networks for users.  This has a lot to do with the explosion of personal tablets and laptops as opposed to desktop labs.  When I first started working with schools more than a decade ago, it was considered cutting edge to have a teacher computer and a student desktop in the classroom.  Today, tablet carts and one-to-one programs ensure that almost every student has access to some sort of device for research and learning.  That means schools are going to need real enterprise wireless networks.  Sadly, many schools that either don't qualify for E-Rate or can't get enough funding settle for SMB/SOHO wireless devices purchased from office supply stores simply because they are inexpensive.  That forces IT admins to spend entirely too much time troubleshooting those connections, distracting them from other, more important issues.  I think this focus on wireless will go a long way toward alleviating connectivity issues for schools of all sizes.

Finally, the FCC has ordered that the document submission process be modernized to include electronic filing options and that older technologies be phased out of the program.  This should lead to fewer mistakes in the filing process as well as more rapid decisions for appropriate technology requests.  No longer do schools need to concern themselves with whether they need directory assistance on their Priority 1 phone lines.  Instead, they can focus on their problem areas and get what they need quickly.  There is also talk of fixing the audit and appeals process and of speeding the deployment of funds.  As anyone who has worked with E-Rate will attest, the bureaucracy surrounding the program is difficult for all but the most seasoned professionals.  Even the E-Rate wizards have problems from year to year figuring out when an application will be approved or whether an audit will take place.  Making these processes easier and more transparent will be good for everyone involved in the program.


Tom’s Take

I posted previously that the cloud would kill the E-Rate program as we know it.  It appears I was right, from a certain point of view.  Mobility and the cloud have both caused the E-Rate program to be evaluated and overhauled to address the changes in technology that are now filtering into schools from the corporate sector.  Someone was finally paying attention and figured out that we need to fund faster Internet circuits and wireless connectivity instead of DNS servers and more cabling for nonexistent desktops.  Taking these steps shows that there is still life left in the E-Rate program and its ability to help schools.  I still say that USAC needs to boost the funding considerably to help more schools all over the country.  I'm hoping that once the changes in the FCC order go through, more money will be poured into the program and our children can reap the benefits for years to come.

Disclaimer

I used to work for a VAR that did a great deal of E-Rate business.  I don’t work for them any longer.  This post is my work and does not reflect the opinion of any education VAR that I have talked to or have been previously affiliated with.  I say this because the Schools and Libraries Division (SLD) of USAC, which is the enforcement and auditing arm, can be a bit vindictive at times when it comes to criticism.  I don’t want anyone at my previous employer to suffer because I decided to speak my mind.

Causing A Network Ruckus


The second presentation on day 2 of Network Field Day came from Ruckus Wireless. Yes, a wireless company at a non-wireless Field Day event. I had known for a while that Ruckus wanted to present at Network Field Day, and I was excited to see what they would bring. My previous experience with Ruckus was very enlightening, and I wanted to see how they would do outside the comfort zone of a wireless event. Add in the fact that most networks are now converging to offer both wired and wireless access, and you can see the appeal of being the only wireless company on the slate.

We started off with a talk from GT Hill (@GTHill). GT is one of those guys who started out very technical before jumping to the dark side of marketing. I think his presentation should be required viewing for anyone who thinks they may want to talk to a Tech Field Day group. GT poured a lot of energy into his talk.  I especially loved how he took a few minutes at the beginning to ask the delegates about their familiarity with wireless.  That's not something you typically see at a vertical-focused event like NFD, but it gets back to the cross-discipline aspect that makes the broader Tech Field Day events so great.  Once GT had an idea of what we all knew, he kept each and every delegate engaged as he discussed why wireless is so hard to do compared to the "simplicity" of wired networking. Being a fan of explaining technical subjects with easy-to-understand examples, I loved GT using archery as a way to explain the relative difficulty of 802.11 broadcasts in 802.11n and 802.11ac.

The second part of the discussion, from Sandip Patel, covered 802.11ac and was great. I didn't get a chance to hear the presentations from the other wireless vendors at Wireless Field Day 3 & 4, so picking up all the new information about things like channel bandwidth and multi-user spatial streams was very nice for me.  There's a lot of new technology being poured into 802.11ac right now, and a lot more being prepped for the future.  While I knew that 160 MHz channels were going to be necessary to get the full bandwidth rates out of 802.11ac, I was unaware that you could have two 80 MHz channels working together simultaneously to provide that.  You learn something awesome at every Field Day event.  I think 802.11ac is going to push a lot of lesser vendors out of the market before all is said and done.  The huge leap forward in throughput comes at a great cost: making sure your wireless radios work correctly while accommodating noise and interference.  Companies like Cisco and Aruba are going to come out okay just by virtue of being so large, and Aerohive should come out fine as well.  I think Ruckus has taken a unique approach with their antenna technology, and that shows in these presentations; Ruckus will be the first to tell you that their superior transmitting technology means a cleaner signal between client and AP.  I want to see a real 802.11ac AP from every wireless company put together in a room with various noise producers to see what happens.  Maybe something for Wireless Field Day 5?
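
Sandip's point about channel width and spatial streams is easy to see with a little arithmetic.  Here's a minimal sketch using the published top-end 802.11ac PHY rates at MCS 9 with a short guard interval; treat it as rough illustration, since real-world throughput lands well below these numbers.

```python
# Rough 802.11ac PHY-rate scaling at MCS 9 with a short guard interval.
# Per-stream figures below are the published rates for 80 and 160 MHz
# channels (a 160 MHz channel can also be built from two 80 MHz halves).

PER_STREAM_MBPS = {80: 433.3, 160: 866.7}   # channel width (MHz) -> Mbps

def phy_rate(width_mhz: int, spatial_streams: int) -> float:
    """Top-end 802.11ac PHY rate for a given width and stream count."""
    return PER_STREAM_MBPS[width_mhz] * spatial_streams

print(phy_rate(80, 3))     # ~1300 Mbps: typical first-wave 3x3 AP
print(phy_rate(160, 4))    # ~3467 Mbps: 160 MHz (or 80+80) with 4 streams
print(phy_rate(160, 8))    # ~6933 Mbps: the spec's theoretical ceiling
```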

After we shut off the cameras, we got to take a tour of the Ruckus testing facilities.  Since Ruckus had moved buildings after Wireless Field Day 2, it was a brand new room, with a lot more space than the testing area we'd seen before.  They still had a lot of the same strange containers and rooms designed to subject access point radios to the strangest RF environments imaginable.  In the new building, there was just a lot more elbow room to walk around, along with more tables to spread out and get down to the nuts and bolts of testing.

If you’d like to learn more about Ruckus Wireless and their solutions, you can check them out at http://www.ruckuswireless.com.  You can also follow them on Twitter as @ruckuswireless.


Tom’s Take

While the Ruckus presentation was geared more toward people who weren't that familiar with the wireless space, I loved it nonetheless.  GT Hill related to a group of non-wireless people in the best way I could imagine.  Sandip brought a lot of info about 802.11ac to the table now that vendors are starting to ramp up toward putting out enterprise APs.  Ruckus wanted to show everyone that wireless is an important part of the conversation when it comes to the larger networking story.  While we spend a lot of time at NFD talking about SDN or data centers or other lofty things, it's important to remember that our tweets and discussion and even our video coverage come over a wireless network of some kind.  Going to a vendor without some form of wireless access counts as a demerit against them in my book.  I've always made a point of paying attention once I notice that something is everywhere I go.  Thankfully, Ruckus made the right kind of noise to make the delegates sit up and pay attention.

Tech Field Day Disclaimer

Ruckus was a sponsor of Network Field Day 5.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 5.  In addition, Ruckus provided me with lunch at their offices.  They also provided a custom nameplate and a gift package containing a wireless access point and controller.  At no time did they ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own, and any errors or omissions are mine and mine alone.

Additional Network Field Day 5 Coverage

Terry Slattery – Network Field Day 5: Ruckus Wireless

Pete Welcher – Network Field Day 5: Ruckus Wireless Comments

Pete Welcher – Testing WLAN and Network Management Products

Cisco Borderless Idol


Day one of Network Field Day 5 (NFD5) included presentations from the Cisco Borderless team. You probably remember their "speed dating" approach at NFD4, which gave us a wealth of information in 15-minute snippets. The only drawback to that lineup is that when you find a product or a technology that interests you, there really isn't any time to quiz the presenter before they are ushered off stage. Someone must have listened when I said that before, because this time they brought us 20-minute segments: 10 minutes of presentation, 10 minutes of demo. With the switching team, we even got to vote on our favorite to bring back for the next round (hence the title of the post). More on that in a bit.

6500 Quad Supervisor Redundancy

First up on the block was the Catalyst 6500 team. I swear this switch is the Clint Howard of networking, because I see it everywhere. The team wanted to tell us about a new feature available in the ((verify code release)) code on the Supervisor 2T (Sup2T). Previously, the supervisor was capable of a couple of very unique functions. The first of these is Stateful Switch Over (SSO). During SSO, the redundant supervisor in the chassis can pick up where the primary left off in the event of a failure. All of the traffic sessions keep on trucking even if the active sup module is rebooting. This gives the switch tremendous uptime and allows for things like hitless upgrades in production. The other existing feature of the Sup2T is the Virtual Switching System (VSS). VSS allows two Sup2Ts to appear as one giant switch. This is helpful for applications where you don't want to trust your traffic to just one chassis. VSS allows two different chassis to terminate Multi-Chassis EtherChannel (MLAG) connections so that distribution layer switches don't have a single point of failure. Traffic looks like it's flowing to one switch when in actuality it may be flowing to one or the other. In the event that a supervisor goes down, the other one can keep forwarding traffic.

Enter the quad-sup SSO ability. Now, instead of having RPR-only failover on the members of a VSS cluster, you can set up the redundant Sup2T modules to be ready and waiting in the event of a failure. This is great because you can lose up to three Sup2Ts at once and still keep forwarding while they reboot or get replaced. Granted, anything that can take out three Sup2Ts at once (like a power failure or surge) is probably going to take down the fourth, but it's still nice to know you have a fair amount of redundancy now. This only works on the Sup2T, so you can't get it if you are still running the older Sup720. You also need to make sure your line cards support the newer Distributed Forwarding Card 3 (DFC3), which means you aren't going to want to do this with anything less than a 6700-series line card. In fact, you really want to be using the 6800 series or better just to be on the safe side. As Josh O'Brien (@joshobrien77) commented, this is a great feature to have, but it should have been there already. I know there are a lot of technical reasons why this wasn't available earlier, and I'm sure the increased fabric speeds in the Sup2T, not to mention the increased capability of the DFC3, were necessary components of the solution. Still, I think this is something that probably should have shipped in the Sup2T on day one. I suppose that given the long road the Sup2T took to get to us, "better late than never" is applicable here.
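
To make the redundancy model concrete, here's a toy sketch of four supervisors in a VSS pair.  It's purely conceptual, and is emphatically not how IOS implements SSO; it just illustrates why a pre-synchronized standby makes takeover a role change rather than a cold boot.

```python
# Toy model of supervisor failover across a VSS pair with four Sup2Ts.
# Conceptual only: real SSO synchronizes protocol and forwarding state so
# a standby can take over without dropping sessions.

from dataclasses import dataclass

@dataclass
class Supervisor:
    name: str
    healthy: bool = True

class VssCluster:
    def __init__(self, sups):
        self.sups = sups   # all standbys are "hot" in this model

    @property
    def active(self):
        # The first healthy supervisor owns the active role.
        return next((s for s in self.sups if s.healthy), None)

    def fail(self, name: str):
        for s in self.sups:
            if s.name == name:
                s.healthy = False
        survivor = self.active
        if survivor:
            print(f"{name} failed; {survivor.name} is active, still forwarding")
        else:
            print("all four supervisors down; forwarding stops")

cluster = VssCluster([Supervisor(f"Sup2T-{i}") for i in range(1, 5)])
for victim in ("Sup2T-1", "Sup2T-2", "Sup2T-3"):
    cluster.fail(victim)   # three failures in, traffic still flows
```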

UCS-E

Next up was the Cisco UCS-E series server for the ISR G2 platform. This was something that we saw at NFD4 as well. The demo was a bit different this time, but for the most part this is similar info to what we saw previously.


Catalyst 3850 Unified Access Switch

The Catalyst 3850 is Cisco's new entry into the fixed-configuration switch arena. They are touting it as a "Unified Access" solution for clients, because the 3850 is capable of terminating up to 50 access points (APs) per stack of four. This thing can basically function as a wiring closet wireless controller, using the new IOS wireless controller functionality that's also featured in the new 5760 controller. This gets away from the old Airespace-like CLI that was so prominent on the 2100, 2500, 4400, and 5500 series controllers. The 3850, which is based on the 3750X, also sports a new 480Gbps stacking connector, appropriately called StackWise-480. This means a stack of 3850s can move some serious bits. All that power does come at a cost: StackWise-480 isn't backwards compatible with the older StackWise v1 and v2 from the 3750 line. This is only an issue if you are trying to deploy 3850s into existing 3750X stacks, and Cisco has announced the End of Sale (EoS) and End of Life (EoL) information for those older 3750s. I'm sure the idea is that when you go to rip them out, you'll be more than happy to replace them with 3850s.

The 3850 wireless setup is a bit different from the old 3750 Access Controller that had a 4400 controller bolted onto it. The 3850 uses Cisco's IOS-XE model of virtualizing IOS into a sort of VM that runs on one core of a dual-core processor, leaving the second core available to do other things. Previously, at NFD4, we'd seen the Catalyst 4500 team using that other processor core for inline Wireshark captures. Here, the 3850 team is using it to run the wireless controller. That's a pretty awesome idea when you think about it. Once I no longer have to worry about IOS consuming the whole processor, and I know I have another core to use, I can start thinking about some interesting ideas.
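
If you want to play with the idea on a garden-variety Linux box, here's a loose analogy using CPU affinity: pin one process to each core and let them do independent work.  This is an analogy only; IOS-XE's internals look nothing like this, and `sched_setaffinity` is Linux-specific.

```python
# Loose analogy for "one core for IOS, one core for a service": pin two
# processes to different CPU cores. Linux-only (sched_setaffinity).

import os
from multiprocessing import Process

def service(core: int, name: str):
    os.sched_setaffinity(0, {core})   # restrict this process to one core
    print(f"{name} pinned to core(s) {os.sched_getaffinity(0)}")

if __name__ == "__main__":
    ios = Process(target=service, args=(0, "forwarding plane"))
    wlc = Process(target=service, args=(1, "wireless controller"))
    ios.start(); wlc.start()
    ios.join(); wlc.join()
```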

The 3850 does have a couple of drawbacks. Aside from the StackWise limitations above, you have to terminate the APs on the 3850 stack itself. Unlike CAPWAP connections that tunnel all the way back to an Airespace-style controller, the 3850 needs the APs directly connected in order to decapsulate the tunnel. That provides for some interesting QoS implications and applications, but it doesn't provide much flexibility from a wiring standpoint. I think the primary use case is one 3850 switch (or stack) per wiring closet, which the current 50-AP limit would support. The other drawback is that the 3850 is currently limited to a stack of four switches, as opposed to the six-switch limit on the 3750X. Aside from that, it's a switch you probably want to take a look at for your wiring closets now. You can buy it with an IP Base license today and then add the AP licenses down the road as you bring them online. You can even use the 3850s to terminate CAPWAP connections and manage the APs from a central controller without adding the AP license.

Here is the deep dive video that covers a lot of what Cisco is trying to do from a unified wired and wireless access policy standpoint. Also, keep an eye out for the cute Unified Access video in the middle.

Private Data Center Mobility

I found it interesting that this demo was in the Borderless section and not the Data Center presentation. This presentation dove into the world of Overlay Transport Virtualization (OTV). Think of OTV as an extra layer of 802.1Q-in-Q tunneling with some IS-IS routing mixed in. OTV is Cisco's answer to extending the layer 2 boundary between data centers so VMs can be moved to other sites without breaking their networking. Layer 2 everywhere isn't the most optimal solution, but it's the best thing we've got given the current state of VM networking (until Nicira figures out what they're going to do).
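
For the curious, here's a conceptual sketch of what an OTV-style overlay does: a control plane advertises which site owns a MAC address, and the data plane wraps Ethernet frames in IP toward that site.  Everything here is invented for illustration; it bears no resemblance to Cisco's actual wire format.

```python
# Conceptual OTV-style overlay: learn MAC reachability via a control
# plane (IS-IS in real OTV), then encapsulate L2 frames in IP.

mac_table = {}   # MAC address -> IP of the remote overlay edge device

def advertise(mac: str, edge_ip: str):
    """Control plane: learn which data center edge owns a MAC."""
    mac_table[mac] = edge_ip

def encapsulate(frame: dict) -> dict:
    """Data plane: wrap the Ethernet frame in an IP packet to the remote
    edge instead of flooding it across a stretched VLAN."""
    edge = mac_table.get(frame["dst_mac"])
    if edge is None:
        # Real OTV also suppresses unknown-unicast flooding across sites.
        raise LookupError("no route to MAC")
    return {"outer_dst_ip": edge, "payload": frame}

advertise("00:50:56:aa:bb:cc", "192.0.2.10")   # VM now living in DC-2
pkt = encapsulate({"dst_mac": "00:50:56:aa:bb:cc", "data": b"vm traffic"})
print(pkt["outer_dst_ip"])   # the frame rides IP between data centers
```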

We loved this session so much that we asked Mostafa to come back and talk about it more in depth.

The most exciting part of this deep dive for me was the introduction of LISP. To be honest, I hadn't really been able to wrap my head around LISP the first couple of times I saw it. Now, thanks to the Borderless team and Omar Sultan (@omarsultan), I'm going to dig into it a lot more in the coming months. I think there are some very interesting problems that LISP can solve, including my IPv6 Gordian Knot.


Tom’s Take

I have to say that I liked Cisco's approach to the presentations this time.  Giving us discussion time along with a demo allowed us to understand things before we saw them in action.  The extra five minutes helped quite a bit, as the presenters didn't feel as rushed this time.  The "Borderless Idol" style of voting to hear more about a presentation was brilliant.  We got to dig into something we wanted to explore in depth, and I even learned something that I plan on blogging about later down the line.  Sure, there was a bit of repetition in a couple of areas, most notably UCS-E, but I can understand how those product managers have invested time and effort into their wares and want to give them as much exposure as possible.  Borderless hits all over the spectrum, so keeping the discussion focused in a specific area can be difficult.  Overall, I would say Cisco did a good job, even without Ryan Seacrest hosting.

Tech Field Day Disclaimer

Cisco was a sponsor of Network Field Day 5.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 5.  In addition, Cisco provided me with breakfast and lunch at their offices.  They also provided a Moleskine notebook, a t-shirt, and a flashlight toy.  At no time did they ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own, and any errors or omissions are mine and mine alone.

Aerohive Is Switching Things Up


I’ve had the good fortune to be involved with Aerohive Networks ever since Wireless Field Day 1.  Since then, I’ve been present for their launch of branch routing.  I’ve also convinced the VAR that I work for to become a partner with them, as I believe that their solutions in the wireless space are of great benefit to my customer base.  It wasn’t long ago that some interesting rumors started popping up.  I noticed that Aerohive started putting out feelers to hire a routing and switching engineer.  There was also a routing and switching class that appeared in the partner training list.  All of these signs pointed to something abuzz on the horizon.

Today, Aerohive is launching a couple of new products.  The first of these is the aforementioned switching line.  Aerohive is taking their expertise in HiveOS and HiveManager and placing it into a rack with 24 cables coming out of it.  The idea came about when they analyzed their branch office BR100 and BR200 models and found that a large majority of their remote/branch office customers needed more than the 4 switch ports offered on those models.  Aerohive had an "a-ha" moment and decided it was time to start making enterprise-grade switches.  The beauty of a switch offering from a company like Aerohive is that the great management software already available for their existing products now extends to wired ports as well.  All of the existing policies that you can create through HiveManager can now be attached to an Aerohive switch port.  The GUI for port role configuration is equally nice:

[Screenshot: port role configuration GUI in HiveManager]

In addition, the management dashboard has been extended and expanded to allow for all kinds of information to be pulled out of the network thanks to the visibility that HiveManager has.  You can also customize these views to your heart’s content.  If you frequently find yourself needing to figure out who is monopolizing your precious bandwidth, you’ll be happy with the options available to you.

The first of three switch models, the SR2024, is available today.  It has 24 GigE ports, 8 PoE+ ports, 4 GigE uplinks, and a single power supply.  In the coming months, there will be two additional switches with full PoE+ capability across 24 or 48 ports, redundant power supplies, and 10 GigE SFP+ uplinks.  For those who might be curious, I asked Abby Strong about the SFPs, and Aerohive will let you use just about anyone's SFPs.  I think that's a pretty awesome idea.

The other announcement from Aerohive is software based.  One of the common patterns in today's wireless networks is containing application traffic via multiple SSIDs. If you've got management users, end users, and guests all accessing your network at once, you've undoubtedly created policies that allow them to access information differently.  Perhaps management has unfettered access to sites like Facebook while end users can only access it during break hours.  Guests can go where they want but are subject to bandwidth restrictions to prevent them from monopolizing resources.  In the past, you would need three different SSIDs to accomplish something like this.  Broadcasting a lot of SSIDs creates wireless congestion as well as user confusion and an increased attack surface.  If only there were a way to have visibility into the applications that users are accessing and create policies and actions based on that visibility.

Aerohive is also announcing application visibility in the newest HiveOS and HiveManager updates.  This allows administrators to peer deeply into the applications being used on the network and create per-user policies to allow or restrict them based on various criteria.  These policies follow the user through the network, up to and including the branch office.  Later in the year, Aerohive will port these policies to their switching line.  When you consider that the majority of users today reach the network from mobile devices first and foremost, though, the wireless side is where the visibility is needed most.  Administrators can provide user-based controls and reporting to identify bandwidth hogs and take appropriate action to increase bandwidth for critical applications on the fly.  This allows the most flexibility for both users and administrators.  In truth, it's all the nice things about site-wide QoS policies without all the ugly wrench-turning involved in configuring QoS.  How could you not want that?
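
To see why this beats juggling SSIDs, consider a minimal sketch of a per-user application policy engine.  The roles, applications, hours, and limits below are invented for illustration; Aerohive's actual policy model is far richer than this.

```python
# Minimal per-user application policy: one SSID, different treatment per
# role. Roles, apps, hours, and limits are invented for illustration.

POLICIES = {
    ("management", "facebook.com"): {"action": "allow"},
    ("employee",   "facebook.com"): {"action": "allow",
                                     "hours": range(12, 13)},  # lunch break
    ("guest",      "*"):            {"action": "allow",
                                     "rate_limit_kbps": 512},
}

def decide(role: str, app: str, hour: int) -> dict:
    rule = (POLICIES.get((role, app))
            or POLICIES.get((role, "*"))
            or {"action": "deny"})
    if "hours" in rule and hour not in rule["hours"]:
        return {"action": "deny"}
    return rule

print(decide("management", "facebook.com", 9))   # allowed any time
print(decide("employee", "facebook.com", 9))     # denied outside lunch
print(decide("guest", "youtube.com", 9))         # allowed, capped at 512 kbps
```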


Tom’s Take

Aerohive's dip into the enterprise switching market isn't all that shocking.  They seem to be taking a page from Meraki and offering their software platform on a variety of hardware.  This is great for most administrators because once you've learned the software interface and policy creation, porting it between wired switch ports and wireless APs is seamless.  That creates an environment focused on solving business problems, not configuration-guide problems.  The Aerohive switches are never going to outperform a Nexus 7000 or a Catalyst 4500.  For what they've been designed to accomplish in the branch office, however, I think they'll fit the bill just fine.  And that's something to be buzzing about.

Disclaimer

Aerohive provided a briefing about the release of these products.  I spoke with Jenni Adair and Abby Strong.  At no time did Aerohive or their representatives ask for any consideration in the writing of this post, nor were they assured of any of the same.  All of the analysis and opinions represented herein are mine and mine alone.

Are We Living In A Culture Of Beta?

Cisco released a new wireless LAN controller last week, the 5760.  Blake and Sam have a great discussion about it over at the NSA Show.  It’s the next generation of connection speeds and AP support.  It also runs a new version of the WLAN controller code that unifies development with the IOS code team.  That last point generated a bit of conversation between wireless rock stars Scott Stapleton (@scottpstapleton) and George Stefanick (@wirelesssguru) earlier this week.  In particular, a couple of tweets stood out to me:

Overall, the number of features missing from this new IOS-style code release borders on concerning.  I understand that porting code to a new development base is never easy.  Being a fan of video games, I've endured the pain of watching features be removed because they needed to be recoded the "right way" in a new code base instead of being hacked together.  Cisco isn't the only culprit in this whole mess.  Software quality has been going downhill for quite a while now.

Our culture is living in a perpetual state of beta testing.  There's a lot of blame to go around on this.  We as consumers and users want cutting-edge technology, and we're willing to sacrifice things like stability and usability for a little peek at future awesomeness.  Companies are rushing to be first to market on new technologies, because being first at anything is an edge when it comes to marketing and, now, patent litigation.  Producers just want to ship stuff.  They don't really care whether it's finished or not.

Stability can be patched.  Bugs can be coded out in the next release.  What’s important is that we hit our release date.  Who cares if it’s an unrealistic arbitrary day on the calendar picked by the marketing launch team?  We have to be ready otherwise Vendor B will have their widget out and our investors will get mad and sell off the stock!  The users will all leave us for the Next Big Thing and we’ll go out of business!!!  

Okay, maybe not every conversation goes like that, but you can see the reasoning behind it.

Google is probably the worst offender of the bunch here.  How long was GMail in beta?  As it turns out, five years.  I think they probably worked out most of the bugs of getting electronic communications from one place to another after the first nine months or so.  Why keep it in beta for so long?  I think it was a combination of laziness and legality.  Google didn't really want to support GMail beyond cursory forum discussions or basic troubleshooting steps.  By keeping it "beta" for so long, they could always fall back on the excuse that it wasn't quite finished, so it wasn't supposed to be in production.  That also protected them from the early adopters who moved their entire enterprise mail systems into GMail.  If you lost messages, it wasn't a big deal to Google.  After all, it was still in beta, right?  Google's stated reason for finally dropping the beta tag after five years was that it didn't fit the enterprise market Google was going after.  It turns out the risk analysts really didn't like having all their critical communication infrastructure running through a product with a "beta" tag on it, even if GMail had ceased being beta in any real sense years before.

Software companies thrive on getting code into consumers' hands because we've effectively become an unpaid quality assurance (QA) platform for them.  Apple's beta code for iOS gets leaked onto the web hours after it's posted to the developer site.  There's even a cottage industry of sites that will upload your device's UDID to a developer account so you can use the beta code.  You actually pay money to someone for the right to run code that will be released for free in a few months' time.  In essence, you are paying for a free product in order to find out how broken it is.  Silly, isn't it?  Think about Microsoft.  They've started offering free Developer Preview versions of new Windows releases to the public.  In previous iterations, the hardy beta testers of yore would get a free license for the new version as a way of saying thanks for enduring a long string of incremental builds and constant reloading of the OS, only to hit a work-stopping bug that erased your critical data. Nowadays, Microsoft releases those buggy builds with a new name, and people happily download them and run them on their hardware with no promise of any compensation.  Who cares if it breaks things?  People will complain about it and it will get fixed.  No fuss, no muss.  How many times have you heard someone say "Don't install a new version of Windows until the first service pack comes out"?  It's become such a huge deal that Microsoft never even released a second service pack for Windows 7, just update rollups.  Even Cisco's flagship NX-OS on the Nexus 7000 series switches has been accused of being a beta in progress by bloggers such as Greg Ferro (@etherealmind) in this Network Computing article (comment replies).  If the core of our data center is running on buggy, unreliable code, what hope have we for the desktop OS or mobile platform?

That's not to say that every company rushes products out the door.  Two of the most stalwart defenders of full, proper releases are Blizzard and Valve.  Blizzard is notorious for letting release dates slip in order to ensure code quality.  Diablo 2 was delayed several times between its original projected date of December 1998 and its eventual release in 2000, and it went on to become one of the best-selling computer games of all time.  Missing an unrealistic milestone didn't hurt them one bit.  Valve has one of the most famous release strategies in recent memory.  Every time someone asks founder Gabe Newell when Valve will release their next big title, his response is almost always the same: "When it's done."  Their apparent hesitance to ship unfinished software hasn't run them out of business yet.  By most accounts, they are one of the most respected and successful software companies out there.  It just goes to show that you don't have to be a slave to a release date to make it big.

Tom’s Take

The culture of beta is something I'm all too familiar with.  My iDevices run beta code most of the time.  My laptop runs developer preview software quite often.  I'm always clamoring for the newest nightly build or engineering special.  I've mellowed a bit over the years as my needs have shifted from bleeding-edge functionality to rock-solid stability, but I still jump the gun from time to time and break things in the name of being the first kid on my block to play with something new.  However, I often find that when the final stable release comes out to much fanfare in the press, I'm disappointed.  After all, I've already been using this stuff for months.  All you did was make it stable?  Therein lies the rub in the whole process.  I've survived months of buggy builds, bad battery life, and driver incompatibility only to see the software finally pushed out the door and hear my mom or my wife complain that it changed the fonts in an application or that the maps look funny now.  I want to scream and shout and complain that my pain was more than they could imagine.  That's when I usually realize what's really going on: I'm an unpaid employee fixing problems that should never have been in the build in the first place.  I've joked before about software release names, but it's sadly more true than funny.  We spend too much time troubleshooting prerelease software.  Sometimes the trouble is of our own doing.  Other times it's because the company has outsourced or fired their whole QA department.  In the end, my productivity is wasted fixing problems I should never see, all because our culture now seems to care more about how shiny something is than about how well it works.

BYOD vs MDM – Who Pays The Bill?


There's a lot of talk these days about the trend of people bringing in their own laptops, tablets, and other devices to access data and do their jobs.  While most of you (including me) call this Bring Your Own Device (BYoD), I've been hearing a lot recently about a different aspect of controlling mobile devices.  Many of my customers have been asking me about Mobile Device Management (MDM), and MDM is getting mixed into a lot of conversations about controlling the BYoD explosion.

Mobile Device Management (MDM) refers to controlling the capabilities of a device from a centralized control point, whether in the cloud or on premises.  MDM can restrict functions of a device, such as the camera or the ability to install applications.  It can also restrict which data can be downloaded and saved onto a device.  MDM also lets device managers remotely lock the device if it is lost, or even remotely wipe it should recovery be impossible.  Vendors are now pushing MDM as a big component of their mobility offerings.  Every week, it seems, some new vendor is pushing their MDM offering, whether it's a managed service software company, a wireless access point vendor, or a dedicated MDM provider.  MDM is being pushed as the solution to all your mobility pain points.  There's one issue, though.
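
Conceptually, an MDM profile is just a bundle of restrictions pushed to the device, which the device consults before allowing an action.  Here's a tiny sketch; the keys are invented and don't match any vendor's real profile schema.

```python
# Sketch of an MDM restriction profile and a device-side check. The keys
# are invented for illustration, not any vendor's real schema.

PROFILE = {
    "allow_camera": False,               # camera disabled by policy
    "allow_app_install": False,          # no new applications
    "remote_wipe_enabled": True,         # admin can erase the device
    "global_proxy": "proxy.corp.example.com:8080",
}

def is_allowed(profile: dict, action: str) -> bool:
    """Check a restriction before the device performs an action;
    anything the profile doesn't mention is allowed by default."""
    return profile.get(f"allow_{action}", True)

print(is_allowed(PROFILE, "camera"))       # False
print(is_allowed(PROFILE, "app_install"))  # False
print(is_allowed(PROFILE, "screenshot"))   # True (not restricted)
```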

MDM is a very intrusive solution for mobile devices.  A good analogy might be the rules you have for your kids at home.  There are many things they are and aren't allowed to do.  If they break the rules, there are consequences and possible punishments.  Your kids have to follow your rules if they live under your roof.  Such is the way of MDM as well.  The MDM vendors I've spoken to in the last three months intrude on the device to varying degrees.  One Windows Mobile provider started their deployment process with a total device wipe before loading an approved image onto the device.  Others require you to trust specific certificates or enroll in special services.  If you run Apple's iOS and designate the device as a managed device in iOS 6 to get access to certain new features, like the global proxy setting, you'll end up with a wiped device before you can manage it.  Services like MobileIron can even give administrators the ability to read any information on the device, personal or not.

That level of integration into a device is just too much for many people bringing their personal devices into a work environment.  They just want to be able to check their email from their phone.  They don't want a sneaky admin reading their text messages, or their entire phone wiped by a misconfigured policy setting or a mistaken device-loss report.  Could you imagine losing all your pictures or your bank account info because Exchange had a hiccup?  What about an MDM policy pushed down to disable your camera, or one that blocks in-app purchases from your app store of choice?  How about a global proxy server that restricts you from browsing questionable material from the comfort of your own home?  If you're like me, any of those makes you cringe a little.

That's why BYoD policies are important.  They function more like having your neighbor's children over at your house.  While you may have rules for your children, the neighbor's kids are just visitors.  You can't really punish them like you'd punish your own kids.  Instead, you make what rules you can to prevent them from doing things they aren't supposed to do.  In many cases, you can send the neighbor's kids off to a room with your own kids to limit the damage they can cause.  This is very much in line with the way we treat devices under BYoD policies.  We authenticate users to ensure they are supposed to be accessing data on our network.  We place data behind access lists that try to determine location or device type.  We use the network as the tool to limit access to data, rather than intruding on the device.

Both BYoD and MDM are needed in a corporate environment to some degree. The key to figuring out which needs to be applied where can be boiled down to one easy question:

Who paid for your device?

If the user bought the device, you need to explore BYoD policies as your primary method of securing the network and enabling access.  Unless you have a very clearly defined policy in place for device access, you can't just assume you have the right to disable half a user's device functions and then wipe it whenever you feel the need.  Instead, focus your efforts on setting up rules for users to follow and on containing their access to your data with access lists and user authentication.  On the other hand, if the company paid for the tablet, then MDM is the likely solution.  Since the device belongs to the corporation, they are well within their rights to do what they like with it.  Treat it like a corporate laptop or an issued BlackBerry rather than a personal iPhone, and don't be shocked if it gets wiped or random features get turned off due to company policy.
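
Reduced to code, the whole argument is a single branch.  Tongue-in-cheek, but it really is the first question a device strategy should answer:

```python
# The decision above as a one-branch function. Tongue-in-cheek, but the
# purchase question really is the first fork in the road.

def management_strategy(company_paid: bool) -> str:
    if company_paid:
        # Corporate asset: intrusive control is fair game.
        return "MDM: enroll, restrict, and wipe per company policy"
    # Personal device: control access at the network edge instead.
    return "BYoD: 802.1X or captive portal, ACLs, and a written use policy"

print(management_strategy(company_paid=False))
```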

Tom’s Take

When it's time to decide how best to manage your devices, make sure to pull out all those old credit card receipts.  If you want to enable MDM on all your corporate phones and tablets, be sure to check out http://enterpriseios.com/ for a list of the features each MDM provider supports on iOS and other OSes like Android and BlackBerry.  If you didn't get the bill for that tablet, then you probably want to get in touch with your wireless or network vendor to start exploring options for things like 802.1X authentication or captive portal access.  In particular, I like some of the solutions available from Aerohive and Aruba's ClearPass.  You're going to want both MDM and BYoD policies in your environment to be sure your devices are as useful as possible while still being safe and protecting your network.  Just remember to back it all up with a very clear, detailed written use policy to ensure there aren't any legal ramifications down the road from a wiped device or a lost phone causing a network penetration.  That's one bill you can do without.

Cisco To Buy Meraki?

If you’re in the tech industry, it never seems like there’s any downtime. That was the case today all thanks to my friend Greg Ferro (@etherealmind). I was having breakfast when this suddenly scrolled up on my Twitter feed:

After I finished spitting out my coffee, I started searching for confirmation or indication to the contrary. Stephen Foskett (@SFoskett) provided it a few minutes later by finding the following link:

http://blogs.cisco.com/news/cisco-announces-intent-to-acquire-meraki/

EDIT: As noted in the comments below, Brandon Bennett (@brandonrbennett) found a copy of the page in Google's web cache. The company name on the cached page reads "Madras," but the rest of the info is all about Meraki. I'm thinking Madras was just a placeholder.

For the moment, I'm going to assume that this is a legitimate link that will really point to something soon. I'm not going to assume Cisco creates "Cisco announces intent to acquire X Company" pages out of habit, like this famous Dana Carvey SNL video. In that case, the biggest question now becomes…

Why Meraki?

I'll admit, I was shaking my head for a bit on this one. Cisco doesn't buy companies for hardware technology. They've got R&D labs that can replicate pretty much anything under the sun given enough time. Cisco usually purchases innovative software platforms instead. They originally bought Airespace for the controller architecture and the management software that became WCS. The silicon isn't as important, since Cisco makes their own.

Meraki doesn't really make anything innovative on the hardware front. Their APs use reference designs. Their switch and firewall offerings are also pretty standard fare, with basic 10/100/1000 connectivity, and are likely based on Broadcom reference designs as well. What exactly draws in a large buyer like Cisco? What is unique among all those products?

Cisco’s Got Its Head In The Clouds

The single thing that is consistent across the whole Meraki line is the software. I talked a bit about it in my Wireless Field Day 2 post on Meraki. Their single management platform lets them manage switches, firewalls, and wireless in one application. You can see all the critical information your switches are pumping out and program them accordingly. The demo I saw at WFD2 involved isolating a hungry user who was downloading too much data, using a combination of user identification and an ACL pushed down to that user to limit their bandwidth for certain kinds of traffic without locking them out of the network entirely. That's the kind of thing Cisco is looking for.

With the announcement of onePK, Cisco really wants to show off what they can do when they start plugging APIs into their switches and routers. But simply opening an API doesn't do anything. You've got to have some kind of software program that collects data from the API and then pushes instructions back down to accomplish a goal. And if you can decentralize that control to somewhere in the cloud, you've got a recipe the marketing people will salivate over. Until now, I thought that would be some kind of application borne out of the Cisco Prime family.
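
That collect-and-push loop might look something like the sketch below. The endpoints and payloads are entirely invented for illustration; this is not onePK's or Meraki's actual API.

```python
# Sketch of a cloud controller's "collect stats, push policy" loop. The
# REST endpoints and payloads are invented; they are not a real API.

import time
import requests   # third-party HTTP client: pip install requests

CONTROLLER = "https://mgmt.example.com/api/v1"   # hypothetical endpoint
LIMIT_BPS = 50_000_000                           # 50 Mbps per-client cap

def control_loop():
    while True:
        # Collect per-client usage from the managed devices.
        clients = requests.get(f"{CONTROLLER}/clients/usage").json()
        for client in clients:
            if client["bps"] > LIMIT_BPS:
                # Push a rate-limit policy back down to the edge.
                requests.post(f"{CONTROLLER}/policies",
                              json={"client": client["mac"],
                                    "rate_limit_bps": LIMIT_BPS})
        time.sleep(30)   # poll interval
```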

If the Meraki acquisition comes to fruition, Meraki's platform will likely be rebranded as a member of the Cisco Prime family and used for this purpose. It will likely be positioned initially toward SMB and medium enterprise customers. In fact, I've got three or four use cases for this management software on Cisco hardware today with my customers. It would do a great job of replacing some of the terrible management platforms I've seen in the past, like Cisco Configuration Assistant (CCA) and the unmentioned product Cisco was pitching as a hands-off way to manage sub-50-node networks. By allowing the Meraki management software to capture data from Cisco devices, you get a proven portal for managing your switches and APs. Add in the ability to manage other SMB devices, such as a UC 500 or a small 800-series router, and you've got a smooth package you can sell to your customers for a yearly fee. Aha! Recurring, cloud-based income! That's just icing on the cake.

EDIT: 6:48 CST – Confirmed by a Cisco press release, as well as by TechCrunch and CRN.


Tom’s Take

Ruckus just had their IPO, so it was time for a shakeup in the upstart wireless market. Meraki was the target most people had in mind. I'd been asked by several traditional networking vendors recently who I thought would be the next wireless company to be acquired, and every time my money landed on Meraki. They have a good software platform that helps them manage inexpensive devices; all their engineering goes into the software. By moving beyond pure wireless products, they've raised their profile with their competitors. I never seriously expected Meraki to dethrone Cisco or Brocade with their switch offerings. Instead, I saw the Meraki switches and firewalls as add-on offerings to complement their wireless deployments. You could have a whole small office running Meraki wireless, wired, and security gear. Getting the ability to manage all those devices easily from one web-based application must have appealed to someone at Cisco M&A. I remember from my last visit to the Meraki offices that their name is an untranslatable Greek word meaning "to do something with intense passion." It can also mean "to have a place at the table." It appears Meraki has found a place at a very big table indeed.