The Trap of Net Neutrality

The President recently released a video and statement urging the Federal Communications Commission (FCC) to support net neutrality and ensure that there will be no “pay for play” access to websites or punishment for sites that compete against a provider’s interests.  I wholeheartedly support the idea of net neutrality.  However, I do like to stand on my Devil’s Advocate soapbox every once in a while.  Today, I want to show you why a truly neutral Internet may not be in our best interests.

Lawful Neutral

If the FCC mandates that the Internet remain neutral, all traffic must be treated equally.  That’s good, right?  It means that a provider can’t slow my Netflix stream or make their own webmail service load faster than Google or Yahoo.  It also means that the provider can’t legally prioritize packets either.

Think about that for a moment.  We, as network and voice engineers, have spent many an hour configuring our networks to be as unfair as possible.  Low-latency queues for voice traffic.  Weighted fair queues for video and critical applications.  Scavenger traffic classes and VLANs for file sharers and other undesirable bulk noise.  These plans take weeks to draw up and even longer to implement properly.  They help us make sense of the chaos in the network.
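
To make that concrete, here’s a minimal sketch of that engineered unfairness in Cisco IOS MQC syntax.  The class names, percentages, and interface are illustrative, not a recommendation:

    ! Classification: match on DSCP values marked at the access edge
    class-map match-all VOICE
     match dscp ef
    class-map match-all VIDEO
     match dscp af41
    class-map match-all SCAVENGER
     match dscp cs1
    !
    ! Deliberate unfairness: strict priority for voice, a guaranteed
    ! slice for video, crumbs for the scavenger class
    policy-map WAN-EDGE
     class VOICE
      priority percent 20
     class VIDEO
      bandwidth percent 30
     class SCAVENGER
      bandwidth percent 1
     class class-default
      fair-queue
    !
    interface GigabitEthernet0/1
     service-policy output WAN-EDGE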

By mandating a truly neutral net, we are saying that those carefully marked packets can’t escape from the local network with their markings intact.  We can’t prioritize voice packets once they escape the edge routers.  And if we move applications to the public cloud, we can’t ensure priority access.  Legally, the providers will be forced to remark all CoS and DSCP values at the edge and wash their hands of the whole thing.

And what about provider MPLS circuits?  If the legally mandated neutral provider is administering your MPLS circuits (as they often do for small and medium enterprises), can they copy the DSCP values into the MPLS EXP field before forwarding the packet?  Where does the law stand on prioritizing private traffic transiting a semi-public link?
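
For reference, the DSCP-to-EXP copy in question looks roughly like this on an IOS PE router.  This is a sketch with illustrative names and values; a real provider ingress policy would be far more involved:

    class-map match-all CUST-VOICE
     match dscp ef
    !
    policy-map PE-INGRESS
     class CUST-VOICE
      ! Copy the customer priority marking into the MPLS EXP bits
      set mpls experimental imposition 5
     class class-default
      set mpls experimental imposition 0
    !
    interface GigabitEthernet0/0
     description Customer-facing PE port
     service-policy input PE-INGRESS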

Chaotic Neutral

The idea of net neutrality is that no provider should have the right to decide how your traffic should be handled.  But providers will extend that idea to say they can’t deal with any kind of marking.  They won’t legally be able to offer you differentiated service even if you were willing to pay for it.  That’s the double-edged sword of neutrality.

You can be sure that the providers have already found a “solution” to the problem.  Today, quality of service (QoS) only becomes an issue when the link becomes congested.  Packets don’t queue up if there’s bandwidth available to use.  So the provider solution is simple.  If you need differentiated service, you need to buy a bigger pipe.  Overprovision your WAN circuits!  We can’t guarantee delivery unless you have more bandwidth than you need!  Who cares what the packets are marked?  Which, of course, leads to a little gem from everyone’s favorite super villain:

[Image: Syndrome meme]

Of course, the increased profits from these services will line the pockets of the providers instead of going to build out the infrastructure necessary to support all of those oversized circuits.  The only way to force providers to pony up the money to build out networks is to make it so expensive to fail that the alternative is better.  That requires complex negotiation and penalty-laden, iron-clad service level agreements (SLAs).

The solution to the issue of no prioritized traffic is to define a short list of traffic that should be prioritized.  Critical traffic like VoIP should be allowed to be expedited, as the traffic characteristics and protections we afford it make sense.  Additionally, traffic destined for a public cloud site that functions as internal company traffic should be able to be prioritized across the provider network.  Tunneling or other forms of traffic protection may be necessary to ensure this doesn’t interfere with other users.  Exempt traffic should definitely be the exception, not the rule.  And it should never fall on the providers to determine which traffic should be exempted from neutrality rules.


Tom’s Take

Net neutrality is key to the future of society.  The Internet can’t function properly if someone with a vested interest in profits decides how we consume content.  It’s like Google’s filter bubble.  But a blind blanket policy doesn’t do us any good, either.  Everyone involved in networking knows there are types of traffic that can be prioritized without having a detrimental effect.  We need to make smart decisions about net neutrality and know when to make exceptions.  But that power needs to be in the hands of the users and customers.  They will make decisions in their best interest.  The providers should have the capability to implement the needs of their customers.  Only then will the Internet be truly neutral.

Overlay Transport and Delivery

The difference between overlay networks and underlay networks still causes issues with engineers everywhere.  I keep trying to find a visualization that boils everything down to the basics that everyone can understand.  Thanks to the magic of online ordering, I think I’ve finally found it.

Candygram for Mongo

Everyone on the planet has ordered something from Amazon (I hope).  It’s a very easy experience.  You click a few buttons, type in a credit card number, and a few days later a box of awesome shows up on your doorstep.  No fuss, no muss.  Things you want show up with no effort on your part.

Amazon is the world’s most well-known overlay network.  When you place an order, a point-to-point connection is created between you and Amazon.  Your item is tagged for delivery to your location.  It’s addressed properly and finds its way to you almost by magic.  You don’t have to worry about your location.  You can have things shipped to a home, a business, or a hotel lobby halfway across the country.  The magic of an overlay is that the packets are going to get delivered to the right spot no matter what.  You don’t need to worry about the addressing.
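
In networking terms, the simplest version of that point-to-point overlay is a plain GRE tunnel.  A minimal IOS sketch (addresses illustrative) shows the trick: the inner addresses are whatever you want them to be, and the underlay only ever sees the outer delivery header.

    interface Tunnel0
     description Overlay: inner addressing rides inside the outer header
     ip address 10.99.0.1 255.255.255.252
     tunnel source GigabitEthernet0/0
     tunnel destination 203.0.113.10

The tunnel endpoints handle the delivery.  Whatever lives inside 10.99.0.0/30 never has to know how the packet got there.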

That’s not to say there isn’t some issue with the delivery.  With Amazon, you can pay for expedited delivery.  Amazon Prime members can get two-day shipping for a flat fee.  In overlays, your packets can take random paths depending on how the point-to-point connection is built.  You can pay to have a direct path provided the underlay cooperates with your wishes.  But unless a full mesh exists, your packet delivery is going to be at the mercy of the most optimized path.

Mongo Only Pawn In Game Of Life

Amazon only works because of the network of transports that live below it.  When you place an order, your package could be delivered any number of ways.  UPS, FedEx, DHL, and even the US Postal Service can be the final carrier for your package.  It’s all a matter of who can get your package there the fastest and the cheapest.  In many ways, the transport network is the underlay of physical shipping.

Routes are optimized for best forwarding.  So are UPS trucks.  Network conditions matter a lot to both packets and packages.  FedEx trucks stuck in traffic jams at rush hour don’t do much good.  Packets that traverse slow data center interconnects during heavy traffic volumes risk slow packet delivery.  And if the road conditions or cables are substandard?  The whole thing can fall apart in an instant.

Underlays are the foundation that higher order services are built on.  Amazon doesn’t care about roads.  But if shipping times increase drastically because of deteriorating roadways, you can bet they’re going to get to the bottom of it.  Likewise, overlay networks don’t directly interact with the underlay, but if packet delivery is impacted, people are going to take a long hard look at what’s going on down below.

Tom’s Take

I love Amazon.  It beats shopping in big box stores and overpaying for things I use frequently.  But I realize that the infrastructure in place to support the behemoth that is Amazon is impressive.  Amazon only works because the transport system in place is optimized to the fullest.  UPS has a computer system that eliminates left turns from driver routes.  This saves fuel even if it means the routes are a bit longer.

Network overlays work the same way.  They have to rely on an optimized underlay or the whole system crashes in on itself.  Instead of worrying about the complexity of introducing an overlay on top of things, we need to work on optimizing the underlay to perform as quickly as possible.  When the underlay is optimized, the whole thing works better.

Is LISP The Answer to Multihoming?

One of the biggest use cases for Locator/Identifier Separation Protocol (LISP) that will benefit small and medium enterprises is the ability to multihome to different service providers without needing to run Border Gateway Protocol (BGP). It’s the answer to a difficult and costly problem. But is it really the best solution?

Current SMB users may find themselves in a situation where they can’t run BGP. Perhaps their upstream ISP blocks the ability to establish a connection. In many cases, business class service is required with additional fees necessary to multihome. In order to take full advantage of independent links to different ISPs, two (or more) NAT configurations are required to send and receive packets correctly across the balanced connections. While technically feasible, it’s a mess to troubleshoot. It also doesn’t scale when multiple egress connections are configured. And more often than not, the configuration to make everything work correctly exists on a single router in the network, eliminating the advantages of multihoming.
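
For the curious, a stripped-down sketch of that dual-NAT juggling act on a single IOS router is below. Addresses are illustrative, and the SLA tracking needed for real failover is omitted, which is exactly the kind of corner that gets cut in these setups:

    interface GigabitEthernet0/0
     description Link to ISP-A
     ip address 198.51.100.2 255.255.255.252
     ip nat outside
    !
    interface GigabitEthernet0/1
     description Link to ISP-B
     ip address 203.0.113.2 255.255.255.252
     ip nat outside
    !
    interface GigabitEthernet0/2
     description Inside LAN
     ip address 192.168.1.1 255.255.255.0
     ip nat inside
    !
    ip access-list standard LAN
     permit 192.168.1.0 0.0.0.255
    !
    ! One translation rule per egress interface
    route-map VIA-ISP-A permit 10
     match ip address LAN
     match interface GigabitEthernet0/0
    route-map VIA-ISP-B permit 10
     match ip address LAN
     match interface GigabitEthernet0/1
    !
    ip nat inside source route-map VIA-ISP-A interface GigabitEthernet0/0 overload
    ip nat inside source route-map VIA-ISP-B interface GigabitEthernet0/1 overload
    !
    ! Two equal default routes; link failure detection not shown
    ip route 0.0.0.0 0.0.0.0 198.51.100.1
    ip route 0.0.0.0 0.0.0.0 203.0.113.1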

LISP seeks to solve this by using a mapping database to send packets to the correct Egress Tunnel Router (ETR) without the need for BGP. The diagram of a LISP packet looks a lot like an overlay. That’s because it is in many ways. The LISP packets are tunneled from an Ingress Tunnel Router (ITR) to a LISP-speaking decapsulation point. Depending on the deployment policies of LISP for a given ISP, that could be the next hop router on a connection. It could also be a router several hops upstream. LISP is capable of operating over non-LISP speaking connections, but it does eventually need decapsulation.
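
A rough sketch of what the site side might look like in IOS LISP syntax is below, registering one EID prefix with a locator per ISP link. The prefix, RLOCs, map-server address, and key are all illustrative:

    router lisp
     ! Register the site EID prefix with one RLOC per ISP link
     database-mapping 192.168.1.0/24 198.51.100.2 priority 1 weight 50
     database-mapping 192.168.1.0/24 203.0.113.2 priority 1 weight 50
     ipv4 itr map-resolver 192.0.2.10
     ipv4 etr map-server 192.0.2.10 key SITE-KEY
     ipv4 itr
     ipv4 etr

Remote ITRs consult the mapping system and spread traffic across both RLOCs according to the priority and weight values; that is multihoming without a single BGP session.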

Where’s the Achilles’ heel in this design? LISP may solve the issue without BGP, but it does introduce the need for the LISP session to terminate on a single device (or perhaps a group of devices). This creates issues in the event the link goes down and the backup link needs to be brought online. That tunnel state won’t be preserved across the failover. It’s also a gamble to assume your ISP will support LISP. Large ISPs may give you options to terminate LISP connections. But what about the smaller ISP that services many SMB companies? Does the local telephone company have the technical ability to configure a LISP connection? Let alone make it redundant and highly available?

Right Tool For The Job

I think back to a lesson my father taught me about tools. He told me, “Son, you can use a screwdriver as a chisel if you try hard enough. But you’re better off spending the money to buy a chisel.” The argument against using BGP to multihome ISP connections has always come down to cost. I’ve gotten into heated discussions with people who always come back to the expense of upgrading to a business-class connection to run BGP or ensure availability. NAT may allow you to multihome across two residential cable modems, but why do you need 99.999% uptime across those two if you’re not willing to pay for it?

LISP solves one issue only to introduce more. I see LISP being misused the same way NAT has been. LISP was proposed by David Meyer to solve the exploding IPv4 routing table and the specter of an out-of-control IPv6 routing table.  While multihoming is certainly another function that it can serve, I don’t think that was Meyer’s original idea.  BGP might not be perfect, but it’s what we’ve got.  We’ve been using it for a while and it seems to get the job done.  LISP isn’t going to replace BGP by a long shot.  All you have to do is look at the LISP Alternative Logical Topology (LISP+ALT), which was the first iteration of the mapping database before the current LISP-TREE.  Guess what LISP+ALT used for mapping?  That’s right, BGP.


Tom’s Take

LISP multihoming for IPv4 or IPv6 in SMEs isn’t going to fix the problem we have today with trying to create redundancy from consumer-grade connections.  It is another overlay that will create some complexity and eventually not be adopted because there are still enough people out there that are willing to forgo an interesting idea simply because it came from Cisco.  IPv6 multihoming can be fixed at the protocol level.  Tuning router advertisements or configuring routes at the edge with BGP will get the job done, even if it isn’t as elegant as LISP.  Using the right tool for the right job is the way to make multihoming happen.

CCIE Version 5: Out With The Old

Cisco announced this week that they are upgrading the venerable CCIE certification to version five.  It’s been about three years since Cisco last refreshed the exam and several thousand people have gotten their digits.  However, technology marches on.  Cisco talked to several subject matter experts (SMEs) and decided that some changes were in order.  Here are a few of the ones that I found the most interesting.

[Image: CCIE v5 lab exam schedule]

Time Is On My Side

The v5 lab exam has two pacing changes that reflect reality a bit better.  The first is the ability to take some extra time on the troubleshooting section.  One of my biggest peeves about the TS section was the hard 2-hour time limit.  One of my failed attempts had me right on the verge of solving an issue when the time limit slammed shut on me.  If I only had five more minutes, I could have solved that problem.  Now, I can take those five minutes.

The TS section has an available 30-minute overflow window that can be used to extend your time.  Be aware that time has to come from somewhere, since the overall exam is still eight hours.  You’re borrowing time from the configuration section.  Be sure you aren’t doing yourself a disservice at the beginning.  In many cases, the candidates know the lab config cold.  It’s the troubleshooting they need a little more time with.  This is a welcome change in my eyes.

Diagnostics

The biggest addition is the new 30-minute Diagnostic section.  Rather than focusing on problem solving, this section is more about problem determination.  There’s no CLI.  Only a set of artifacts from a system with a problem: emails, log files, etc.  The idea is that the CCIE candidate should be an expert at figuring out what is wrong, not just how to fix it.  This is more in line with the troubleshooting sections in the Voice and Security labs.  Parsing log files for errors is a much larger part of my time than implementing routing.  Teaching candidates what to look for will prevent future problems with newly minted CCIEs who can’t diagnose issues in front of customers.

Some are wondering if the Diagnostic section is going to be the new “weed out” addition, like the Open Ended Questions (OEQs) from v3 and early v4.  I see the Diagnostic section as an attempt to temper the CCIE with more real world needs.  While the exam has never been a test of ideal design, knowing how to fix a non-ideal design when problems occur is important.  Knowing how to find out what’s screwed up is the first step.  It’s high time people learned how to do that.

Be Careful What You Wish For

The CCIE v5 is seeing a lot of technology changes.  The written exam is getting a new section, Network Principles.  This serves to refocus candidates away from Cisco-specific solutions and more toward making sure they are experts in networking.  There’s a lot of opportunity here to reinforce networking fundamentals rather than idle trivia about config minimums and maximums.  Let’s hope this pays off.

The content of the written is also being updated.  Cisco is going to make sure candidates know the difference between IOS and IOS XE.  Cisco Express Forwarding is going to get a focus, as is IS-IS (again).  Given that IS-IS is important in TRILL, this could be an indication of where FabricPath development is headed.  The written is also getting more IPv6 topics.  I’ll cover IPv6 in just a bit.

The biggest change in content is the complete removal of Frame Relay.  It’s been banished to the same pile as ATM and ISDN.  No written, no lab.  In its place, we get Dynamic Multipoint VPN (DMVPN).  I’ve talked about why Frame Relay is on the lab before.  People still complained about it.  Now, you get your wish.  DMVPN with OSPF serves the same purpose as Frame Relay with OSPF.  It’s all about Stupid Router Tricks.  Using OSPF with DMVPN requires the use of mGRE, which is a Non-Broadcast Multi-Access (NBMA) network.  Just like Frame Relay.  The fact that almost every guide today recommends you use EIGRP with DMVPN should tell you how hard it is to do.  And now you’re forced to use OSPF to simulate NBMA instead of Frame Relay.  Hope all you candidates are happy now.
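
If you are curious what candidates are in for, here is a minimal hub-side sketch in IOS (addressing illustrative).  The forced network type and rigged DR election are the same Stupid Router Tricks Frame Relay required:

    interface Tunnel0
     description DMVPN hub
     ip address 10.0.0.1 255.255.255.0
     no ip redirects
     ip nhrp map multicast dynamic
     ip nhrp network-id 1
     ! The NBMA gymnastics: force the network type and win DR election
     ip ospf network broadcast
     ip ospf priority 255
     tunnel source GigabitEthernet0/0
     tunnel mode gre multipoint
     tunnel key 1
    !
    router ospf 1
     network 10.0.0.0 0.0.0.255 area 0

Spokes need ip ospf priority 0 so they never try to become the DR.  Miss that one line and the adjacencies fall apart in ways that will feel very familiar to Frame Relay veterans.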

vCCIE

The lab is also 100% virtual now.  No physical equipment in either the TS or lab config sections.  This is a big change.  Cisco wants to reduce the amount of equipment that needs to be physically present to build a lab.  They also want to be able to offer the lab in more places than San Jose and RTP.  Now, with everything being software, they could offer the lab at any secured PearsonVUE testing center.  They’ve tried this in the past, but the access requirements caused problems.  Now, it’s all delivered in a browser window.  This will make remote labs possible.  I can see a huge expansion of the testing sites around the time of the launch.

This also means that hardware-specific questions are out.  Like layer 2 QoS on switches.  The last reason to have a physical switch (WRR and SRR queueing) is gone.  Now, all you are going to get quizzed on is software functionality.  Which probably means the loss of a few easy points.  With the removal of Frame Relay and L2 QoS, I bet the services section of the lab is going to be really fun now.

IPv6 Is Real

Now, for my favorite part.  The JNCIE has had a robust IPv6 section for years.  All routing protocols need to be configured for IPv4 and IPv6.  The CCIE has always had a separate IPv6 section.  Not any more.  Going forward in version 5, all routing tasks will be configured for v4 and v6.  Given that RIPng has been retired to the written exam only (finally), it’s a safe bet that you’re going to love working with OSPFv3 and EIGRP for IPv6.
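
As a taste of what that means in practice, here is a dual-stack sketch using the OSPFv3 address-family syntax found in modern IOS (process number and addresses illustrative):

    router ospfv3 1
     router-id 192.0.2.1
     address-family ipv4 unicast
     exit-address-family
     address-family ipv6 unicast
     exit-address-family
    !
    interface GigabitEthernet0/0
     ip address 192.0.2.1 255.255.255.0
     ipv6 address 2001:db8:0:1::1/64
     ! One interface, one process, both address families
     ospfv3 1 ipv4 area 0
     ospfv3 1 ipv6 area 0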

I think it’s great that Cisco has finally caught up to the reality of the world.  If CCIEs are well versed in IPv6, we should start seeing adoption numbers rise significantly.  Ensuring that engineers know how to configure v4 and v6 simultaneously means dual stack is going to be the preferred transition method.  The only IPv6-related thing that worries me is the inclusion of an item on the written exam: IPv6 Network Address Translation.  You all know I’m a huge fan of NAT.  Especially NAT66, which is what I’ve been told will be the tested knowledge.

Um, why?!? 

You’ve relegated RIPng to the trivia section.  You collapsed multicast into the main routing portions.  You’re moving forward with IPv6 and making it a critical topic on the test.  And now you’re dredging up NAT?!? We don’t NAT IPv6.  Especially not to another IPv6 address.  Unique Local Addresses (ULAs) are about the only use case I could see for NAT66.  Ed Horley (@EHorley) thinks it’s a bad idea.  Ivan Pepelnjak (@IOSHints) doesn’t think fondly of it either, but admits it may have a use in SMBs.  And you want CCIEs and enterprise network engineers to understand it?  Why not use LISP instead?  Or maybe a better network design for enterprises that doesn’t need NAT66?  Next time you need an IPv6 SME to tell you how bad this idea is, call me.  I’ve got a list of people.


Tom’s Take

I’m glad to see the CCIE update.  Getting rid of Frame Relay and adding more IPv6 is a great thing.  I’m curious to see how the Diagnostic section will play out.  The flexible time for the TS section is way overdue.  The CCIE v5 looks to be pretty solid on paper.  People are going to start complaining about DMVPN.  Or the lack of SDN-related content.  Or the fact that EIGRP is still tested.  But overall, this update should carry the CCIE far enough into the future that we’ll see CCIE 60,000 before it’s refreshed again.

More CCIE v5 Coverage:

Bob McCouch (@BobMcCouch) – Some Thoughts on CCIE R&S v5

Anthony Burke (@Pandom_) – Cisco CCIE v5

Daniel Dib (@DanielDibSWE) – RS v5 – My Thoughts

INE – CCIE R&S Version 5 Updates Now Official

IPExpert – The CCIE Routing and Switching (R&S) 5.0 Lab Is FINALLY Here!

Avaya and the Magic of SPB

I was very interested to hear from Avaya at Interop New York.  They were the company I knew the least about.  I knew the most about them from the VoIP side of the house, but they’ve been coming on strong with networking as well.  They are one of the biggest champions of 802.1aq, more commonly known as Shortest Path Bridging (SPB).  You may remember that I wrote a bit about SPB in the past and referred to it as the Betamax of networking fabric technologies.  After this presentation, I may be forced to eat my words to a degree.

Paul Unbehagen really did a great job with this presentation.  There were no slides, but he kept the attention of the crowd.  The whiteboard supported his message.  While informal, the session was packed with learning.  Paul knows SPB.  It’s always great to learn from someone who knows the protocol.

Multicast Magic

One of the things I keyed on during the presentation was the way that SPB deals with multicast.  Multicast is a huge factor in Ethernet today.  So much so that even the cheapest SOHO Ethernet switch has a ton of multicast optimization.  But multicast as implemented in enterprises is painful.  If you want to make an engineer’s blood run cold, walk up and whisper “PIM“.  If you want to watch a nervous breakdown happen in real time, follow that up with “RPF“.

RPF checks in multicast PIM routing are nightmarish.  It would be wonderful to get rid of RPF checks while still eliminating loops in the multicast routing table.  SPB accomplishes that by using Dijkstra’s algorithm, the same algorithm that OSPF and IS-IS use to compute paths.  Considering SPB’s heavy IS-IS roots, that’s not surprising.  The use of Dijkstra means that additional receivers on a multicast tree don’t negatively affect the performance of path calculation.

I’ve Got My IS-IS On You

In fact, one of the optimized networks that Paul talked about involved surveillance equipment.  Video surveillance units that send via multicast have numerous endpoints and only a couple of receivers on the network.  In other words, the exact opposite problem multicast was designed to solve.  Yet, with SPB you can create multicast distribution networks that allow additional end nodes to attach to a common point rather than talking back to a rendezvous point (RP) and getting the correct tree structure from there.  That means fast convergence and simple node addition.

SPB has other benefits as well.  It supports 16.7 million I-SIDs, which are much like VLANs or MPLS labels.  This means that networks can grow past the 4,096-VLAN limitation.  It looks a lot like VxLAN to me, except that VxLAN relies on multicast and lacked a working implementation at the time.  SPB allows you to use a locally significant VLAN for a service and then define an I-SID that will transport across the network and be decapsulated on the other side into a totally different VLAN attached to that I-SID.  That kind of flexibility is key for deployments in existing, brownfield environments.

If you’d like to learn more about Avaya and their SPB technology, you can check them out at http://www.avaya.com.  You can also follow them on Twitter as @Avaya.


Tom’s Take

Paul said that 95% of all SPB implementations are in the enterprise.  That shocked me a bit, as I always thought of SPB as a service provider protocol.  I think the key comes down to something Paul said in the video.  When we are faced with new applications or additional complexity today, we tend to just throw more headers at the problem.  We figure that wrapping the whole mess in a new tag or a new tunnel will take care of everything.  At least until it all collapses into a puddle.  Avaya’s approach with SPB was to go back down to the lower layers and change the architecture of things to optimize everything and make it work the right way on all kinds of existing hardware.  To quote Paul, “In the IEEE, we don’t build things for the fun of it.”  That means SPB has its feet grounded in the right place.  Considering how difficult things can be in data center networking, that’s magical indeed.

Tech Field Day Disclaimer

Avaya was a presenter at the Tech Field Day Interop Roundtable.  They did not ask for any consideration in the writing of this review nor were they promised any.  The conclusions and analysis contained in this post are mine and mine alone.

IPv4? That Will Cost You

After my recent articles on Network Computing, I got an email from Fred Baker.  To say I was caught off guard was an understatement.  We proceeded to have a bit of back and forth about IPv6 deployment by enterprises.  Well, it was mostly me listening to Fred tell me what he sees in the real world.  I wrote about some of it over on Network Computing.

One thing that Fred mentioned in a paragraph got me thinking.  When I heard John Curran of ARIN speak at the Texas IPv6 Task Force meeting last December, he mentioned that the original plan for IPv6 (then IPng) deployment involved rolling it out in parallel with IPv4 slowly to ensure that we had all the kinks worked out before we ran out of IPv4 prefixes.  This was around the time the World Wide Web was starting to take off but before RFC 1918 and NAT extended the lifetime of IPv4.  Network engineers took a long hard look at the plans for IPv6 and rightfully concluded that running IPv6 in conjunction with IPv4 was more expensive, and that it was more time and cost effective to just keep running IPv4 until the day came that IPv6 transition was necessary.

You’ve probably heard me quote my old Intro to Database professor, Dr. Traci Carte.  One of my favorite lessons from her was “The only way to motivate people is by fear or by greed.”  Fred told me that an engineer at an ISP wanted to find a way to charge IPv4 costs back to the vendors.  This engineer wants to move to a pure IPv6 offering unless there is a protocol or service that requires IPv4.  In that case, he will be more than willing to enable it – for a cost.  That’s where the greed motivator comes into play.  Today, IPv6 is quickly becoming equivalent in cost to IPv4.  The increased complexity is balanced out by the lack of IPv4 prefixes.

What if we could unbalance the scales by increasing the cost of IPv4?  It doesn’t have to cost $1,000,000 per prefix.  But it does have to be a cost big enough to make people seriously question their use of IPv4.  Some protocols are never going to be ported to have IPv6 versions.  By making the cost of using them higher, ISPs and providers can force enterprises and small-to-medium enterprises (SMEs) to take a long hard look at why they are using a particular protocol and whether or not a new v6-enabled version would be a better use of resources.  In the end, cheaper complexity will win out over expensive ease.  The people in charge of the decisions don’t typically look at man-hours or support time.  They merely check the bottom line.  If that bottom line looks better with IPv6, then we all win in the end.

I know that some of you will say that this is a harebrained idea.  I would counter with things like Carrier-Grade NAT (CGN).  CGN is an expensive, complicated solution that is guaranteed to break things, at least according to Verizon.  Why would you implement a hotfix to IPv4, knowing what will break, simply to keep the status quo around for another year or two?  I would much rather invest the time and effort in a scaling solution that will be with us for another 10 years or more.  Yes, things may break by moving to IPv6.  But we can work those out through troubleshooting.  We know how things are supposed to work when everything is operating correctly.  Even in the best case CGN scenario we know a lot of things are going to break.  And end-to-end communications between nodes becomes one step further removed from the ideal.  If IPv4 continuance solutions are going to drain my time and effort, they become as costly as implementing IPv6, or more so.  Again, those aren’t costs that are typically tracked by bean counters unless they are attached to a billable rate or to an opportunity cost of having good engineering talent unavailable for key projects.


Tom’s Take

Dr. Carte’s saying also included a final line about motivating people via a “well reasoned argument”.  As much as I love those, I think the time for reason is just about done.  We’ve cajoled and threatened all we can to convince people that the IPv4 sky has fallen.  I think maybe it’s time to start aiming for the pocketbook to get IPv6 moving.  While the numbers for IPv6 adoption are increasing, I’m afraid that if we rest on our laurels there will be a plateau and the momentum will eventually be lost.  I would much rather spend my time scheming and planning to eradicate IPv4 through increased costs than trying to figure out how to make IPv4 coexist with IPv6 any longer.

Accelerating E-Rate

Right after I left my job working for a VAR that focused on K-12 education and the federal E-Rate program a funny thing happened.  The president gave a speech where he talked about the need for schools to get higher speed links to the Internet in order to take advantage of new technology shifts like cloud computing.  He called for the FCC and the Universal Service Administration Company (USAC) to overhaul the E-Rate program to fix deficiencies that have cropped up in the last few years.  In the last couple of weeks a fact sheet was released by the FCC to outline some of the proposed changes.  It was like a breath of fresh air.

Getting Up To Speed

The largest shift in E-Rate funding in the last two years has been in applying for faster Internet circuits.  Schools are realizing that it’s cheaper to host servers offsite either with software vendors or in clouds like AWS than it is to apply for funding that may never come and buy equipment that will be outdated before it ships.  The limiting factor has been with the Internet connection of these schools.  Many of them are running serial T-1 circuits even today.  They are cheap and easy to install.  Enterprising ISPs have even started creating multilink PPP connections with several T-1 links to create aggregate bandwidth approaching that of fiber connections.
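
For reference, that bundling trick is only a few lines of IOS (interface numbers illustrative): two T-1s join a multilink group and present a single logical pipe of roughly 3 Mbit/s.

    interface Multilink1
     description Two bundled T-1s presented as one logical link
     ip address 192.0.2.2 255.255.255.252
     ppp multilink
     ppp multilink group 1
    !
    interface Serial0/0
     no ip address
     encapsulation ppp
     ppp multilink
     ppp multilink group 1
    !
    interface Serial0/1
     no ip address
     encapsulation ppp
     ppp multilink
     ppp multilink group 1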

Fiber is the future of connectivity for schools.  By running buried fiber to a school district, the ISP can gradually increase the circuit bandwidth as a school’s needs increase.  For many schools around the country that could include online testing mandates, flipped classrooms, and even remote learning via technologies like Telepresence.  Fiber runs from ISPs aren’t cheap.  They are so expensive right now that the majority of funding for the current year’s E-Rate is going to go to faster ISP connections under Priority 1 funding.  That leaves precious little money left over to fund Priority 2 equipment.  A former customer of mine spent the Priority 1 money to get a 10Gbit Internet circuit and then couldn’t afford a router to hook up to it because of the lack of money left over for Priority 2.

The proposed E-Rate changes will hopefully fix some of those issues.  The changes call for simplification of the rules regarding deployments, which should drive new fiber construction.  I’m hoping this means that they will do away with the “dark fiber” rule that has been in place for so many years.  Previously, you could only run fiber between sites if it was lit on both ends and in use.  This discouraged the use of spare, or dark, fiber because it couldn’t be claimed under E-Rate if it wasn’t passing traffic.  This has led to a large number of ISP-owned circuits being used for managed WAN connections.  A very few schools that were on the cutting edge years ago managed to get dedicated point-to-point fiber runs.  In addition, the order calls for prioritizing funding for fiber deployments that will drive higher speeds and long-term efficiency.  This should enable schools to do away with running multimode fiber simply because it is cheap and instead give preferential treatment to single mode fiber that is capable of running gigabit and 10Gig over long distances.  It should also be helpful to VARs that are poised to replace aging multimode fiber plants.

Classroom Mobility

WAN circuits aren’t the only technology that will benefit from these E-Rate changes.  The order calls for a focus on ensuring that schools and libraries gain access to high speed wireless networks for users.  This has a lot to do with the explosion of personal tablet and laptop devices as opposed to desktop labs.  When I first started working with schools more than a decade ago, it was considered cutting edge to have a teacher computer and a student desktop in the classroom.  Today, tablet carts and one-to-one programs ensure that almost every student has access to some sort of device for research and learning.  That means that schools are going to need real enterprise wireless networks.  Sadly, many of them that either don’t qualify for E-Rate or can’t get enough funding settle for SMB/SOHO wireless devices purchased from office supply stores simply because they are inexpensive.  That causes the IT admins to spend entirely too much time troubleshooting these connections and distracts them from other, more important issues.  I think this focus on wireless will go a long way toward alleviating connectivity issues for schools of all sizes.

Finally, the FCC has ordered that the document submission process be modernized to include electronic filing options and that older technologies be phased out of the program. This should lead to fewer mistakes in the filing process as well as more rapid decisions for appropriate technology responses.  No longer do schools need to concern themselves with whether or not they need directory assistance on their Priority 1 phone lines.  Instead, they can focus on their problem areas and get what they need quickly.  There is also talk of fixing the audit and appeals process as well as speeding the deployment of funds.  As anyone who has worked with E-Rate will attest, the bureaucracy surrounding the program is difficult for anyone but the most seasoned professionals.  Even the E-Rate wizards have problems from year to year figuring out when an application will be approved or whether or not an audit will take place.  Making these processes easier and more transparent will be good for everyone involved in the program.


Tom’s Take

I posted previously that the cloud would kill the E-Rate program as we know it.  It appears I was right from a certain point of view.  Mobility and the cloud have both caused the E-Rate program to be evaluated and overhauled to address the changes in technology that are now filtering into schools from the corporate sector.  Someone was finally paying attention and figured out that we need to address faster Internet circuits and wireless connectivity instead of DNS servers and more cabling for nonexistent desktops.  Taking these steps shows that there is still life left in the E-Rate program and its ability to help schools.  I still say that USAC needs to boost the funding considerably to help more schools all over the country.  I’m hoping that once the changes in the FCC order go through that more money will be poured into the program and our children can reap the benefits for years to come.

Disclaimer

I used to work for a VAR that did a great deal of E-Rate business.  I don’t work for them any longer.  This post is my work and does not reflect the opinion of any education VAR that I have talked to or have been previously affiliated with.  I say this because the Schools and Libraries Division (SLD) of USAC, which is the enforcement and auditing arm, can be a bit vindictive at times when it comes to criticism.  I don’t want anyone at my previous employer to suffer because I decided to speak my mind.