SDN Use Case: Content Filtering

Embed from Getty Images

K-12 schools face unique challenges with their IT infrastructure.  Their user base needs access to a large amount of information while at the same time facing restrictions.  While that may sound like a typical corporate network policy, the restrictions in the education environment are legal in nature.  Schools must find new ways to provide the assurance of restricting content without destroying their network in the process.  Which led me to ask: can SDN help?

Online Protection

The government E-Rate program gives schools money each year under Priority 1 funding for Internet access.  Indeed, the whole point of the E-Rate program is to get schools connected to the Internet.  But we all know the Internet comes with a bevy of distractions. Many of those distractions are graphic in nature and must be eliminated in a school.  Because it’s the law.

The Children’s Internet Protection Act (CIPA) mandates that schools and libraries receiving E-Rate funding for high speed broadband Internet connections must filter those connections to remove questionable content.  Otherwise they risk losing funding for all E-Rate services.  That makes content filters very popular devices in schools, even if they aren’t funded by E-Rate (which they aren’t).

Content filters also cause network design issues.  In the old days, we had to put the content filter servers on a hub along with the outbound Internet router in order to ensure they could see all the traffic and block the bad bits.  That became increasingly difficult as network switch speeds increased.  Forcing hundreds of megabits through a 10Mbit hub was counterproductive.  Moving to switchport mirroring did alleviate the speed issues, but still caused network design problems.  Now, content filters can run on firewalls and bastion host devices or are enabled via proxy settings in the cloud.  But we all know that running too many services on a firewall causes performance issues.  Or leads to buying a larger firewall than needed.

Another issue that has crept up as of late is the use of Virtual Private Networking (VPN) as a way to defeat the content filter.  Setting up an SSL VPN to an outside, non-filtered device is pretty easy for a knowledgeable person.  And if that fails, there are plenty of services out there dedicated to defeating content filtering.  While the aim of these services is noble, such as bypassing the Great Firewall of China or the mandated Internet filtering in the UK, they can also be used to bypass the CIPA-mandated filtering in schools as well.  It’s a high-tech game of cat-and-mouse.  Block access to one VPN and three more pop up to replace it.

Software Defined Protection

So how can SDN help?  Service chaining allows traffic to be directed to a given device or virtual appliance before being passed on through the network.  This great presentation from Networking Field Day 7 presenter Tail-f Networks shows how service chaining can force traffic through security devices like IDS/IPS and through content filters as well.  There is no need to add hubs or mirrored switch ports in your network.  There is also no need to configure traffic to transit the same outbound router or firewall, thereby creating a single point of failure.  Thanks to the magic of SDN, the packets go to the filter automatically.  That’s because they don’t really have a choice.
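As a rough illustration, the chaining decision a controller makes can be sketched in a few lines of Python.  The traffic classes and device names here are hypothetical, not taken from the Tail-f demo:

```python
# Sketch of service chaining policy: the controller keeps an ordered
# list of service hops per traffic class and installs flow rules that
# force packets through each hop in turn. Names are illustrative only.

SERVICE_CHAINS = {
    "student-web": ["ids", "content-filter", "edge-router"],
    "staff-web": ["ids", "edge-router"],
    "voip": ["edge-router"],
}

def chain_for(traffic_class):
    """Return the ordered service hops for a traffic class.
    Unknown traffic defaults to the fully inspected path."""
    return SERVICE_CHAINS.get(
        traffic_class, ["ids", "content-filter", "edge-router"]
    )
```

The point is that the chain is policy data, not cabling: changing the path for a class of users becomes a table edit instead of a network redesign.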

It also works well for providers wanting to offer filtering as a service to schools.  This allows a provider to configure the edge network to force traffic to a large central content filter cluster and ensure delivery.  It also allows the service provider network to operate without impact to non-filtered customers.  That’s very useful even in ISPs dedicated to education institutions, as the filter provisions for K-12 schools don’t apply to higher education facilities, like colleges and universities.  Service chaining would allow the college to stay free and clear while the high schools are cleansed of inappropriate content.

The VPN issue is a thorny one for sure.  How do you classify traffic that is trying to hide from you?  Even services like Netflix are having trouble blocking VPN usage and they stand to lose millions if they can’t.  How can SDN help in this situation? We could build policies to drop traffic headed for known VPN endpoints.  That should take care of the services that make it easy to configure and serve as a proxy point.  But what about those tech-savvy kids that setup SSL VPNs back home?
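A policy like that boils down to a prefix check against a blocklist.  Here is a minimal sketch, assuming a hypothetical feed of known VPN endpoint networks (the addresses below are documentation ranges, not real services):

```python
import ipaddress

# Hypothetical blocklist of known VPN/proxy endpoints; a real deployment
# would populate this from a regularly updated subscription feed.
BLOCKED_NETS = [
    ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.8/29")
]

def drop_decision(dst_ip):
    """Return True when the destination matches a known VPN endpoint."""
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net for net in BLOCKED_NETS)
```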

Luckily, SDN can help there as well.  Many unified threat management appliances offer the ability to intercept SSL conversations.  This is an outgrowth of sites like Facebook defaulting to SSL to increase security.  SSL intercept essentially acts as a man-in-the-middle attack.  The firewall decrypts the SSL conversation, scans the packets, and re-encrypts it using a different certificate.  When the packets come back in, the process is reversed.  This SSL intercept capability would allow those SSL VPN packets to be dropped when detected.  The SDN component ensures that HTTPS traffic is always redirected to a device that can do SSL intercept, rather than taking a path through the network that might lead to a different exit point.
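The redirect itself amounts to a high-priority flow rule matching TCP port 443.  This sketch builds such a rule as a plain dictionary in the general shape of an OpenFlow match/action pair; the field layout is illustrative rather than any specific controller's API:

```python
def https_redirect_rule(intercept_port):
    """Build a flow-rule dict that steers all TCP/443 traffic out the
    port facing the SSL-intercept appliance, regardless of the path
    the traffic would otherwise take. Layout is illustrative only."""
    return {
        "match": {
            "eth_type": 0x0800,  # IPv4
            "ip_proto": 6,       # TCP
            "tcp_dst": 443,      # HTTPS
        },
        "priority": 100,         # beat the normal forwarding rules
        "actions": [{"type": "OUTPUT", "port": intercept_port}],
    }
```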

Tom’s Take

Content filtering isn’t fun.  I’ve always said that I don’t envy the jobs of people that have to wade through the unsavory parts of the Internet to categorize bits as appropriate or not.  It’s also a pain for network engineers that need to keep redesigning the network and introducing points of failure to meet federal guidelines for decency.  SDN holds the promise of making that easier.  In the above Tail-f example, the slide deck shows a UI that allows simple blocking of common protocols like Skype.  This could be extended to schools where student computers and wireless networks are identified and bad programs are disallowed while web traffic is pushed to a filter and scrubbed before heading out to the Wild Wild Web.  SDN can’t solve every problem we might have, but if it can make the mundane and time consuming problems easier, it might just give people the breathing room they need to work on the bigger issues.

An Educational SDN Use Case

During the VMUnderground Networking Panel, we had a great discussion about software defined networking (SDN) among other topics. Seems that SDN is a big unknown for many out there. One of the reasons for this is the lack of specific applications of the technology. OSPF and SQL are things that solve problems. Can the same be said of SDN? One specific question regarded how to use SDN in small-to-medium enterprise shops. I fired off an answer from my own experience:

Since then, I’ve had a few people using my example with regards to a great use case for SDN. I decided that I needed to develop it a bit more now that I’ve had time to think about it.

Schools are a great example of the kinds of “do more with less” organizations that are becoming more common. They have enterprise-class networks and needs and live off budgets that wouldn’t buy janitorial supplies. In fact, if it weren’t for E-Rate, most schools would have technology from the Stone Age. But all this new tech doesn’t help if you can’t find a way for it to be used to the fullest for the purposes of educating students.

In my example, I talked about the shift from written standardized testing to online assessments. Oklahoma and Indiana are leading the way in getting rid of Scantrons and #2 pencils in favor of keyboards and monitors. The process works well for the most part with proper planning. My old job saw lots of scrambling to prep laptops, tablets, and lab machines for the rigors of running the test. But no amount of pre-config could prepare for the day when it was time to go live. On those days, the network was squarely in the sights of the administration.

I’ve seen emails go around banning non-testing students from the computers. I’ve seen hard-coded DNS entries on testing machines while the rest of the school had DNS taken offline to keep them from surfing the web. Redundant circuits. QoS policies that would make voice engineers cry. All in the hope of keeping the online test bandwidth free to get things completed in the testing window. All the while, I was thinking to myself, “There has got to be an easier way to do this…”

Redefining with Software

Enter SDN. The original use case for SDN at Stanford was network slicing. The Next-Gen Network Team wanted to use the spare capacity of the network for testing without crashing the whole system. Being able to reconfigure the network on the fly is a huge leap forward. Pushing policy into devices without CLI cuts down on the resume-generating events (RGE) in production equipment. So how can we apply these network slicing principles to my example?

On the day of the test, have the configuration system push down a new policy that gives the testing machines a guaranteed amount of bandwidth.  This reservation will ensure each machine is able to get what it needs without being starved out.  With SDN, we can set this policy on a per-IP basis to ensure it is enforced.  This slice will exist separate from the production network to ensure that no one starting a huge FTP transfer or video upload will disrupt testing.  By leaving the remaining bandwidth intact for the rest of the school’s production network, administrators can ensure that the rest of the student body isn’t impacted during testing.  With the move toward flipped classrooms and online curriculum augmentation, having available bandwidth is crucial.
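The arithmetic behind such a reservation is simple enough to sketch.  Assuming a hypothetical uplink and a fraction of it reserved for the testing slice, the per-machine guarantee works out like this:

```python
def per_host_guarantee(total_mbps, reserved_fraction, testing_hosts):
    """Split a reserved slice of the uplink evenly across the testing
    machines; whatever is left stays with the production network."""
    reserved = total_mbps * reserved_fraction
    return {
        "per_host_mbps": reserved / len(testing_hosts),
        "production_mbps": total_mbps - reserved,
    }
```

With a 100Mbit circuit, 40% reserved, and 20 testing machines, each machine is guaranteed 2Mbit while 60Mbit remains for the rest of the school.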

Could this be done via non-SDN means? Sure. Granted, you’ll have to plan the QoS policy ahead of time and find a way to classify your end-user workstations properly. You’ll also have to tune things to make sure no one is dominating the test machine pool. And you have to get it right on every switch you program. And remove it when you’re done. Unless you missed a student or a window, in which case you’ll need to reprogram everything again. SDN certainly makes this process much easier and less disruptive.


Tom’s Take

SDN isn’t a product. You can’t order a part number for SDN-001 and get a box labeled SDN. Instead, it’s a process. You apply SDN to an existing environment and extend the capabilities through new processes. Those processes need use cases. Use cases drive business cases. Business cases provide buy in from the stakeholders. That’s why discussing cases like the one above is so important. When you can find a use for SDN, you can get people to accept it. And that’s half the battle.

CCNA Data Center on vBrownBag

Sometimes when I’m writing blog posts, I forget how important it is to start off on the right foot.  For a lot of networking people just starting out, discussions about advanced SDN topics and new theories can seem overwhelming when you’re trying to figure out things like subnetting or even what a switch really is.  While I don’t write about entry level topics often, I had the good fortune recently to talk about them on the vBrownBag podcast.

For those that may not be familiar, vBrownBag is a great series that goes into depth about a number of technology topics.  Historically, vBrownBag has been focused on virtualization topics.  Now, with virtual networking becoming more integrated into virtualization, the vBrownBag organizers asked me if I’d be willing to jump on and talk about the CCNA Data Center.  Of course I took the opportunity to lend my voice to what will hopefully be the start of some promising data center networking careers.

These are the two videos I recorded.  The vBrownBag is usually a one-hour show.  I somehow managed to go an hour and a half on both.  I realized there is just so much knowledge that goes into these certifications that I couldn’t cover it all even if I had six hours.

Also, in the midst of my preparation, I found a few resources that I wanted to share with the community for them to get the most out of the experience.

Chris Wahl’s CCNA DC course from PluralSight – This is worth the time and investment for sure.  It covers DCICN in good depth, and his work with NX-OS is very handy if you’ve never seen it before.

Todd Lammle’s NX-OS Simulator – If you can’t get rack time on a real Nexus, this is pretty close to the real thing.  You should check it out even if only to get familiar with the NX-OS CLI.

NX-OS and Nexus Switching, 2nd Edition – This is more for post-grad work.  Ron Fuller (@CCIE5851) helped write the definitive guide to NX-OS.  If you are going to work on Nexus gear, you need a copy of this handy. Be sure to use the code “NETNERD” to get it for 30% off!



Tom’s Take

Never forget where you started.  The advanced topics we discuss take a lot for granted in the basic knowledge department.  Always be sure to give a little back to the community in that regard.  The network engineer you help shepherd today may end up being the one that saves your job in the future.  Take the time to show people the ropes.  Otherwise you’ll end up hanging yourself.

SDN 101 at ONUG Academy


Software defined networking is king of the hill these days in the greater networking world.  Vendors are contemplating strategies.  Users are demanding functionality.  And engineers are trying to figure out what it all means.  What’s needed is a way for vendor-neutral parties to get together and talk about what SDN represents and how best to implement it.  Most of the talk so far has been at vendor-specific conferences like Cisco Live or at other conferences like Interop.  I think a third option has just presented itself.

Nick Lippis (@NickLippis) has put together a group of SDN-focused people to address concerns about implementation and usage.  The Open Networking User Group (ONUG) was assembled to allow large companies using SDN to have a semi-annual meeting to discuss strategy and results.  It allows Facebook to talk to JP Morgan about what they are doing to simplify networking through use of things like OpenFlow.

This year, ONUG is taking it a step further by putting on the ONUG Academy, a day-long look at SDN through the eyes of those that implement it.  They have assembled a group of amazing people, including the founder of Cumulus Networks and Tech Field Day’s own Brent Salisbury (@NetworkStatic).  There will be classes about optimizing networks for SDN as well as writing SDN applications for the most popular controllers on the market.  Nick shares more details about the ONUG academy here:

If you’re interested in attending ONUG either for the academy or for the customer-focused meetings, you need to register today.  As a special bonus, if you use the code TFD10 when you sign up, you can take 10% off the cost of registration.  Use that extra cash to go out and buy a cannoli or two.

I’ll be at ONUG with Tech Field Day interviewing customers and attendees about their SDN strategies as well as where they think the state of the industry is headed.  If you’re there, stop by and say hello.  And be sure to bring me one of those cannolis.

Know the Process, Not the Tool


If there is one thing that amuses me as of late, it’s the “death of CLI” talk that I’m starting to see coming from many proponents of software defined networking. They like to talk about programmatic APIs and GUI-based provisioning and how everything that network engineers have learned is going to fall by the wayside.  Like this Network World article. I think reports of the death of CLI are a bit exaggerated.

Firstly, the CLI will never go away. I learned this when I started working with an Aerohive access point I got at Wireless Field Day 2. I already had a HiveManager account provisioned thanks to Devin Akin (@DevinAkin), so all I needed to do was add the device to my account and I would be good to go. Except it never showed up. I could see it on my local network, but it never showed up in the online database. I rebooted and reset several times before flipping the device over and finding a curious port labeled “CONSOLE”. Why would a cloud-based device need a console port? In the next hour, I learned a lot about the way Aerohive APs are provisioned and how there were just some commands that I couldn’t enter in the GUI that helped me narrow down the problem. After fixing a provisioning glitch in HiveManager the next day, I was ready to go. The CLI didn’t fix my problem, but I did learn quite a bit from it.

Basic interfaces give people a great way to see what’s going on under the hood. Given that most folks in networking are from the mold of “take it apart to see why it works” the CLI is great for them. I agree that memorizing a 10-argument command to configure something like route redistribution is a pain in the neck, but that doesn’t come from the difficulty of networking. Instead, the difficulty lies in speaking the language.

I’ve traveled to a foreign country once or twice in my life. I barely have a grasp of the English language at times. I can usually figure out some Spanish. My foreign language skills have pretty much left me at this point. However, when I want to make myself understood to people that speak another language, I don’t focus on syntax. Instead, I focus on ideas. Pointing at an object and making gestures for money usually gets the point across that I want to buy something. Pantomiming a drinking gesture will get me to a restaurant.

Networking is no different. When I started trying to learn CLI terminology for Brocade, Arista, and HP I found they were similar in some respects but very different in others. When you try to take your Cisco CLI skills to a Juniper router, you’ll find that you aren’t even in the neighborhood when it comes to syntax. What becomes important is *what* you’re trying to do. If you can think through what you’re trying to accomplish, there’s usually a help file or a Google search that can pull up the right way to do things.

This extends its way into a GUI/API-driven programming interface as well. Rather than trying to intuit the interface, just think about what you want to do instead. If you want two hosts to talk to each other through a low-cost link with basic security, you just have to figure out what the drag-and-drop is for that. If you want to force application-specific traffic to transit a host running an intrusion prevention system, you already know what you want to do. It’s just a matter of finding the right combination of interface programming to accomplish it. If you’re working on an API call using Python or Java you probably have to define the constraints of the system anyway. The hard part is writing the code that interfaces with the system to accomplish the task.
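To make that concrete, here is a small sketch of what thinking in intent rather than syntax might look like in Python.  The Intent class and the rendered steps are purely illustrative; no real controller exposes exactly this interface:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """What we want, stated without any vendor syntax."""
    src: str
    dst: str
    constraint: str  # e.g. "low-cost" or "low-latency"
    inspect: bool    # force the traffic through an IPS first?

def render(intent):
    """Turn the intent into an ordered list of abstract provisioning
    steps; a platform-specific layer would map each step to real
    CLI commands or API calls."""
    steps = [f"select {intent.constraint} path {intent.src}->{intent.dst}"]
    if intent.inspect:
        steps.insert(0, f"chain {intent.src}->{intent.dst} through ips")
    return steps
```

The intent stays the same whether the back end speaks Cisco CLI, Junos, or a REST API; only the final translation layer changes.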


Tom’s Take

Learning the process is the key to making it in networking. So many entry level folks are worried about *how* to do something. Configuring a route or provisioning a VLAN are the end goal. It’s only when those folks take a step back and think about their task without the commands that they begin to become real engineers. When you can visualize what you want to do without thinking about the commands you need to enter to do it, you are taking the logical step beyond being tied to a platform. Some of the smartest people I know break a task down into component parts and steps. When you spend more time on *what* you are doing and less on *how* you are doing it, you don’t need to concern yourself with radical shifts in networking, whether they be SDN, NFV, or the next big thing. Because the process will never change even if the tools might.

Objective Lessons


“Experience is a harsh teacher because it gives the test first and the lesson afterwards.” – Vernon Law

When I was in college, I spent a summer working for my father.  He works in the construction business as a superintendent.  I agreed to help him out in exchange for a year’s tuition.  In return, I got exposure to all kinds of fun methods of digging ditches and pouring concrete.  One story that sticks out in my mind over and over taught me the value of the object lesson.

One of the carpenters that worked for my father had a really bad habit of breaking sledgehammer handles.  When he was driving stakes for concrete forms, he never failed to miss the head of the 2×4 by an inch and catch the top of the handle on it instead.  The force of the swing usually caused the head to break off after two or three misses.  After the fourth or fifth broken handle, my father finally had enough.  He took an old sledgehammer head and welded a steel pipe to it to serve as a handle.  When the carpenter brought him his broken hammer yet again, my father handed him the new steel-handle hammer and said, “This is your new tool.  I don’t want to see you using any hammer but this one.”  Sure enough, the carpenter started driving the 2×4 form stakes again.  Only this time when he missed his target, the steel handle didn’t offer the same resistance as the wooden one.  The shock of the vibration caused the carpenter to drop the hammer and shake his hand in a combination of frustration and pain.  When he picked up the hammer again, he made sure to measure his stance and swing to ensure he didn’t miss a second time.  By the end of the summer, he was an expert sledgehammer swinger.

Amusing as it may be, this story does have a purpose.  People need to learn from failure.  For some, the lesson needs to be a bit more direct.  My father’s carpenter had likely been breaking hammer handles his entire life.  Only when confronted with a more resilient handle did he learn to adjust his processes and fix the real issue – his aim.  In technology, we often find that incorrect methods are as much to blame for problems as bad hardware or buggy software.

Thanks to object lessons, I’ve learned to never bridge the two terminals of an analog 66-block connection with a metal screwdriver lest I get a shocking reward.  I’ve watched others try to rack fully populated chassis switches by brute force alone.  And we won’t talk about the time I watched a technician rewire a 220 volt UPS receptacle without turning off the breaker (he lived).  Each time, I knew I needed to step in at some point to prevent physical harm to the person or prevent destruction of the equipment.  But for these folks, the lesson could only be learned after the mistake had been made.  I think this recent tweet from Teren Bryson (@SomeClown) sums it up nicely:

Some people don’t listen to advice.  That’s a fact borne out over years and years of working in the industry.  They know that their way is better or more appropriate even against the advice of multiple experts with decades of experience.  For those people that can’t be told anything, a lesson in reality usually serves as the best instructor.  The key is not to immediately jump to the I Told You So mentality afterward.  It is far too easy to watch someone create a bridging loop against your advice and crash a network only to walk up to them and gloat a little about how you knew better.  Instead of stroking your own ego against an embarrassed and potentially worried co-worker, instead take the time to discuss with them why things happened the way they did and coach them to not make the same mistakes again.  Make them learn from their lesson rather than covering it up and making the same mistake again.


Tom’s Take

I’ve screwed up before.  Whether it was deleting mailboxes or creating a routing loop I think I’ve done my fair share of failing.  Object lessons are important because they quickly show the result of failure and give people a chance to learn from it.  You naturally feel embarrassed and upset when it happens.  So long as you gather your thoughts and channel all that frustration into learning from your mistake then things will work out.  It’s only the people that ignore the lesson or assume that the mistake was a one-time occurrence that will continually subject themselves to object lessons.  And those lessons will eventually hit home with the force of a sledgehammer.

Is It Time To Remove the VCP Class Requirement?

While I was at VMware Partner Exchange, I attended a keynote address. This in and of itself isn’t a big deal. However, one of the bullet points that came up in the keynote slide deck gave me a bit of pause. VMware is changing some of their VSP and VTSP certifications to be more personal and direct. Being a VCP, this didn’t really impact me a whole lot. But I thought it might be time to tweet out one of my oft-requested changes to the certification program:

Oops. I started getting flooded with mentions. Many were behind me. Still others were vehemently opposed to any changes. They said that dropping the class requirement would devalue the certification. I responded as best I could in many of these cases, but the reply list soon outgrew the words I wanted to write down. After speaking with some people, both officially and unofficially, I figured it was due time I wrote a blog post to cover my thoughts on the matter.

When I took the VMware What’s New class for vSphere 5, I mentioned therein that I thought the requirement for taking a $3,000US class for a $225 test was a bit silly. I myself took and passed the test based on my experience well before I sat the class. Because my previous VCP was on VMware ESX 3 and not on ESX 4, I still had to sit in the What’s New course before my passing score would be accepted. To this day I still consider that a silly requirement.

I now think I understand why VMware does this. Much of the What’s New and Install, Configure, and Manage (ICM) classes are hands-on lab work. VMware has gone to great lengths to build out the infrastructure necessary to allow students to spend their time practicing the lab exercises in the courses. These labs rival all but the CCIE practice lab pods that I’ve seen. That makes the course very useful to all levels of students. The introductory people that have never really touched VMware get to experience it for real instead of just looking at screenshots in a slide deck. The more experienced users that are sitting the class for certification or perhaps to refresh knowledge get to play around on a live system and polish skills.

The problem comes that investment in lab equipment is expensive. When the CCIE Data Center lab specs were released, Jeff Fry calculated the list price of all the proposed equipment and it was staggering. Now think about doing that yourself. With VMware, you’re going to need a robust server and some software. Trial versions can be used to some degree, but to truly practice advanced features (like storage vMotion or tiering) you’re going to need a full setup. That’s a bit out of reach for most users. VMware addressed this issue by creating their own labs. The user gets access to the labs for the cost of the ICM or What’s New class.

How is VMware recovering the costs of the labs? By charging for the course. Yes, training classes aren’t cheap. You have to rent a room and pay for expenses for your instructor and even catering and food depending on the training center. But $3,000US is a bit much for ICM and What’s New. VMware is using those classes to recover the costs of the lab development and operation. In order to be sure that the costs are recovered in the most timely manner, the metrics need to make sense for class attendance. Given the chance, many test takers won’t go to the training class. They’d rather study from online material like the PDFs on VMware’s site or use less expensive training options like TrainSignal. Faced with the possibility that students may elect to forego the expensive labs, VMware did what they had to do to ensure the labs would get used, and therefore the metrics worked out in their favor – they required the course (and labs) in order to be certified.

For those that say that not taking the class devalues the cert, ask yourself one question. Why does VMware only require the class for new VCPs? Why are VCPs in good standing allowed to take the test with no class requirement and get certified on a new version? If all the value is in the class, then all VCPs should be required to take a What’s New class before they can get upgraded. If the value is truly in the class, no one should be exempt from taking it. For most VCPs, this is not a pleasant thought. Many that I talked to said, “But I’ve already paid to go to the class. Why should I pay again?” This just speaks to my point that the value isn’t in the class, it’s in the knowledge. Besides VMware Education, who cares where people acquire the knowledge and experience? Isn’t a home lab just as good as the ones that VMware built?

Thanks to some awesome posts from people like Nick Marus and his guide to building an ESXi cluster on a Mac Mini, we can now acquire a small lab for very little out-of-pocket. It won’t be enough to test everything, but it should be enough to cover a lot of situations. What VMware needs to do is offer an alternate certification requirement that takes a home lab into account. While there may be ways to game the system, you could require a VMware employee or certified instructor or VCP to sign off on the lab equipment before it will be blessed for the alternate requirement. That should keep it above board for those that want to avoid the class and build their own lab for testing.

The other option would be to offer a more “entry level” certification with a less expensive class requirement that would allow people to get their foot in the door without breaking the bank. Most people see the VCP as the first step in getting VMware certified. Many VMware rock stars can’t get employed in larger companies because they aren’t VCPs. But they can’t get their VCP because they either can’t pay for the course or their employer won’t pay for it. Maybe by introducing a VMware Certified Administration (VCA) certification and class with a smaller barrier to entry, like a course in the $800-$1000US range, VMware can get a lot of entry level people on board with VMware. Then, make the VCA an alternate requirement for becoming a VCP. If the student has already shown the dedication to getting their VCA, VMware won’t need to recoup the costs from them.


Tom’s Take

It’s time to end the VCP class requirement in one form or another. I can name five people off the top of my head that are much better at VMware server administration than I am that don’t have a VCP. I have mine, but only because I convinced my boss to pay for the course. Even when I took the What’s New course to upgrade to a VCP5, I had to pull teeth to get into the last course before the deadline. Employers don’t see the return on investment for a $3,000US class, especially if the person that they are going to send already has the knowledge shared in the class. That barrier to entry is causing VMware to lose out on the visibility that having a lot of VCPs can bring. One can only hope that Microsoft and Citrix don’t beat VMware to the punch by offering low-cost training or alternate certification paths. For those just learning or wanting to take a less expensive route, having a Hyper-V certification in a world of commoditized hypervisors would fit the bill nicely. After that, the reasons for sticking with VMware become less and less important.

Cloud and the Death of E-Rate

Seems today you can’t throw a rock without hitting someone talking about the cloud.  There’s cloud in everything from the data center to my phone to my TV.  With all this cloud talk, you’d be pretty safe to say that cloud has its detractors.  There’s worry about data storage and password security.  There are fears that cloud will cause massive layoffs in IT.  However, I’m here to take a slightly different road with cloud.  I want to talk about how cloud is poised to harm your children’s education and bankrupt one of the most important technology advantage programs ever.

Go find your most recent phone bill.  It doesn’t matter whether it’s a landline phone or a cell phone bill.  Now, flip to the last page.  You should see a minor line item labeled “Federal Universal Service Fee”.  Just like all other miscellaneous fees, this one goes mostly unnoticed, especially since it’s required on all phone numbers.  All that money that you pay into the Universal Service Fund is administered by the Universal Service Administrative Company (USAC), a division of the FCC.  USF has four divisions, one of which is the Schools and Libraries Division (SLD).  This portion of the program has a more commonly used name – E-Rate.  E-Rate helps schools and libraries all over the country obtain telecommunications and Internet access.  It accomplishes this by providing a fund that qualifying schools can draw from to help pay for a portion of their services.  Schools can be classified in a range of discount percentages, ranging from as low as 20% all the way up to 90% discount rates.  Those schools only have to pay $.10 on the dollar for their telecommunications services.  Those schools also happen to be the ones most in need of assistance, usually because of things such as rural location or other funding challenges.
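The discount math is straightforward: a school at the 90% discount rate pays only 10% of the bill.  A minimal sketch of the calculation:

```python
def school_cost(service_cost, discount_pct):
    """What a school actually pays for a service after its E-Rate
    discount. Computed with integer-friendly math to avoid float drift."""
    return service_cost * (100 - discount_pct) / 100
```

A 90%-discount school pays $100 on a $1,000 service bill; a 20%-discount school pays $800 on the same bill.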

E-Rate is divided into two separate pieces – Priority One and Priority Two.  Priority One is for telecommunications service and Internet access.  Priority One pays for phone service for the school and the pipeline to get them on the Internet.  The general rule for Priority One is that it is service-based only.  There usually isn’t any equipment provided by Priority One – at least not equipment owned by the customer.  Back in 1997, the first year of E-Rate, a T1 was considered a very fast Internet circuit.  Today, most schools are moving past 10Mbit Ethernet circuits and looking to 100Mbit and beyond to satisfy voracious Internet users.  All Priority One requests must be fulfilled before Priority Two requests will begin to be funded.  Once USAC starts funding Priority Two, they start at the 90% discount percentage and fund requests until the $2.25 billion allocated each year to the program is exhausted.  Priority Two covers Internal Connections and basic maintenance on those connections.  This is where the equipment comes in.  You can request routers, switches, wireless APs, Ethernet cabling, and even servers (provided they meet the requirements of providing some form of Internet access, like e-mail or web servers).  You can’t request PCs or phone handsets.  You can only ask for approved infrastructure pieces.  The idea is that Priority Two facilitates connectivity to Priority One services.  Priority Two allocations vary every year.  Some years they never fund past the 90% mark.  Two years ago, they funded all applicants.  It all depends on how much money is left over after all Priority One requests are satisfied.  There are rules in place to prevent schools from asking for new equipment every year, to keep things fair.  Schools can only ask for internal connections in two out of any five given years (the 2-of-5 rule).  In the other three years, they must ask for maintenance of that equipment.
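The funding order described above is really just a simple allocation algorithm: Priority One comes off the top, then Priority Two requests are funded from the 90% discount band downward until the cap is gone.  Here’s a rough sketch of that logic (my own illustration, not anything USAC actually runs; the function and variable names are made up):

```python
# Hypothetical model of the E-Rate allocation rules described above:
# Priority One is always funded in full, then Priority Two requests are
# funded from the highest discount band (90%) downward until the annual
# cap is exhausted.

ANNUAL_CAP = 2.25e9  # $2.25 billion allocated to the program each year

def allocate(priority_one_total, priority_two_requests):
    """priority_two_requests: list of (discount_pct, amount) tuples."""
    remaining = ANNUAL_CAP - priority_one_total  # P1 funded first, always
    funded = []
    # Work down from the 90% band toward lower discount percentages
    for discount, amount in sorted(priority_two_requests, reverse=True):
        if amount > remaining:
            break  # fund exhausted -- lower bands get nothing this year
        funded.append((discount, amount))
        remaining -= amount
    return funded, remaining
```

Run it with a big Priority One total and you can see the squeeze: the more that gets shoveled into Priority One, the fewer discount bands Priority Two ever reaches.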

There has always been a tug-of-war over what things should be covered under Priority One versus Priority Two.  As I said, the general rule is that Priority One is for services only – no equipment.  One of the first things that was discussed was web hosting.  Web servers are covered under Priority Two.  A few years ago, some web hosting providers were able to get their services listed under Priority One.  That meant that schools didn’t have to apply to have their web servers installed under Priority Two.  They could just pay someone to host their website under Priority One and be done with it.  No extra money needed.  This was a real boon for those schools with lower discount percentages.  They didn’t have to hope that USAC would fund down into the 70s or the 60s.  Instead, they could have their website hosted under Priority One with no questions asked.  Remember, Priority One is always funded before Priority Two is even considered.  This fact has led many people to attempt to get qualifying services set up under Priority One.  E-mail hosting and voice over IP (VoIP) are two that immediately spring to mind.  E-mail hosting goes without saying.  Priority One VoIP is new to the current E-Rate year (Year 15) as an eligible service.  The general idea is that a school can use a VoIP system hosted at a central location from a provider and have it covered as a Priority One service.  This still doesn’t cover handsets for the users, as those are never eligible.  It also doesn’t cover a local voice gateway, something that is very crucial for schools that want to maintain a backup just in case their VoIP connectivity goes down.  However, it does allow the school to have a VoIP phone system funded every year as opposed to hoping that E-Rate will fund low enough to cover it this year.

While I agree that access to more services is a good thing overall, I think we’re starting down a slippery slope that will lead to trouble very soon.  ISPs and providers are scrambling to get anything and everything they can listed as a Priority One service.  Why stop at phones?  Why not have eligible servers hosted on a cloud platform?  Outsource all the applications you can to a data center far, far away.  If you can get your web, e-mail, and phone systems hosted in the cloud, what’s left to place on site in your school?  Basic connectivity to those services, perhaps.  We still need switches and routers and access points to enable our connectivity to those far away services.  Except…the money.  Since Priority One always gets funded, everything that gets shoveled into Priority One takes money that could be used in Priority Two for infrastructure.  Schools that may never get funded at 25% will have their e-mail hosting paid for, while a 90% school that could really use APs to connect a mobile lab may get left out even though it has a critical need.  Making things Priority One just for the sake of getting them funded doesn’t really help when the budget for the program is capped from the beginning.  It’s already happening this year.  E-Rate Year 15 will only fund down to 90% for Priority Two.  That’s only because there was a carryover from last year.  Otherwise, USAC was seriously considering not funding Priority Two at all this year.  No internal connections.  No basic maintenance.  Find your own way, schools.  Priority One is eating up the fund with all the new cloud services being considered, let alone with the huge increase in faster Internet circuits needed to access all these cloud services.  Network World recently had a report saying that schools need 100Mbps circuits.  Guess where the money to pay for those upgrades is going to come from?  Yep, E-Rate Priority One.
At least, until the money runs out because server hosting is a qualifying service this year.

Most of the schools that get E-Rate funding for Priority Two wouldn’t be able to pay for infrastructure services otherwise.  Unlike large school districts, these in-need schools may be forced to choose between adding a switch to connect a lab and adding another AP to cover a classroom.  Every penny counts, even when you consider they may only be paying 10-12% of the price in the first place.  If Priority One services eat up all the funding before we get to Priority Two, it may not matter a whole lot to those 90% schools.  They may not have the infrastructure in place to access the cloud.  Instead, they’ll limp along with a T1 or a 10Mbps circuit, hoping that one day Priority Two might get funded again.

How do we fix this before cloud becomes the death mask for E-Rate?  We have to ensure that USAC knows that hosting services need to be considered separately from Priority One.  I’m not quite sure how that needs to happen, whether it needs to be a section under Priority Two or something more like Priority One And A Half.  But lumping hosted VoIP in with Internet access simply because there is no on-site equipment isn’t the right solution.  Since a large majority of the schools that qualify for E-Rate are lower elementary schools, it makes sense that they have the best access to the Internet possible, along with good intra-site connectivity.  A gigabit Internet circuit doesn’t amount to much if you are still running on 10Mbps hubs (don’t laugh, it’s happened).  If USAC can’t be convinced that hosted services need to be separated from other Priority One access, maybe it’s time to look at raising the E-Rate cap.  Every year, the amount of requests for E-Rate is more than triple the funding commitment.  That’s a lot of paperwork.  The $2.25 billion allocation set forth in 1997 may have been a lot back then, but looking at the number of schools applying today, it’s just a drop in the bucket.  E-Rate isn’t the only component of USF, and any kind of increase in funding will likely come from an increase in the USF fees that everyone pays.  That’s akin to raising taxes, which is always a hot button issue.  The program itself has even come under fire both in the past and in recent years due to mismanagement and fraud.  I don’t have any concrete answers on how to fix this problem, but I sincerely hope that bringing it to light changes the way schools get their technology needs addressed.  I also hope that it makes people take a hard look at the cloud services being proposed for inclusion in E-Rate and think twice about taking an extra bucket of water from the well.  After all, the well will run dry sooner or later.
Then everyone goes thirsty.

Disclaimer

I am employed by a VAR that focuses on providing Priority Two E-Rate services for schools.  The analysis and opinions expressed in this article do not represent the position of my employer and are my thoughts and conclusions alone.

Mental Case – In a Flash(card)

You’ve probably noticed that I spend a lot of my time studying for things.  Seems like I’ve always been reading things or memorizing arcane formulae for one reason or another.  In the past, I have relied upon a large number of methods for this purpose.  However, I keep coming back to the tried-and-true flash card.  To me, it’s the most basic form of learning.  A question on the front and an answer on the back is all you need to drill a fact into your head.  As I started studying for my CCIE lab exam, this was the route that I chose to go down when I wanted to learn some of the more difficult features, like BGP suppress maps or NTP peer configurations.  It was a pain to hand write all that info out on my cards.  Sometimes it didn’t all fit.  Other times, I couldn’t read my own writing.  I wondered if there was a better solution.

Cue my friend Greg Ferro and his post about a program called Mental Case.  Mental Case, from Mental Faculty, is a program designed to let you create your own flashcards.  The main program runs on a Mac computer and allows you to create libraries of flash cards.  There are a lot of good example sets when you first launch the app for things like languages.  But, as you go through some of the other examples, you can see the power that Mental Case can give you above and beyond a simple 3″x5″ flash card.  For one thing, you can use pictures in your flash cards.  This is handy if you are trying to learn about art or landmarks, for instance.  You could also use it as a quick quiz about Cisco Visio shapes or wireless antenna types.  This is a great way to study things more advanced than just simple text.

Once you dig into Mental Case, though, you can see some of the things that separate it from traditional pen-and-paper.  While it might be handy to have a few flash cards in your pocket to take out and study when you’re in line at the DMV, more often than not you tend to forget about them.  Mental Case can set up a schedule for you to study.  It will pop up and tell you that it’s time to do some work.  That’s great as a constant reminder of what you need to learn.  Another nice feature is the learning feature.  If you have ever used flash cards, you probably know that after a while, you tend to know about 80% of them cold with little effort.  However, there are about 20% that kind of float in the middle of the pack and just get skipped past without much reinforcement.  They kind of get lost in the shuffle, so to speak.  With Mental Case, the questions you get wrong more often get shuffled to the front, where your attention span is more focused.  Mental Case learns the best way to help you learn.  You can also set Mental Case to shuffle or even reverse the card deck to keep you on your toes.
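That “missed cards come back sooner” behavior is a classic spaced-repetition trick.  I don’t know what Mental Case actually does under the hood, but the general Leitner-style idea can be sketched in a few lines (the `Deck` class and its names here are entirely my own illustration):

```python
import random

# Rough sketch of the "missed cards shuffle to the front" idea described
# above -- a generic Leitner-style approach, NOT Mental Case's real algorithm.

class Deck:
    def __init__(self, cards):
        # Track how many times each card has been missed recently
        self.miss_counts = {card: 0 for card in cards}

    def record(self, card, correct):
        # A correct answer eases the card back; a miss bumps it forward
        if correct:
            self.miss_counts[card] = max(0, self.miss_counts[card] - 1)
        else:
            self.miss_counts[card] += 1

    def next_round(self):
        # Shuffle to break ties, then sort so frequently-missed cards
        # land at the front of the study order
        cards = list(self.miss_counts)
        random.shuffle(cards)
        return sorted(cards, key=lambda c: -self.miss_counts[c])
```

After a few rounds of recording answers, the cards you keep blowing lead off every study session while the ones you know cold drift to the back, which is exactly the behavior that makes the app worth more than a paper stack.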

When you couple all of these features with the fact that there is a Mental Case iOS client as well as a desktop version, your study efficiency goes through the roof.  Now, rather than only being able to study your flash cards when you are at your desk, you can take them with you everywhere.  When you consider that most people today spend an awful lot of time staring at their iPhones and iPads, it’s nice to know that you can pull up a set of flash cards from your mobile device and go to town at a moment’s notice, like in the line at the DMV.  In fact, that’s how I got started with Mental Case.  I downloaded the iOS app and started firing out the flash cards for things like changing RIP timers and configuring SSM.  However, the main Mental Case app only runs on Mac, and at the time, I didn’t have a Mac.  How did I do it?  Well, Mental Case seems to have thought of everything.  While the iOS app works best in concert with the Mac app, you can also create flash cards on other sites, like FlashcardExchange and Quizlet.  You can create decks and make them publicly available for everyone, or just share them among your friends.  You do have to make the deck public long enough to download it to Mental Case iOS, but it can be protected again afterwards if you are studying information that shouldn’t be shared with the rest of the world.  Note, though, that the iOS version of the software is a little more basic than the one on the Mac.  It doesn’t support wacky text formatting or the ability to do multiple choice quizzes.  Also, cards that are created with more than two “sides” (Mental Case calls them facets) will only display properly in slideshow mode.  But, if you think of the iOS client as a replacement for the stack of 10,000 flash cards you might already be carrying in your backpack or pocket, the limitations aren’t that severe after all.

The latest version of Mental Case now has the option to share content between Macs via iCloud.  This will allow you to keep your deck synced between your different computers.  You still have to sync the cards between your Mac and your iOS device via Wi-Fi.  You can share at shorter ranges over Bluetooth.  You can also create a collection of cards known as a Study Archive and place it in a central location, like Dropbox for instance.  This wasn’t a feature when I was using Mental Case full time, but I like the idea of being able to keep my cards in one place all the time.

Mental Case is running a special on their software for the next few days.  Normally, the Mac version costs $29.99.  That’s worth every penny if you spend time studying.  However, for the next few days, it’s only $9.99.  This is a steal for such a powerful study program.  The iOS app is also on sale.  Normally $4.99, it’s just $2.99.  On its own, the iOS app is a great resource.  Paired with its bigger brother, this is a no-brainer.  Run out and grab these two programs and spend more time studying your facts and figures efficiently and less time creating them.  If you’d like to learn more about Mental Case from Mental Faculty, you can check out their website at http://www.mentalcaseapp.com.

Disclaimer

I am a Mental Case iOS user.  I have used the demo version of the Mental Case Mac app.  Mental Case has not contacted me about this review, and no promotional consideration was given.  I’m just a really big fan of the app and wanted to tell people about it.

Networking Is Not Trivia(l)

Fun fact: my friends and family have banned me from playing Trivial Pursuit.  I played the Genus 4 edition in college so much that I practically memorized the card deck.  I can’t play the Star Wars version or any other licensed set.  I chalk a lot of this up to the fact that my mind seems to be wired for trivia.  For whatever reason, pointless facts stick in my head like glue.  I knew what an aglet was before Phineas & Ferb.  My head is filled with random statistics and anecdotes about subjects no one cares about.  I’ve been accused in the past of reading encyclopedias in my spare time.  Amusingly enough, I do tend to consume articles on Wikipedia quite often.  All of this led me to picking a career in computers.

Information Technology is filled with all kinds of interesting trivia.  Whether it’s knowing that Admiral Grace Hopper coined the term “bug” or remembering that the default OSPF reference bandwidth is 100 Mb, there are thousands of informational nuggets lying around, waiting to be discovered and cataloged away for a rainy day.  With my love of learning mindless minutiae, it comes as no surprise that I tend to devour all kinds of information related to computing.  After a while I started to realize that simply amassing all of this information doesn’t do any good for anyone.  Simply remembering that EIGRP bandwidth values are multiplied by 256 doesn’t do any good without the bigger picture of realizing it’s for backwards compatibility with IGRP.  The individual facts themselves are useless without context and application.
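To see what I mean about context, take those two bits of trivia and actually use them.  The OSPF reference bandwidth fact only matters because it’s the numerator in the interface cost formula, and the EIGRP ×256 fact only matters because it explains why EIGRP metrics look so huge next to their IGRP ancestors.  A quick worked example (my own helper names, and the simplified integer form of the OSPF cost rule):

```python
# Worked example of the two facts above: OSPF interface cost is derived
# from a 100 Mbps default reference bandwidth, and EIGRP metrics are the
# equivalent IGRP metrics multiplied by 256 for backwards compatibility.

OSPF_REFERENCE_BW = 100_000_000  # 100 Mbps default reference bandwidth

def ospf_cost(interface_bw_bps):
    # Cost = reference bandwidth / interface bandwidth, minimum of 1,
    # which is why every link of 100 Mbps or faster costs the same by default
    return max(1, OSPF_REFERENCE_BW // interface_bw_bps)

def eigrp_from_igrp(igrp_metric):
    # EIGRP widened IGRP's 24-bit metric by scaling it up 256x
    return igrp_metric * 256

print(ospf_cost(1_544_000))      # T1: cost 64
print(ospf_cost(10_000_000))     # 10 Mbps Ethernet: cost 10
print(ospf_cost(1_000_000_000))  # GigE: clamped to the minimum cost of 1
```

That last line is the context that makes the trivia useful: with the default reference bandwidth, a gigabit link and a 100 Mbps link get the same OSPF cost, which is exactly the kind of thing that bites you in a real design.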

I tried to learn how to play the guitar many years ago.  I went out and got a starter acoustic guitar and a book of chords and spent many diligent hours practicing the proper fingering to make something other than noise.  I was getting fairly good at producing chords without a second thought.  It kind of started falling apart when I tried to play my first song, though.  While I was good at making the individual notes, when it came time to string them together into something that sounded like a song I wasn’t quite up to snuff.  In much the same way, being an effective IT professional is more than just knowing a whole bunch of stuff.  It’s finding a way to take all that knowledge and apply it: turning all those little random bits of trivia into efficient fixes for real problems.  People that depend on IT don’t really care what the multicast address for RIPv2 updates is.  What they want is a stable routing table when they have some sort of access list blocking traffic.  It’s up to us to make a song out of all the network chords we’ve learned.

It’s important to know all of those bits of trivia in the long run.  They come in handy for things like tests or cocktail party anecdotes.  However, you need to be sure to treat them like building blocks.  Take what you need to form a bigger picture.  You won’t become bogged down in the details of deciding what parts to implement based on sheer knowledge alone.  Instead, you can build a successful strategy.  Think of the idea of the gestalt – things are often greater than the sum of their parts.  That’s how you should look at IT-related facts.


Tom’s Take

I’m never going to stop learning trivia.  It’s as ingrained into my personality as snark and sarcasm.  However, if I’m going to find a way to make money off of all that trivia, I need to remember that factoids are useless without application.  I must always keep in mind that solutions are key to decision makers.  After all, the snark and sarcasm aren’t likely to amount to much of a career.  At least not in networking.