CCNA Data Center on vBrownBag

Sometimes when I’m writing blog posts, I forget how important it is to start off on the right foot.  For a lot of networking people just starting out, discussions about advanced SDN topics and new theories can seem overwhelming when you’re trying to figure out things like subnetting or even what a switch really is.  While I don’t write about entry level topics often, I had the good fortune recently to talk about them on the vBrownBag podcast.

For those that may not be familiar, vBrownBag is a great series that goes into depth about a number of technology topics.  Historically, vBrownBag has been focused on virtualization topics.  Now, with virtual networking becoming more integrated into virtualization, the vBrownBag organizers asked me if I’d be willing to jump on and talk about the CCNA Data Center.  Of course I took the opportunity to lend my voice to what will hopefully be the start of some promising data center networking careers.

These are the two videos I recorded.  The vBrownBag is usually a one-hour show.  I somehow managed to go an hour and a half on both.  I realized there is just so much knowledge that goes into these certifications that I couldn’t cover it all even if I had six hours.

Also, in the midst of my preparation, I found a few resources that I wanted to share with the community to help everyone get the most out of the experience.

Chris Wahl’s CCNA DC course from Pluralsight – This is worth the time and investment for sure.  It covers DCICN in good depth, and his work with NX-OS is very handy if you’ve never seen it before.

Todd Lammle’s NX-OS Simulator – If you can’t get rack time on a real Nexus, this is pretty close to the real thing.  You should check it out even if only to get familiar with the NX-OS CLI.

NX-OS and Nexus Switching, 2nd Edition – This is more for post-grad work.  Ron Fuller (@CCIE5851) helped write the definitive guide to NX-OS.  If you are going to work on Nexus gear, you need to keep a copy of this handy. Be sure to use the code “NETNERD” to get it for 30% off!


Tom’s Take

Never forget where you started.  The advanced topics we discuss take a lot for granted in the basic knowledge department.  Always be sure to give a little back to the community in that regard.  The network engineer you help shepherd today may end up being the one that saves your job in the future.  Take the time to show people the ropes.  Otherwise you’ll end up hanging yourself.

SDN 101 at ONUG Academy


Software defined networking is king of the hill these days in the greater networking world.  Vendors are contemplating strategies.  Users are demanding functionality.  And engineers are trying to figure out what it all means.  What’s needed is a way for vendor-neutral parties to get together and talk about what SDN represents and how best to implement it.  Most of the talk so far has been at vendor-specific conferences like Cisco Live or at other conferences like Interop.  I think a third option has just presented itself.

Nick Lippis (@NickLippis) has put together a group of SDN-focused people to address concerns about implementation and usage.  The Open Networking User Group (ONUG) was assembled to allow large companies using SDN to have a semi-annual meeting to discuss strategy and results.  It allows Facebook to talk to JP Morgan about what they are doing to simplify networking through use of things like OpenFlow.

This year, ONUG is taking it a step further by putting on the ONUG Academy, a day-long look at SDN through the eyes of those that implement it.  They have assembled a group of amazing people, including the founder of Cumulus Networks and Tech Field Day’s own Brent Salisbury (@NetworkStatic).  There will be classes about optimizing networks for SDN as well as writing SDN applications for the most popular controllers on the market.  Nick shares more details about the ONUG academy here:

If you’re interested in attending ONUG either for the academy or for the customer-focused meetings, you need to register today.  As a special bonus, if you use the code TFD10 when you sign up, you can take 10% off the cost of registration.  Use that extra cash to go out and buy a cannoli or two.

I’ll be at ONUG with Tech Field Day interviewing customers and attendees about their SDN strategies as well as where they think the state of the industry is headed.  If you’re there, stop by and say hello.  And be sure to bring me one of those cannolis.

Know the Process, Not the Tool


If there is one thing that amuses me as of late, it’s the “death of CLI” talk that I’m starting to see coming from many proponents of software defined networking. They like to talk about programmatic APIs and GUI-based provisioning and how everything that network engineers have learned is going to fall by the wayside.  Like this Network World article. I think reports of the death of CLI are a bit exaggerated.

Firstly, the CLI will never go away. I learned this when I started working with an Aerohive access point I got at Wireless Field Day 2. I already had a HiveManager account provisioned thanks to Devin Akin (@DevinAkin), so all I needed to do was add the device to my account and I would be good to go. Except it never showed up. I could see it on my local network, but it never showed up in the online database. I rebooted and reset several times before flipping the device over and finding a curious port labeled “CONSOLE”. Why would a cloud-based device need a console port? In the next hour, I learned a lot about the way Aerohive APs are provisioned and how there were just some commands that I couldn’t enter in the GUI that helped me narrow down the problem. After fixing a provisioning glitch in HiveManager the next day, I was ready to go. The CLI didn’t fix my problem, but I did learn quite a bit from it.

Basic interfaces give people a great way to see what’s going on under the hood. Given that most folks in networking are from the mold of “take it apart to see why it works,” the CLI is great for them. I agree that memorizing a 10-argument command to configure something like route redistribution is a pain in the neck, but that doesn’t come from the difficulty of networking. Instead, the difficulty lies in speaking the language.

I’ve traveled to a foreign country once or twice in my life. I barely have a grasp of the English language at times. I can usually figure out some Spanish. My foreign language skills have pretty much left me at this point. However, when I want to make myself understood to people that speak another language, I don’t focus on syntax. Instead, I focus on ideas. Pointing at an object and making gestures for money usually gets the point across that I want to buy something. Pantomiming a drinking gesture will get me to a restaurant.

Networking is no different. When I started trying to learn CLI terminology for Brocade, Arista, and HP I found they were similar in some respects but very different in others. When you try to take your Cisco CLI skills to a Juniper router, you’ll find that you aren’t even in the neighborhood when it comes to syntax. What becomes important is *what* you’re trying to do. If you can think through what you’re trying to accomplish, there’s usually a help file or a Google search that can pull up the right way to do things.

This extends its way into a GUI/API-driven programming interface as well. Rather than trying to intuit the interface, just think about what you want to do instead. If you want two hosts to talk to each other through a low-cost link with basic security, you just have to figure out what the drag-and-drop is for that. If you want to force application-specific traffic to transit a host running an intrusion prevention system, you already know what you want to do. It’s just a matter of finding the right combination of interface programming to accomplish it. If you’re working on an API call using Python or Java, you probably have to define the constraints of the system anyway. The hard part is writing the code that interfaces with the system to accomplish the task.
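As a rough sketch of that “describe the what, not the how” idea, here’s what an intent might look like expressed as data in Python. Everything here is invented for illustration – the payload fields, the waypoint constraint, and the host names are not from any real controller’s API; real SDN controllers each define their own schemas:

```python
# Hypothetical sketch: describe WHAT you want the network to do as data,
# and let a controller figure out HOW.  All field names below are made up.

def build_intent(src_host, dst_host, via=None, max_cost=None):
    """Translate a goal (connectivity plus constraints) into a
    declarative request body a controller could act on."""
    intent = {
        "type": "host-to-host",
        "source": src_host,
        "destination": dst_host,
        "constraints": {},
    }
    if via:
        # e.g. force application traffic through an IPS appliance
        intent["constraints"]["waypoint"] = via
    if max_cost is not None:
        # e.g. prefer a low-cost link
        intent["constraints"]["max-link-cost"] = max_cost
    return intent

# "Make these two hosts talk, through the IPS, over a cheap link"
request = build_intent("10.1.1.10", "10.2.2.20", via="ips-01", max_cost=10)
```

The point of the sketch is that the hard thinking happens before any syntax: once the goal is written down this clearly, mapping it onto a particular GUI, API, or CLI is the easy part.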

Tom’s Take

Learning the process is the key to making it in networking. So many entry level folks are worried about *how* to do something. Configuring a route or provisioning a VLAN are the end goal. It’s only when those folks take a step back and think about their task without the commands that they begin to become real engineers. When you can visualize what you want to do without thinking about the commands you need to enter to do it, you are taking the logical step beyond being tied to a platform. Some of the smartest people I know break a task down into component parts and steps. When you spend more time on *what* you are doing and less on *how* you are doing it, you don’t need to concern yourself with radical shifts in networking, whether they be SDN, NFV, or the next big thing. Because the process will never change even if the tools might.

Objective Lessons


“Experience is a harsh teacher because it gives the test first and the lesson afterwards.” – Vernon Law

When I was in college, I spent a summer working for my father.  He works in the construction business as a superintendent.  I agreed to help him out in exchange for a year’s tuition.  In return, I got exposure to all kinds of fun methods of digging ditches and pouring concrete.  One story that sticks out in my mind over and over taught me the value of the object lesson.

One of the carpenters that worked for my father had a really bad habit of breaking sledgehammer handles.  When he was driving stakes for concrete forms, he never failed to miss the head of the 2×4 by an inch and catch the top of the handle on it instead.  The force of the swing usually caused the head to break off after two or three misses.  After the fourth or fifth broken handle, my father finally had enough.  He took an old sledgehammer head and welded a steel pipe to it to serve as a handle.  When the carpenter brought him his broken hammer yet again, my father handed him the new steel-handle hammer and said, “This is your new tool.  I don’t want to see you using any hammer but this one.”  Sure enough, the carpenter started driving the 2×4 form stakes again.  Only this time when he missed his target, the steel handle didn’t offer the same resistance as the wooden one.  The shock of the vibration caused the carpenter to drop the hammer and shake his hand in a combination of frustration and pain.  When he picked up the hammer again, he made sure to measure his stance and swing to ensure he didn’t miss a second time.  By the end of the summer, he was an expert sledgehammer swinger.

Amusing as it may be, this story does have a purpose.  People need to learn from failure.  For some, the lesson needs to be a bit more direct.  My father’s carpenter had likely been breaking hammer handles his entire life.  Only when confronted with a more resilient handle did he learn to adjust his processes and fix the real issue – his aim.  In technology, we often find that incorrect methods are as much to blame for problems as bad hardware or buggy software.

Thanks to object lessons, I’ve learned to never bridge the two terminals of an analog 66-block connection with a metal screwdriver lest I get a shocking reward.  I’ve watched others try to rack fully populated chassis switches by brute force alone.  And we won’t talk about the time I watched a technician rewire a 220 volt UPS receptacle without turning off the breaker (he lived).  Each time, I knew I needed to step in at some point to prevent physical harm to the person or prevent destruction of the equipment.  But for these folks, the lesson could only be learned after the mistake had been made.  I think this recent tweet from Teren Bryson (@SomeClown) sums it up nicely:

Some people don’t listen to advice.  That’s a fact borne out over years and years of working in the industry.  They know that their way is better or more appropriate even against the advice of multiple experts with decades of experience.  For those people that can’t be told anything, a lesson in reality usually serves as the best instructor.  The key is not to immediately jump to the I Told You So mentality afterward.  It is far too easy to watch someone create a bridging loop against your advice and crash a network only to walk up to them and gloat a little about how you knew better.  Instead of stroking your own ego at the expense of an embarrassed and potentially worried co-worker, take the time to discuss with them why things happened the way they did and coach them not to make the same mistakes again.  Help them learn from the lesson rather than covering it up.

Tom’s Take

I’ve screwed up before.  Whether it was deleting mailboxes or creating a routing loop I think I’ve done my fair share of failing.  Object lessons are important because they quickly show the result of failure and give people a chance to learn from it.  You naturally feel embarrassed and upset when it happens.  So long as you gather your thoughts and channel all that frustration into learning from your mistake then things will work out.  It’s only the people that ignore the lesson or assume that the mistake was a one-time occurrence that will continually subject themselves to object lessons.  And those lessons will eventually hit home with the force of a sledgehammer.

Is It Time To Remove the VCP Class Requirement?

While I was at VMware Partner Exchange, I attended a keynote address. This in and of itself isn’t a big deal. However, one of the bullet points that came up in the keynote slide deck gave me a bit of pause. VMware is changing some of their VSP and VTSP certifications to be more personal and direct. Being a VCP, this didn’t really impact me a whole lot. But I thought it might be time to tweet out one of my oft-requested changes to the certification program:

Oops. I started getting flooded with mentions. Many were behind me. Still others were vehemently opposed to any changes. They said that dropping the class requirement would devalue the certification. I responded as best I could in many of these cases, but the reply list soon outgrew the words I wanted to write down. After speaking with some people, both officially and unofficially, I figured it was due time I wrote a blog post to cover my thoughts on the matter.

When I took the VMware What’s New class for vSphere 5, I mentioned that I thought the requirement for taking a $3,000US class for a $225 test was a bit silly. I myself took and passed the test based on my experience well before I sat the class. Because my previous VCP was on VMware ESX 3 and not on ESX 4, I still had to sit in the What’s New course before my passing score would be accepted. To this day I still consider that a silly requirement.

I now think I understand why VMware does this. Much of the What’s New and Install, Configure, and Manage (ICM) classes are hands-on lab work. VMware has gone to great lengths to build out the infrastructure necessary to allow students to spend their time practicing the lab exercises in the courses. These labs rival all but the CCIE practice lab pods that I’ve seen. That makes the course very useful to all levels of students. The introductory people that have never really touched VMware get to experience it for real instead of just looking at screenshots in a slide deck. The more experienced users that are sitting the class for certification or perhaps to refresh knowledge get to play around on a live system and polish skills.

The problem comes that investment in lab equipment is expensive. When the CCIE Data Center lab specs were released, Jeff Fry calculated the list price of all the proposed equipment and it was staggering. Now think about doing that yourself. With VMware, you’re going to need a robust server and some software. Trial versions can be used to some degree, but to truly practice advanced features (like storage vMotion or tiering) you’re going to need a full setup. That’s a bit out of reach for most users. VMware addressed this issue by creating their own labs. The user gets access to the labs for the cost of the ICM or What’s New class.

How is VMware recovering the costs of the labs? By charging for the course. Yes, training classes aren’t cheap. You have to rent a room and pay for expenses for your instructor and even catering and food depending on the training center. But $3,000US is a bit much for ICM and What’s New. VMware is using those classes to recover the costs of the lab development and operation. In order to be sure that the costs are recovered in the most timely manner, the metrics need to make sense for class attendance. Given the chance, many test takers won’t go to the training class. They’d rather study from online material like the PDFs on VMware’s site or use less expensive training options like TrainSignal. Faced with the possibility that students may elect to forego the expensive labs, VMware did what they had to do to ensure the labs would get used, and therefore the metrics worked out in their favor – they required the course (and labs) in order to be certified.

For those that say that not taking the class devalues the cert, ask yourself one question. Why does VMware only require the class for new VCPs? Why are VCPs in good standing allowed to take the test with no class requirement and get certified on a new version? If all the value is in the class, then all VCPs should be required to take a What’s New class before they can get upgraded. If the value is truly in the class, no one should be exempt from taking it. For most VCPs, this is not a pleasant thought. Many that I talked to said, “But I’ve already paid to go to the class. Why should I pay again?” This just speaks to my point that the value isn’t in the class, it’s in the knowledge. Besides VMware Education, who cares where people acquire the knowledge and experience? Isn’t a home lab just as good as the ones that VMware built?

Thanks to some awesome posts from people like Nick Marus and his guide to building an ESXi cluster on a Mac Mini, we can now acquire a small lab for very little out-of-pocket. It won’t be enough to test everything, but it should be enough to cover a lot of situations. What VMware needs to do is offer an alternate certification requirement that takes a home lab into account. While there may be ways to game the system, you could require a VMware employee or certified instructor or VCP to sign off on the lab equipment before it will be blessed for the alternate requirement. That should keep it above board for those that want to avoid the class and build their own lab for testing.

The other option would be to offer a more “entry level” certification with a less expensive class requirement that would allow people to get their foot in the door without breaking the bank. Most people see the VCP as the first step in getting VMware certified. Many VMware rock stars can’t get employed in larger companies because they aren’t VCPs. But they can’t get their VCP because they either can’t pay for the course or their employer won’t pay for it. Maybe by introducing a VMware Certified Administrator (VCA) certification and class with a smaller barrier to entry, like a course in the $800-$1000US range, VMware can get a lot of entry level people on board with VMware. Then, make the VCA an alternate requirement for becoming a VCP. If the student has already shown the dedication to getting their VCA, VMware won’t need to recoup the costs from them.

Tom’s Take

It’s time to end the VCP class requirement in one form or another. I can name five people off the top of my head that are much better at VMware server administration than I am that don’t have a VCP. I have mine, but only because I convinced my boss to pay for the course. Even when I took the What’s New course to upgrade to a VCP5, I had to pull teeth to get into the last course before the deadline. Employers don’t see the return on investment for a $3,000US class, especially if the person that they are going to send already has the knowledge shared in the class. That barrier to entry is causing VMware to lose out on the visibility that having a lot of VCPs can bring. One can only hope that Microsoft and Citrix don’t beat VMware to the punch by offering low-cost training or alternate certification paths. For those just learning or wanting to take a less expensive route, having a Hyper-V certification in a world of commoditized hypervisors would fit the bill nicely. After that, the reasons for sticking with VMware become less and less important.

Cloud and the Death of E-Rate

Seems today you can’t throw a rock without hitting someone talking about the cloud.  There’s cloud in everything from the data center to my phone to my TV.  With all this cloud talk, you’d be pretty safe to say that cloud has its detractors.  There’s worry about data storage and password security.  There are fears that cloud will cause massive layoffs in IT.  However, I’m here to take a slightly different road with cloud.  I want to talk about how cloud is poised to harm your children’s education and bankrupt one of the most important technology advantage programs ever.

Go find your most recent phone bill.  It doesn’t matter whether it’s a landline phone or a cell phone bill.  Now, flip to the last page.  You should see a minor line item labeled “Federal Universal Service Fee”.  Just like all other miscellaneous fees, this one goes mostly unnoticed, especially since it’s required on all phone numbers.  All that money that you pay into the Universal Service Fund is administered by the Universal Service Administrative Company (USAC), which operates under the direction of the FCC.  USF has four divisions, one of which is the Schools and Libraries Division (SLD).  This portion of the program has a more commonly used name – E-Rate.  E-Rate helps schools and libraries all over the country obtain telecommunications and Internet access.  It accomplishes this by providing a fund that qualifying schools can draw from to help pay for a portion of their services.  Schools are classified into a range of discount percentages, from as low as 20% all the way up to 90%.  A school at the 90% level only has to pay $.10 on the dollar for its telecommunications services.  Those schools also happen to be the ones most in need of assistance, usually because of things such as rural location or other funding challenges.

E-Rate is divided into two separate pieces – Priority One and Priority Two.  Priority One is for telecommunications service and Internet access.  Priority One pays for phone service for the school and the pipeline to get them on the Internet.  The general rule for Priority One is that it is service-based only.  There usually isn’t any equipment provided by Priority One – at least not equipment owned by the customer.  Back in 1997, the first year of E-Rate, a T1 was considered a very fast Internet circuit.  Today, most schools are moving past 10Mbit Ethernet circuits and looking to 100Mbit and beyond to satisfy voracious Internet users.  All Priority One requests must be fulfilled before Priority Two requests will begin to be funded.  Once USAC starts funding Priority Two, they start at the 90% discount percentage and begin funding requests until the $2.25 billion allocated each year to the program is exhausted.  Priority Two covers Internal Connections and basic maintenance on those connections.  This is where the equipment comes in.  You can request routers, switches, wireless APs, Ethernet cabling, and even servers (provided they meet the requirements of providing some form of Internet access, like e-mail or web servers).  You can’t request PCs or phone handsets.  You can only ask for approved infrastructure pieces.  The idea is that Priority Two facilitates connectivity to Priority One services.  Priority Two allocations vary every year.  Some years they never fund past the 90% mark.  Two years ago, they funded all applicants.  It all depends on how much money is left over after all Priority One requests are satisfied.  There are rules in place to prevent schools from asking for new equipment every year to keep things fair.  Schools can only ask for internal connections two out of any five given years (the 2-of-5 rule).  In the other three years, they must ask for maintenance of that equipment.
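The funding order described above can be sketched as a toy simulation. All of the dollar figures and the simple stop-when-exhausted rule below are my own invention for illustration; the real process involves applications, reviews, and other wrinkles this ignores:

```python
# Toy model of the E-Rate funding order: Priority One is always funded in
# full, then Priority Two requests are funded starting at the highest
# discount band (90%) and working downward until the yearly cap runs out.
# All dollar figures here are invented for illustration.

CAP = 2.25e9  # yearly program allocation in dollars

def fund_requests(priority_one, priority_two, cap=CAP):
    """priority_one: list of request amounts (always funded first).
    priority_two: list of (discount_percent, amount) tuples."""
    remaining = cap - sum(priority_one)   # P1 comes off the top
    funded = []
    # Work from the 90% band downward
    for discount, amount in sorted(priority_two, reverse=True):
        if amount > remaining:
            break                         # cap exhausted; lower bands get nothing
        funded.append((discount, amount))
        remaining -= amount
    return funded, remaining

# Example: P1 requests eat $1B of the cap, leaving $1.25B for P2.
# The 90% band fits, but the 80% band does not -- it gets nothing.
funded, leftover = fund_requests(
    priority_one=[1.0e9],
    priority_two=[(90, 8.0e8), (80, 6.0e8), (70, 3.0e8)],
)
```

The sketch makes the squeeze obvious: every dollar added to the Priority One side shrinks `remaining` before a single Priority Two request is even considered.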

There has always been a tug-of-war between what things should be covered under Priority One and Priority Two.  As I said, the general rule is that Priority One is for services only – no equipment.  One of the first things that was discussed was web hosting.  Web servers are covered under Priority Two.  A few years ago, some web hosting providers were able to get their services listed under Priority One.  That meant that schools didn’t have to apply to have their web servers installed under Priority Two.  They could just pay someone to host their website under Priority One and be done with it.  No extra money needed.  This was a real boon for those schools with lower discount percentages.  They didn’t have to hope that USAC would fund down into the 70s or the 60s.  Instead, they could have their website hosted under Priority One with no questions asked.  Remember, Priority One is always funded before Priority Two is even considered.  This fact has led to many people attempting to get qualifying services set up under Priority One.  E-mail hosting and voice over IP (VoIP) are two that immediately spring to mind.  E-mail hosting goes without saying.  Priority One VoIP is new to the current E-Rate year (Year 15) as an eligible service.  The general idea is that a school can use a VoIP system hosted at a central location from a provider and have it covered as a Priority One service.  This still doesn’t cover handsets for the users, as those are never eligible.  It also doesn’t cover a local voice gateway, something that is very crucial for schools that want to maintain a backup just in case their VoIP connectivity goes down.  However, it does allow the school to have a VoIP phone system funded every year as opposed to hoping that E-Rate will fund low enough to cover it this year.

While I agree that access to more services is a good thing overall, I think we’re starting to see a slippery slope that will lead to trouble very soon.  ISPs and providers are scrambling to get anything and everything they can listed as a Priority One service.  Why stop at phones?  Why not have eligible servers hosted on a cloud platform?  Outsource all the applications you can to a data center far, far away.  If you can get your web, e-mail, and phone systems hosted in the cloud, what’s left to place on site in your school?  Basic connectivity to those services, perhaps.  We still need switches and routers and access points to enable our connectivity to those far away services.  Except…the money.  Since Priority One always gets funded, everything that gets shoveled into Priority One takes money that could be used in Priority Two for infrastructure.  Schools that may never get funded at 25% will have their e-mail hosting paid for, while a 90% school that could really use APs to connect a mobile lab may get left out even though they have a critical need.  Making things Priority One just for the sake of getting them funded doesn’t really help when the budget for the program is capped from the beginning.  It’s already happening this year.  E-Rate Year 15 will only fund down to 90% for Priority Two.  That’s only because there was a carryover from last year.  Otherwise, USAC was seriously considering not funding Priority Two at all this year.  No internal connections.  No basic maintenance.  Find your own way, schools.  Priority One is eating up the fund with all the new cloud services being considered, not to mention the huge increase in faster Internet circuits needed to access all these cloud services.  Network World recently had a report saying that schools need 100Mbps circuits.  Guess where the money to pay for those upgrades is going to come from?  Yep, E-Rate Priority One.  At least, until the money runs out because server hosting is a qualifying service this year.

Most of the schools that get E-Rate funding for Priority Two wouldn’t be able to pay for infrastructure services otherwise.  Unlike large school districts, these in-need schools may be forced to choose between adding a switch to connect a lab and adding another AP to cover a classroom.  Every penny counts, even when you consider they may only be paying 10-12% of the price in the first place.  If Priority One services eat up all the funding before we get to Priority Two, it may not matter a whole lot to those 90% schools.  They may not have the infrastructure in place to access the cloud.  Instead, they’ll limp along with a T1 or a 10Mbps circuit, hoping that one day Priority Two might get funded again.

How do we fix this before cloud becomes the death mask for E-Rate?  We have to ensure that USAC knows that hosting services need to be considered separately from Priority One.  I’m not quite sure how that needs to happen, whether it needs to be a section under Priority Two or if it needs to be something more like Priority One And A Half.  But lumping hosted VoIP in with Internet access simply because there is no on-site equipment isn’t the right solution.  Since a large majority of the schools that qualify for E-Rate are lower elementary schools, it makes sense that they have the best access to the Internet possible, along with good intra-site connectivity.  A gigabit Internet circuit doesn’t amount to much if you are still running on 10Mbps hubs (don’t laugh, it’s happened).  If USAC can’t be convinced that hosted services need to be separated from other Priority One access, maybe it’s time to look at raising the E-Rate cap.  Every year, the amount of requests for E-Rate is more than triple the funding commitment.  That’s a lot of paperwork.  The $2.25 billion allocation set forth in 1997 may have been a lot back then, but looking at the number of schools applying today, it’s just a drop in the bucket.  E-Rate isn’t the only component of USF, and any kind of increase in funding will likely come from an increase in the USF fees that everyone pays.  That’s akin to raising taxes, which is always a hot button issue.  The program itself has even come under fire both in the past and in recent years due to mismanagement and fraud.  I don’t have any concrete answers on how to fix this problem, but I sincerely hope that bringing it to light draws some attention to the way that schools get their technology needs addressed.  I also hope that it makes people take a hard look at the cloud services being proposed for inclusion in E-Rate and think twice about taking an extra bucket of water from the well.  After all, the well will run dry sooner or later.  Then everyone goes thirsty.


I am employed by a VAR that focuses on providing Priority Two E-Rate services for schools.  The analysis and opinions expressed in this article do not represent the position of my employer and are my thoughts and conclusions alone.

Mental Case – In a Flash(card)

You’ve probably noticed that I spend a lot of my time studying for things.  Seems like I’ve always been reading things or memorizing arcane formulae for one reason or another.  In the past, I have relied upon a large number of methods for this purpose.  However, I keep coming back to the tried-and-true flash card.  To me, it’s the most basic form of learning.  A question on the front and an answer on the back is all you need to drill a fact into your head.  As I started studying for my CCIE lab exam, this was the route that I chose to go down when I wanted to learn some of the more difficult features, like BGP suppress maps or NTP peer configurations.  It was a pain to hand write all that info out on my cards.  Sometimes it didn’t all fit.  Other times, I couldn’t read my own writing.  I wondered if there was a better solution.

Cue my friend Greg Ferro and his post about a program called Mental Case.  Mental Case, from Mental Faculty, is a program designed to let you create your own flash cards.  The main program runs on a Mac and allows you to create libraries of flash cards.  When you first launch the app, there are a lot of good example sets for things like languages.  But as you go through some of the other examples, you can see the power that Mental Case gives you above and beyond a simple 3″x5″ flash card.  For one thing, you can use pictures in your flash cards.  This is handy if you are trying to learn about art or landmarks, for instance.  You could also use it as a quick quiz on Cisco Visio shapes or wireless antenna types.  It’s a great way to study things more advanced than simple text.

Once you dig into Mental Case, though, you can see some of the things that separate it from traditional pen and paper.  While it might be handy to have a few flash cards in your pocket to take out and study when you’re in line at the DMV, more often than not you tend to forget about them.  Mental Case can set up a schedule for you to study.  It will pop up and tell you that it’s time to do some work, which is great as a constant reminder of what you need to learn.  Another nice touch is the learning feature.  If you have ever used flash cards, you probably know that after a while you tend to know about 80% of them cold with little effort.  However, there are about 20% that float in the middle of the pack and get skipped past without much reinforcement.  They get lost in the shuffle, so to speak.  With Mental Case, the questions you get wrong more often are shuffled to the front, where your attention is more focused.  In effect, Mental Case learns how you learn best.  You can also set Mental Case to shuffle or even reverse the card deck to keep you on your toes.
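Mental Faculty doesn’t publish its actual scheduling algorithm, but the behavior described above, missed cards bubbling to the front of the deck, can be sketched in a few lines.  This is only an illustration; the function name and the miss-count bookkeeping are my own invention:

```python
import random

def reorder_deck(cards, wrong_counts):
    """Surface frequently-missed cards first, roughly like Mental Case's
    learning feature (a sketch; the real scheduling is not published).

    cards: list of card identifiers.
    wrong_counts: dict mapping a card to how often it was answered wrong.
    """
    cards = list(cards)      # work on a copy so the caller's deck is untouched
    random.shuffle(cards)    # break ties between equally well-known cards
    # Stable sort, descending by miss count, so trouble cards lead the deck.
    return sorted(cards, key=lambda c: wrong_counts.get(c, 0), reverse=True)

deck = ["ospf-cost", "bgp-suppress", "ntp-peer", "rip-timers"]
misses = {"bgp-suppress": 3, "ntp-peer": 1}
print(reorder_deck(deck, misses)[0])  # -> bgp-suppress, the most-missed card
```

The shuffle-then-stable-sort trick means the cards you know cold still come up in a fresh order each session, while the ones you keep missing always greet you first.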

When you couple all of these features with the fact that there is a Mental Case iOS client as well as a desktop version, your study efficiency goes through the roof.  Rather than only being able to study your flash cards at your desk, you can take them with you everywhere.  Considering that most people today spend an awful lot of time staring at their iPhones and iPads, it’s nice to know that you can pull up a set of flash cards from your mobile device and go to town at a moment’s notice, like in that line at the DMV.  In fact, that’s how I got started with Mental Case.  I downloaded the iOS app and started firing out flash cards for things like changing RIP timers and configuring SSM.

However, the main Mental Case app only runs on a Mac, and at the time I didn’t have one.  How did I do it?  Mental Case seems to have thought of everything.  While the iOS app works best in concert with the Mac app, you can also create flash cards on other sites, like FlashcardExchange and Quizlet.  You can create decks and make them publicly available to everyone, or just share them among your friends.  You do have to make a deck public long enough to download it to Mental Case for iOS, but it can be protected again afterwards if you are studying information that shouldn’t be shared with the rest of the world.  Note, though, that the iOS version of the software is a little more basic than the Mac one.  It doesn’t support wacky text formatting or multiple-choice quizzes.  Also, cards created with more than two “sides” (Mental Case calls them facets) will only display properly in slideshow mode.  But if you think of the iOS client as a replacement for the stack of 10,000 flash cards you might already be carrying in your backpack or pocket, the limitations aren’t that severe after all.

The latest version of Mental Case has the option to share content between Macs via iCloud, which allows you to keep your decks synced between your different computers.  You still have to sync the cards between your Mac and your iOS device via Wi-Fi, and you can share at shorter ranges over Bluetooth.  You can also create a collection of cards known as a Study Archive and place it in a central location, like Dropbox.  This wasn’t a feature when I was using Mental Case full time, but I like the idea of being able to keep my cards in one place all the time.

Mental Case is running a special on their software for the next few days.  Normally, the Mac version costs $29.99, which is worth every penny if you spend time studying.  For the next few days, though, it’s only $9.99.  That’s a steal for such a powerful study program.  The iOS app is also on sale: normally $4.99, it’s just $2.99.  On its own, the iOS app is a great resource.  Paired with its bigger brother, it’s a no-brainer.  Run out and grab these two programs and spend more time studying your facts and figures efficiently and less time creating them.  If you’d like to learn more about Mental Case from Mental Faculty, you can check out their website.


I am a Mental Case iOS user.  I have used the demo version of the Mental Case Mac app.  Mental Case has not contacted me about this review, and no promotional consideration was given.  I’m just a really big fan of the app and wanted to tell people about it.

Networking Is Not Trivia(l)

Fun fact: my friends and family have banned me from playing Trivial Pursuit.  I played the Genus 4 edition in college so much that I practically memorized the card deck.  I can’t play the Star Wars version or any other licensed set.  I chalk a lot of this up to the fact that my mind seems to be wired for trivia.  For whatever reason, pointless facts stick in my head like glue.  I knew what an aglet was before Phineas & Ferb.  My head is filled with random statistics and anecdotes about subjects no one cares about.  I’ve been accused in the past of reading encyclopedias in my spare time.  Amusingly enough, I do tend to consume articles on Wikipedia quite often.  All of this led me to picking a career in computers.

Information Technology is filled with all kinds of interesting trivia.  Whether it’s knowing that Admiral Grace Hopper popularized the term “bug” or remembering that the default OSPF reference bandwidth is 100 Mbps, there are thousands of informational nuggets lying around, waiting to be discovered and cataloged away for a rainy day.  With my love of mindless minutiae, it comes as no surprise that I tend to devour all kinds of information related to computing.  After a while, though, I started to realize that simply amassing all of this information doesn’t do anyone any good.  Remembering that EIGRP bandwidth values are multiplied by 256 means little without the bigger picture: the multiplier exists for backwards compatibility with IGRP.  The individual facts themselves are useless without context and application.
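To put that 256 multiplier in context: with the default K values, EIGRP’s composite metric is just IGRP’s bandwidth-plus-delay formula scaled up from a 24-bit number to a 32-bit one.  A quick sketch of the arithmetic (the function name here is mine, not Cisco’s):

```python
def eigrp_metric(min_bw_kbps, total_delay_usec):
    """Classic EIGRP composite metric with default K values (K1=K3=1, rest 0).

    min_bw_kbps: lowest bandwidth along the path, in kbps.
    total_delay_usec: sum of interface delays along the path, in microseconds.
    """
    bw_term = 10**7 // min_bw_kbps       # inverse bandwidth, IGRP-style scaling
    delay_term = total_delay_usec // 10  # delay counted in tens of microseconds
    # The trailing multiply by 256 widens IGRP's 24-bit metric to 32 bits.
    return 256 * (bw_term + delay_term)

# A directly connected FastEthernet interface: 100,000 kbps, 100 usec delay.
print(eigrp_metric(100_000, 100))  # 28160
```

That 28160 is the familiar value you see in `show ip route` for a connected FastEthernet link, and dividing it by 256 gives you exactly what IGRP would have computed.  That’s the kind of context that turns a bare factoid into something usable.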

I tried to learn how to play the guitar many years ago.  I went out and got a starter acoustic guitar and a book of chords and spent many diligent hours practicing the proper fingering to make something other than noise.  I was getting fairly good at producing chords without a second thought.  It started falling apart when I tried to play my first song, though.  While I was good at making the individual notes, when it came time to string them together into something that sounded like a song, I wasn’t quite up to snuff.  In much the same way, being an effective IT professional is more than just knowing a whole bunch of stuff.  It’s finding a way to take all that knowledge and apply it.  You need to take all those little random bits of trivia and learn to apply them to problems so you can fix things efficiently.  People that depend on IT don’t really care what the multicast address for RIPv2 updates is.  What they want is a stable routing table, or to find the access list that’s blocking their traffic.  It’s up to us to make a song out of all the network chords we’ve learned.

It’s important to know all of those bits of trivia in the long run.  They come in handy for things like tests or cocktail party anecdotes.  However, treat them like building blocks: take what you need to form a bigger picture.  Instead of becoming bogged down in the details of deciding what to implement based on sheer knowledge alone, you can build a successful strategy.  Think of the idea of the gestalt: the whole is often greater than the sum of its parts.  That’s how you should look at IT-related facts.

Tom’s Take

I’m never going to stop learning trivia.  It’s as ingrained into my personality as snark and sarcasm.  However, if I’m going to find a way to make money off all that trivia, I need to remember that factoids are useless without application.  I must always keep in mind that solutions are what matter to decision makers.  After all, the snark and sarcasm aren’t likely to amount to much of a career.  At least not in networking.

More Technical Presentation Tips

As an engineer for a Value-Added Reseller (VAR) as well as a frequent Tech Field Day delegate and technical presenter, I spend a lot of my time listening to presentations.  I often find myself critiquing them for things like speaker delivery and content.  I feel that it’s my duty to share some of my thoughts on presenting and presentation structure, especially when you choose to talk to a group of technical people.  I’ve already talked about some presentation tips before, so what follows are a couple of new things that I’ve been thinking about for the last year or so.

Time Is Not On Your Side

One of the biggest concerns that I’ve seen with technical presentations as of late is the time issue.  People are typically given a one or two hour presentation slot depending on the event I am attending or presenting at.  The presenter then proceeds to fill the entire time with slide decks and lecture.  Every minute of the presentation is accounted for by a bullet point or a fancy animated slide.  Should someone disrupt the flow of the presenter’s zen with a question or a request for clarification, they are met either with a curt answer or a request to hold all questions until the end of the session.  After the end of the presentation, there is usually very little time for Q&A.

Nowhere was this more apparent to me than at the recent Network Field Day 3.  We managed to gather a great group of individuals once again to listen to industry experts talk about great new technologies.  However, for the first time that I can remember, we had a group that was willing to start peppering away with questions not even five minutes into the presentation.  Between Ivan Pepelnjak (@ioshints) and Marko Milivojevic (@icemarkom), there were some very good back-and-forth discussions going on.  I love these kinds of discussions.  They really show how people can take a point and launch from it into a rabbit hole of technical brilliance.  The problem with these discussions comes when you have the aforementioned presenters who have filled every minute with a slide.  There’s no room to freestyle and talk about things.

Occasionally, you have companies like Metageek come along and do something totally off the wall.  They want to listen as much as they want to present.  At Wireless Field Day 2, Ryan and Trent spent quite a bit of time talking to the delegates and getting feedback.  I’d say the last twenty minutes of their presentation was spent posing questions rather than answering them.  I found this refreshing.  So refreshing, in fact, that my presentation on cloud computing not a month later got slashed from its allotted hour down to around 45-50 minutes.  Why?  I wanted to get good feedback from my audience.  I wanted to field questions as they came in and not worry about running out of time to get to my last slide.  I wanted to be sure that my presentation involved the audience as much as possible.

I think that’s a key that needs to be taken forward by presenters.  Don’t look at your time slot as a container to fill to the brim with your own ideas.  Instead, take a cue from the coffee bars of the world and pour your slot almost full.  Leave some room for questions and discussion, which are just like the sweetener and cream I pour in my coffee.  Aim to fill 75-80% of your time slot with presentation; the rest should be for your audience.  Even if you don’t get a lot of questions, at least the people will be happy that they got out fifteen minutes early and don’t have to rush to their next session.  Either way, your audience will love you.

Live By The Demo, Die By The Demo

Oh, the demo.  How I love thee.  No boring slide deck.  No relentless bullet points.  All the joy of seeing something work in real life.  But, at the same time I hate the demo.  Too much chance for failure.  Too easy for things to go off the rails and result in a wandering audience.  How then do we reconcile the good things about a demo with all the possible downsides?

The key to giving a good demo is to make it flow.  Come up with a script for your tour that moves the viewers seamlessly from one area to the next.  It should feel connected and coherent.  You should leave some time for improvisation in case your audience finds an area where they would like to spend more time.  However, these rabbit holes are the first sign that the demo pitfalls are coming.  It’s all too easy to waste time talking about a specific feature and lose sight of the big picture.  When that happens, you get lots of sidebar conversations among your audience.  When the people you are talking to spend more time talking to each other, you’ve lost control, and you need to find a way to bring things back to you.

It’s also important to note that technical people hate watching progress bars and incrementing counters.  If your demo is going to require time to load a program or push out firmware, consider kicking it off early in your presentation and then talking about a specific feature or fielding questions while it runs in the background.  Infineta did this at Network Field Day 3.  Rather than let us watch a couple of hundred gigabytes of traffic flooding across a boring screen, they kicked off the demo and let it run in the background while they melted our brains with algorithm math.  When we had been beaten into submission by formulae, we flipped back over to see the results of the live demo.  All the benefits of a real walkthrough without any wasted time.

Tom’s Take

There’s no such thing as a perfect presentation.  It’s a goal that we all strive for but can never really reach.  That’s not to say we as presenters can’t give it our best shot.  Not all of these tips will apply to you; in fact, a large portion of the presentations that I give either don’t involve a demo or don’t have a place for one.  The key is to recognize that a live (or simulcast) audience isn’t a group of mindless drones that will absorb your every word without question.  You should do your best to involve and include them at every step of the way.  When the audience feels they have a say in the content and direction, they’ll be more involved and happier in the end.

Partly Cloudy – A Hallmark Presentation

One of the joys of working for an education-focused VAR is that I get to give technical presentations.  More often than not, I try to get a presentation slot at the Oklahoma Technology Association annual conference.  I did one last year over IPv6 to a packed house…of six people.  This year, I jumped at the chance to grab a slot and talk about something new and different.

The Cloud.

Yes, I figured it was about time to teach the people in education some of the basics behind the cloud.  When the call for presentations came out, I registered almost immediately.  This year, I had 12 months’ worth of analysis and experience at Tech Field Day to drive my presentation preparations.

The first thing I knew I needed to do was come up with a catchy title.  People get numbed to the descriptive, SEO-friendly titles that get put on conference agendas.  As you can tell from the titles of my blog posts, I want something that’s going to pop.  I decided to theme my presentation after a weather report, so calling it “Partly Cloudy” seemed like a no-brainer.  I added “Forecast For Your Technology Future” as a subtitle to ensure that people didn’t think I was talking strictly about meteorology.  I spent a bit of time laying out slides and putting some thoughts down.  I hate when people read their bullet points from a slide deck, so I use mine more as discussion points.  They serve as a way to keep me on track and help focus me on what I want to say to my audience.

I also decided to do something fun for the audience, an idea I shamelessly stole from Cisco Press author Tim Szigeti.  Tim wrote a very good guide to QoS, and when he gives a presentation at Cisco Live, he gives away a copy of said book to the first person to ask a question.  I loved the idea and wanted to do something similar.  However, I’m not an author.  I wracked my brain trying to come up with a good idea, and that was where I hit on using an umbrella as a prop.  You’ll see why in just a minute.

When I got to the room to do my presentation, I was astonished.  There were almost 90 people in the audience!  I got a little jittery realizing how many people were there, especially the ones I didn’t know.  I got everything set up and started my video camera so I could go back after the fact and not only post about it on my blog, but also have a reference for what I did right and what I could have done better.  Here’s me:

If you’d like to follow along with my slide deck, you can download the PDF HERE.

Compared to last year, I desperately wanted to avoid using the word “so” as much as I did.  I practiced a lot to try to leave it out as a pause word or a joining word.  If you’ve ever talked to me in real life, you can understand how hard that is for me.  Unfortunately, I think I latched on to the word “hallmark” instead and used it a little more than I should have.  I’m not sure why I did that, to be honest.  But as far as these things go, it could have been much worse.

One thing that did unnerve me a little was that people started walking out of my presentation after about ten minutes.  Having left a few presentations early in my lifetime, I started wondering in the back of my mind what could be causing people to leave.  Was I boring?  Was the subject matter too elementary?  Did people just hate the sound of my voice?  All in all, about twenty people left before the end, and to be honest, if my company hadn’t been giving away a gift card, it might have been more.

I caught up with several of the early departures during the conference and asked them why they decided to bail.  Their response was almost universal and caught me a little off guard: “You were just talking way over our heads.”  I had never even considered that possibility.  I’d spent so much time making sure my content touched on many areas of the cloud that I forgot most of my audience doesn’t talk to Christofer Hoff (@Beaker) about cloud regularly.  My audience consisted of people that found out about cloud technology from a Microsoft commercial or on their new iPhone.  These people don’t care about instantiation of vCloud Director instances or vApp deployments.  They’re still amazed they can put a contact on their iPhone and have it show up on their iPad.  That was my failing.  I never want to be the guy that talks down to an audience.  In this case, however, I needed to take a step back and make sure my audience was on the same ground I was when it came to talking about the cloud.
Lesson learned.

There were a number of other little things that bugged me.  I didn’t like standing behind a lectern, since I’m usually an animated presenter, but the room design forced me to use a microphone.  I was also forced to insert a couple of things into my slides; I’ll let you guess where those were.  Overall, though, several audience members complimented me, and lots of people came up afterwards to ask me questions about cloud-based software and virtualization.  I think I’m going to do another one of these at the Fall OTA conference, focused on something like virtual desktop infrastructure.  This time I’ll have demos.  And fewer weather-related jokes.

Feedback from my readers is always welcome.  I value every opinion about my presentations, and I always strive to get better at them.  I doubt I’ll ever be the most effective public speaker out there, but I at least want to avoid boring most people to death.