Juniper MX Series – Review

A year ago I told myself I needed to start learning Junos.  While I did sign up for the Fast Track program and have spent a lot of time trying to get the basics of the JNCIA down, I still haven’t gotten around to taking the test.  In the meantime, I’ve had a lot more interaction with Juniper users and Juniper employees.  One of those was Doug Hanks.  I met him at Network Field Day 4 this year.  He told me about a book that he had recently authored that I might want to check out if I wanted to learn more about Junos and specifically the MX router platform.  Doug was kind enough to send me an autographed copy:

MX Series Cover

The covers on O’Reilly books are always the best.  It’s like a zoo with awesome content inside.

This is not a book for the beginner.  Frankly, most O’Reilly press books are written for people that have a good idea about what they’re doing.  If you want to get your feet wet with Junos, you probably need to look at the Day One guides that Juniper provides free of charge.  When you’ve gone through those and want to step up to a more in-depth volume, you should pick up this book.  It’s the most extensive, exhaustive guide to a platform that I’ve seen in a very long time.  This isn’t just an overview of the MX or a simple configuration guide.  This book should be shipped with every MX router that leaves Sunnyvale.  This is a manual for the TRIO chipset and all the tricks you can do on it.

The MX Series book does a great job of not only explaining what makes the MX and TRIO chipset different, but also how to make it perform at the top of its game.  The chapter on Class of Service (CoS) alone is worth its weight in gold.  That topic has worried me in the past because of other vendors’ simplified command line interfaces for Quality of Service (QoS).  This book spells everything out in a nice orderly fashion and makes it all make more sense than I’ve seen before.  I’m pretty sure those pages are going to get reused a lot as I start my journey down the path of Junos.  But just because the book makes things easy to understand doesn’t mean that it’s shallow on technical knowledge or depth.  The config snippet for DDoS mitigation is fifteen pages long!  That’s a lot of info that you aren’t going to find in a Day One guide.  And all of those chapters are backed up with case studies.  It’s not enough that you know how to configure some obscure command.  Instead, you need to see where to use it and what context makes the most sense.  That’s where these things hit home for me.  I was always a fan of word problems in math.  Simple formulas didn’t really hit home for me.  I needed an example to reinforce the topic.  This book does an outstanding job of giving me those case studies.


Tom’s Take

The Juniper MX Series book is now my reference point for what a deep-dive tome on a platform should look like.  It covers the technology to a very exhaustive depth without ever really getting bogged down in the details.  If you sit down and read this cover to cover, you will come away with a better understanding of the MX platform than anyone else on the planet except perhaps the developers.  That being said, don’t sit down and read it all at once.  Take the time to go into the case studies and implement them in your test lab to see how the various features interact.  Use this book as an encyclopedia, not as a piece of fireside reading material.  You’ll thank yourself much later when you’re not having dreams of CoS policies and tri-color policers.

Disclaimer

This copy of Juniper MX Series was provided to me at no charge by Doug Hanks for the purpose of review.  I agreed with Doug to provide an unbiased review of his book based on my reading of it.  There was no consideration given to him on the basis of providing the book and he never asked for any when providing it.  The opinions and analysis provided in this review reflect my views and mine alone.

Unlearning IPv4

"You must unlearn what you have learned." -Yoda

As network rock stars, we’ve all spent the majority of our careers learning how to do things.  We learn how to address interfaces and configure routing protocols.  For many of those out there, technology has changed often enough that we find ourselves needing to retrain.  Whether it be a new version of an old protocol or an entirely new way of thinking about things, there will always come a time when it’s necessary to pick up new knowledge.  However, updated knowledge is often difficult to process.  That’s because the old way of doing things interposes itself in our brains while we’re learning to do it the new way.  How many times have you been practicing something only to hear a little voice in the back of your head saying, “That’s not right.  You should be doing it this way.”  In many ways, it’s like trying to reprogram the pathway in your brain that leads to the correct solution to your problem.

This is very apparent to me when it comes to learning how to configure and set up IPv6 on a network.  Those just starting out in the big wide world of IPv6 need to have some kind of reference point to start configuring things, so they tend to lean back on their IPv4 training in order to get started.  This can work for some applications.  For others, though, it can be quite detrimental to getting IPv6 running the way it should.  Instead of carrying forward the old way of doing things because “that’s just the way they should be done,” you need to start unlearning IPv4.  The little green guy in Empire Strikes Back hit the nail on the head.  The whiney farm boy had spent so much of his life convinced that something was impossible that he couldn’t conceive that someone could lift his starship out of a swamp with the Force.  He had to unlearn that lifting things with his mind was impossible.  Once you take that little step, nothing can stop you from accomplishing anything.

With that in mind, here are a few things that need to be unlearned from our days working with IPv4.  Note that this won’t be easy.  But nothing worth doing is ever easy.

Address Conservation – This one is the biggest stumbling block today.  Look at all the discussion we’ve got around point-to-point links and whether to address them with a /64 or a /127 bit mask.  People claim that addressing this link with a /64 wastes addresses.  To quote the old guy in the desert from Star Wars, “It’s true, depending on your point of view.”  In a given /64, there are approximately 18 quintillion addresses available (I’m rounding to make the math easy).  If you address a point-to-point link with a /64, you’re only going to be using about 1 * 10^-19 of those addresses.  To many, that’s a pretty big waste.  But with numbers that big, your frame of reference gets screwed up.  For example, take a subnet with 4,094 hosts, which today would need a /20 in IPv4.  That’s about the biggest single subnet I can imagine creating.  If you address that 4,094 host subnet with a /64 in IPv6, you’d end up using about 2 * 10^-16 of the address space.  Waste is all a matter of perspective.  On the other hand, by addressing a link with a bit mask longer than a /64, we break neighbor discovery, secure neighbor discovery, and PIM sparse mode with embedded RP, among other things.  We need to unlearn the address conservation mentality and instead concentrate on making our networks easier to configure and manage.
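If you want to sanity-check those fractions yourself, the arithmetic fits in a few lines of Python (the subnet sizes are the same ones used in the paragraph above):

```python
# How much of a single /64 do different subnet sizes actually consume?
total = 2 ** 64                 # one /64 holds 18,446,744,073,709,551,616 addresses

p2p_fraction = 2 / total        # a point-to-point link only ever uses two of them
lan_fraction = 4094 / total     # even a giant 4,094-host LAN barely registers

print(f"point-to-point: {p2p_fraction:.1e}")   # point-to-point: 1.1e-19
print(f"4,094-host LAN: {lan_fraction:.1e}")   # 4,094-host LAN: 2.2e-16
```

Either way you slice it, the "waste" is a rounding error on a rounding error.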

Memorizing IP addresses – I’m guilty of this.  I spend a lot of time working at the command line with IPv4, whether it be via telnet or SSH or even just plugging numbers into a GUI.  My CUCM systems are set up to use IP only.  I memorize the addresses of my servers, or in many cases try to make them mnemonically similar to other systems to jog my memory about where to find them in IP space.  In IPv6, memorizing addresses is going to be impossible.  It’s hard enough for me to remember non-RFC1918 address space as it is with 4 octets of decimal numbers.  Now quadruple that and add in hex addressing.  And when it comes to workstations with SLAAC or DHCPv6 assigned addresses?  Forget about it.  Rather than memorizing address space, we’re going to need to start using DNS for communications between endpoints.  Yes, that means setting up DNS for all your routers and CUCM servers too.  It’s going to be a lot of extra work up front.  It’ll pay off in the long run, though.  I’m sure you’d much rather refer to CUCM1.local rather than trying to remember fe80::ba8d:12ff:fe0b:8aff every time you want to get to the phone server.

Subnet Masks – Never again will you need to see 255 in an IPv6 address unless it’s part of the address.  Subnet masking is dead and buried.  Instead, bit masks and slash notation rule the day.  This is going to be one of the most welcome changes in IPv6, but I think it’s going to take a long time to unlearn.  Not really as much for network engineers, but mainly for the people that have ancillary involvement with networking, such as the server people.  Think about the number of server admins that you’ve talked to that have memorized that the subnet mask of their network card is 255.255.255.0.  Now, ask them what that means.  Odds are good they can’t tell you.  Worse, some of them might say that it’s a Class C subnet mask.  It’s a little piece of anecdotal information that they heard once when the network folks were talking that they just picked up.  Granted, most of the time the servers are going to be addressed with a /64 bit mask on the IPv6 address.  That’s still going to take a while to explain to the non-networking people.  No, you don’t need any more 255s in your address.  Yes, the /64 is the same as that, sort of.  Yes, there’s math involved.  Yes, I’ll take care of all the math.
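For the server admin who wants proof that the dotted mask and the slash are the same thing, Python’s standard ipaddress module shows the equivalence directly (the addresses here are documentation examples, not real networks):

```python
import ipaddress

# IPv4: the dotted-quad mask and the slash prefix are two spellings of one idea.
v4 = ipaddress.IPv4Network("192.0.2.0/255.255.255.0")
print(v4.with_prefixlen)       # 192.0.2.0/24 -- same network, no 255s in sight

# IPv6: slash notation is the only spelling; there is no dotted mask to memorize.
v6 = ipaddress.IPv6Interface("2001:db8::1/64")
print(v6.network)              # 2001:db8::/64
```

Notice that the IPv6 side never even offers a dotted-mask form.  The 255s are simply gone.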

Ships in the Night – As I said on my recent appearance on the Class C Block podcast, I think it’s high time that networking vendors stop treating IPv4 and IPv6 like they are separate entities.  I know that I’ve spent the better part of this blog post talking about how IPv4 and IPv6 require a difference in application and not carrying across old habits and conventions.  The two protocols are more alike than they are different.  That means that we need to stop thinking of IPv6 as an afterthought.  Take a look at the CCIE.  There’s still a separate section for IPv6.  It feels like it was just a piece that was added on to the end of the exam instead of being integrated into the core lab.  Look at Kurt Bales’ review of the JNCIE lab that he took.  Specifically, the last bullet point.  You could be asked to configure something on either IPv4 or IPv6, or even both!  Juniper understands that the people taking the JNCIE today aren’t going to have the luxury of concentrating on just IPv4.  The world is going to require us to use IPv6, so I think it’s only fair that our certification programs start doing the same.  IPv6 should be integrated into every level of certification from CCNA/JNCIA all the way up to CCIE/JNCIE.


Tom’s Take

Working with IPv6 is a big change from the way we’ve done things in the past.  With SLAAC and integrated IPSec, the designers have done a great job of making our lives easier with things that we’ve needed for a long time.  However, we’re doing our best to impede our transition to IPv6 by carrying over a lot of baggage from IPv4.  I know that our brains look for patterns and like to settle on familiarity as a way to help train for new challenges.  If we aren’t careful, we’re going to carry over too much of the old familiar networking and make IPv6 difficult to work with.  Unlearning what we think we know about networking is a good first step.  A person may learn something quickly through familiarity, but they can learn even faster when they approach it with a blank slate and a keen interest in learning.  With that approach, even the impossible won’t keep you from succeeding.

The Five Stages of IPv6 and NAT

I think it’s time to put up a new post on IPv6 and NAT.  Mainly because I’m still getting comments on my old NAT66 post from last year.  I figured it would be nice to create a new place for people to start telling me how necessary NAT is for the Internet of the future.

In the interim, though, I finally had a chance to attend the Texas IPv6 Task Force Winter 2012 meeting.  I got to hear wonderful presentations from luminaries such as John Curran of ARIN, Owen DeLong of Hurricane Electric, and even Jeff Doyle of Routing TCP/IP book fame.  There was a lot of great discussion about IPv6 and the direction that we need to be steering adoption of the new address paradigm.  I also got some very interesting background about the formation of IPv6.  When RFC 1550 was written to start soliciting ideas about a new version of IP, the Internet was a much different place.  Tim Berners-Lee was just beginning to experiment with HTTP.  The majority of computers connected to the Internet used FTP and Telnet.  Protocols that we take for granted today didn’t exist.  I knew IPSec was a creation of the IPv6 working group.  But I didn’t know that DHCP wasn’t created yet (RFC 2131).  Guess what?  NAT wasn’t created yet either (RFC 1631).  Granted, the IPng (IPv6) informational RFC 1669 was published after NAT was created, but NAT as we know and use it today wasn’t really formalized until RFC 2663.  That’s right, folks.

The reason NAT66 doesn’t exist is because IPv6 was built at a time when NAT didn’t exist.

It’s like someone turned on a lightbulb.  That’s why NAT66 has always felt so wrong to me.  Because the people that created IPv6 had no need for something that didn’t exist.  IPv6 was about creating a new protocol with advanced features like automatic address configuration and automatic network detection and assignment.  I mean, take a look at the two IPv6 numbering methods.  Stateless Address Autoconfiguration (SLAAC) can assign all manner of network information to a host.  I can provide prefixes and gateways and even default routes.  However, the one thing that I can’t provide in basic SLAAC is a DNS server entry.  In fact, I can’t provide any of the commonly assigned DHCP options, such as NTP server or other vendor-specific fields.  SLAAC is focused solely on helping hosts assign addresses to themselves and get basic IP connectivity to the global Internet.  Now, take DHCPv6.  This stateful protocol can keep track of options like DNS server or NTP server.  It can also provide a database of assignments so I know which machine has which IP.  But you know what critical piece of information it can’t provide?  A default router.  That’s right, DHCPv6 has no method of assigning a default router or gateway to an end node.  I’m sure that’s due to the designers of DHCPv6 knowing that SLAAC and router advertisements (RA) handle the network portion of things.  The two protocols need to work together to get hosts onto the Internet.  In 1995, that was some pretty advanced stuff.  Today, we think auto addressing and network prefix assignment is pretty passé.
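As a concrete illustration of how much SLAAC lets a host do on its own, here’s a sketch of the modified EUI-64 method a host can use to build its interface ID from its own MAC address (the prefix and MAC below are made-up example values):

```python
import ipaddress

def slaac_eui64(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive a modified EUI-64 address: flip the universal/local bit,
    wedge ff:fe into the middle of the MAC, append to a /64 prefix."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                                   # flip the U/L bit
    iid_bytes = bytes(octets[:3] + [0xFF, 0xFE] + octets[3:])
    iid = int.from_bytes(iid_bytes, "big")
    return ipaddress.IPv6Network(prefix)[iid]

# A host that hears this prefix in a router advertisement can number itself
# with no DHCP server anywhere in sight:
print(slaac_eui64("2001:db8::/64", "00:25:96:12:34:56"))
# 2001:db8::225:96ff:fe12:3456
```

No server, no lease database, no human.  The host does the whole thing from a router advertisement and its own hardware address.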

Instead of concentrating on solving the dilemma of increasing the adoption rate of IPv6 past the 1% mark where it currently resides, we’ve turned to the Anger and Bargaining phases of the Kübler-Ross model, otherwise known as the Five Stages of Grief.  The need for IPv6 can no longer be denied.  The reality of running out of IPv4 addresses is upon us.  Instead, we lash out against that which we don’t understand or that threatens us.  IPv6 isn’t ready for real networking.  There are security risks.  End-to-end communications aren’t important.  IPv6 is too expensive to maintain.  People aren’t smart enough to implement it.  Any of those sound familiar?  Maybe not those exact words, but I’ve heard arguments very similar to that leveled at IPv6 in just the two short years that I’ve been writing.  Think about how John Curran of ARIN must feel twenty years after he started working on the protocol.

Anger is something I can handle.  Getting yelled at or called expletives is all part of networking.  It’s the Bargaining phase that scares me.  Now, armed with a quiver of use cases that perhaps 5% of the population will ever take advantage of, we must delay adoption or move to something entirely different to support those use cases.  It’s the equivalent of being afraid to jump off a diving board because there is a possibility that the water will drain out of the pool on the way down.  The most diabolical is Carrier Grade NAT.  Let’s NAT our NATed networks to keep IPv4 around just a little longer.  It won’t cause that many problems, really.  After all, we’ve only got 65,536 ports that we can assign for any given PAT setup.  So if we take that limit and extend it yet another level, we have 65,536 PATed PAT translations that we can assign per CGN gateway.  That has real potential to break applications, and not just from an end-to-end connectivity point of view.  To prove my point, fire up any connection manager and go to http://maps.google.com.  See how many separate connection requests are spawned when those map tiles start loading.  Now, imagine what would happen if you could only load ten or fifteen of them.  There are going to be a lot of blank spots on that map.
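The port math behind that worry is worth spelling out.  This back-of-the-envelope sketch assumes a hypothetical CGN that carves each public IPv4 address into fixed 1,000-port blocks per subscriber; real deployments vary, but the shape of the problem is the same:

```python
TOTAL_PORTS = 65_536        # all TCP/UDP ports behind one public IPv4 address
RESERVED = 1_024            # skip the well-known ports
PORT_BLOCK = 1_000          # hypothetical fixed allocation per subscriber

usable = TOTAL_PORTS - RESERVED
subscribers_per_ip = usable // PORT_BLOCK
print(subscribers_per_ip)   # 64 subscribers crammed behind one public address

# A busy map page can spawn dozens of simultaneous connections...
CONNS_PER_PAGE = 60
pages_at_once = PORT_BLOCK // CONNS_PER_PAGE
print(pages_at_once)        # ~16 busy tabs and the subscriber's block is gone
```

Sixty-four households sharing one address, each a few browser tabs away from port exhaustion.  That’s the bargain CGN is offering.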

Now, for the fun part.  I’ve been accused of hating NAT.  Yes, it’s true.  I dislike any protocol that breaks basic connectivity and causes headaches for troubleshooting and end-to-end communications.  I have to live with it in IPv4.  I’d rather not see it carried forward.  That’s the feeling of many IPv6 evangelists.  If you think I dislike NAT, ask Owen DeLong his feelings on the subject.  However, to say that I dislike NAT for no good reason is silly.  People are angry at me for saying the emperor has no clothes.  Every time I discuss the lack of need for NAT66, the same argument gets thrown in my face.  Ivan Pepelnjak wrote an article about using network prefix translation (NPT) in a very specific case.  If you are multihoming your network to two different providers and not using BGP, then a case for NPT can be made.  It’s not the best solution, but it’s the easiest.  Much like Godwin’s Law, as the length of any NAT66 argument increases, the probability of someone bringing up Ivan’s article approaches one.

So, I’ve found a solution to the problem.  I’m going to fix this one scenario.  I’m going to dedicate my time to solving the multihoming without BGP issue.  When I do that, I expect choirs of angels to sing and a chariot pulled by unicorns to arrive at my home to escort me to my new position of Savior of IPv6 Adoption.  More realistically, I expect someone else to find a corner case rationale for why IPv6 isn’t the answer.  Of course, that’s just another attempt at bargaining.  By that point, I’ll have enough free time to solve the next issue.  Until then, I suggest the following course of action:

BYOD vs MDM – Who Pays The Bill?

Generic Mobile Devices

There’s a lot of talk around now about the trend of people bringing in their own laptops and tablets and other devices to access data and do their jobs.  While most of you (including me) call this Bring Your Own Device (BYoD), I’ve been hearing a lot of talk recently about a different aspect of controlling mobile devices.  Many of my customers have been asking me about Mobile Device Management (MDM).  MDM is getting mixed into a lot of conversations about controlling the BYoD explosion.

Mobile Device Management (MDM) refers to the process of controlling the capabilities of a device via a centralized control point, whether it be in the cloud or on premises.  MDM can restrict functions of a device, such as the camera or the ability to install applications.  It can also restrict which data can be downloaded and saved onto a device.  MDM also allows device managers to remotely lock the device in the event that it is lost or even remotely wipe the device should recovery be impossible.  Vendors are now pushing MDM as a big component of their mobility offerings.  Every week, it seems like some new vendor is pushing their MDM offering, whether it be a managed service software company, a wireless access point vendor, or even a dedicated MDM provider.  MDM is being pushed as the solution to all your mobility pain points.  There’s one issue though.

MDM is a very intrusive solution for mobile devices.  A good analogy might be the rules you have for your kids at home.  There are many things they are and aren’t allowed to do.  If they break the rules, there are consequences and possible punishments.  Your kids have to follow your rules if they live under your roof.  Such is the way for MDM as well.  Most MDM vendors that I’ve spoken to in the last three months intrude on the device to varying degrees.  One Windows Mobile provider started their deployment process with a total device wipe before loading an approved image onto the mobile device.  Others require you to trust specific certificates or enroll in special services.  If you run Apple’s iOS and designate the device as a managed device in iOS 6 to get access to certain new features like the global proxy setting, you’ll end up wiping the device before you can manage it.  Services like MobileIron can even give administrators the ability to read any information on the device, regardless of whether it’s personal or not.

That level of integration into a device is just too much for many people bringing their personal devices into a work environment.  They just want to be able to check their email from their phone.  They don’t want a sneaky admin reading their text messages or even wiping their entire phone via a misconfigured policy setting or a mistaken device loss.  Could you imagine losing all your pictures or your bank account info because Exchange had a hiccup?  And what about pushing MDM policies down to disable your camera due to company policy or disable your ability to make in-app purchases from your app repository of choice?  How about setting a global proxy server so you are restricted from browsing questionable material from the comfort of your own home?  If you’re like me, any of those choices makes you cringe a little.

That’s why BYoD policies are important.  They function more like having your neighbor’s children over at your house.  While you may have rules for your children, the neighbor’s kids are just visitors.  You can’t really punish them like you’d punish your own kids.  Instead, you make what rules you can to prevent them from doing things they aren’t supposed to do.  In many cases, you can send the neighbor’s kids to a room with your own kids to limit the damage they can cause.  This is very much in line with the way we treat devices with BYoD settings.  We try to authenticate users to ensure they are supposed to be accessing data on our network.  We place data behind access lists that try to determine location or device type.  We use the network as the tool to limit access to data as opposed to intruding on the device.

Both BYoD and MDM are needed in a corporate environment to some degree. The key to figuring out which needs to be applied where can be boiled down to one easy question:

Who paid for your device?

If the user bought their device, you need to be exploring BYoD policies as your primary method of securing the network and enabling access.  Unless you have a very clearly defined policy in place for device access, you can’t just assume you have the right to disable half a user’s device functions and then wipe it whenever you feel the need.  Instead, you need to focus your efforts on setting up rules that they should follow and containing their access to your data with access lists and user authentication.  On the other hand, if the company paid for your tablet, then MDM is the likely solution in mind.  Since the device belongs to the corporation, they are well within their rights to do what they would like with it.  Use it just like you would a corporate laptop or an issued Blackberry instead of a personal iPhone.  Don’t be shocked if it gets wiped or random features get turned off due to company policy.

Tom’s Take

When it’s time to decide how best to manage your devices, make sure to pull out all those old credit card receipts.  If you want to enable MDM on all your corporate phones and tablets, be sure to check out http://enterpriseios.com/ for a list of all the features supported in a given MDM provider for both iOS and other OSes like Android or Blackberry.  If you didn’t get the bill for that tablet, then you probably want to get in touch with your wireless or network vendor to start exploring the options available for things like 802.1X authentication or captive portal access.  In particular, I like some of the solutions available from Aerohive and Aruba’s ClearPass.  You’re going to want both MDM and BYoD policies in your environment to be sure your devices are as useful as possible while still being safe and protecting your network.  Just remember to back it all up with a very clear, detailed written use policy to ensure there aren’t any legal ramifications down the road from a wiped device or a lost phone causing a network penetration.  That’s one bill you can do without.

Presentation BINGO

At some point or another, we’ve all sat down and heard a presentation from a relatively new company.  Whether it be a startup, a stealth mode developer, or just someone trying to find their marketing legs, not everyone can afford to have a PR budget like Microsoft.  At some point, all of this started sounding the same to me.  With the help of my friend Joshua Williams (@JSW_EdTech), we’ve managed to figure out why this all seems to sound like we’ve heard the same story over and over.  It’s not quite like the presentation bingo game that you may be used to.  Instead of trying to cover the card, you just need to wait for the five magic phrases or indicators.

B – Business Founders – Odds are good one of the first things a really hot startup will tell you about is how awesome the founders are.  The most impressive companies you have never heard of seem to be run by really famous people that got really bored with what they were doing for their old job and ran out and started a new company.  These folks likely used to work for Cisco or Juniper or Microsoft or even EMC.  But now they’ve got something really awesome that they want to sell you or tell you.  You will probably see this by the second slide in the Company Overview.  And the odds are really good that if the founder is one of those Cult of Personality types, you’re going to hear their name brought up a few more times in the presentation.  Usually by first name, because that shows the close-knit group dynamic that they’ve got going on.

I – I’m Unique Because… – Let’s face it.  Do we really need another storage array or switch or single pane of glass management program?  Probably not.  However, that’s what’s been built to target a segment of the market that’s really untapped at this point.  The key isn’t making the product totally awesome in every way possible.  The real key is to tell you how it’s radically different than anything you’ve seen before.  Maybe it automatically configures switch ports when load characteristics increase exponentially around holiday shopping traffic.  Maybe it can do hitless snapshots while the array is online and rebuilding.  Maybe the interface has unicorns all over the login page.  The presenter is going to hit you over the head with the fact that they are different than everyone else.  That’s why they’re going to be successful.  Never mind that the login process takes five minutes and the documentation looks like it was written by a classroom full of first graders. When a big publication does a story on us, we have something different to draw everyone in.

N – Neato Tagline – Everyone has to have a tagline.  It’s the stinger that you take away and put in the back of your mind until you’ve completely forgotten about the presentation.  Then, one morning when you’re having breakfast, the tagline comes back to you out of nowhere and you suddenly realize that this is the thing you need to fix the thing that doesn’t work!  Never mind that you can’t remember what they did or how much it costs.  That tagline was awesome!  It probably rhymes or is a pun on the state of the industry.  Maybe it’s something the founders are fond of saying at the end of every meeting to remind people what their goals are.  Chances are it’s so cool that it will generate a few hundred thousand sales.  Then the company will hire a professional marketing firm and they’ll do market research to find a tagline that resonates with a key demographic and everything will change and there’ll be glossy marketing slicks to go with everything.  And when that fails eventually, they’ll go back to using a modified version of the old tagline to remind everyone how they’re getting back to the core of what makes them great.

G – Gartner – You knew this one was coming.  I’m picking on Gartner here because the name fits my theme, but you know that IDC and Forrester and Tolly and others are going to come up at some point.  Despite the fact that you’ve likely never heard of them, you’re going to see that the analysts know all about this company and will have already pigeonholed them into some polygon or ranked them among the best in some esoteric category that doesn’t matter to 90% of the buying population.  It’s like being in a bank.  Everyone’s a vice president…of something.  A friend of mine was VP of communications for a bank.  His department had no employees besides himself.  What’s the point of being number one if there’s no number two or three or four?  I’m pretty sure you know how I feel about analyst firms in general by now.  Just know that the presenters are all hot to tell you about how other people tell the world that they’re awesome.  And be sure to take that information with the prescribed grain of salt.

O – Our Customers Include…(NASCAR Slide) – One of my personal favorites.  Never mind that the presenter is telling you how awesome their company/widget/idea is.  Take it from the list of companies that I’m about to show you on one (or many more) slides.  But I’m going to be clever and just show you logos, since you obviously might get FedEx confused with FedEx Cleaners in Cleveland or something.  These slides are usually a jumble of graphics that look like someone has vomited a stream of GIFs and JPEGs onto a slide.  In many ways it resembles the side of a NASCAR vehicle or jumpsuit.  In fact, all it really boils down to is an attempt to sway your opinion by saying, “Hey!  These successful people use our stuff!  You should too!”  It’s as ridiculous as McDonalds putting the logo of every company in the world on their marketing material because the employees of the company eat there on occasion.  Rather than filling your presentation with slide after slide of blather and graphics, include a testimonial from a specific company.  Or better yet, have a representative of that company come tell me how awesome your stuff is.

After you get all five of these in your presentation, you can proudly jump up and shout “BINGO!!!” and then leave.  You don’t need to know any more about the company from this point forward.  Who cares what they make?  Do you really want to know how they handle upgrades or licensing or costs?  Probably not.  You’ve already seen the important stuff.  They have awesome founders that are doing something totally unique that no one else has thought of.  They spent all their time coming up with a catchy phrase to stick in your brain and did just enough to get noticed by a few companies looking for something different to try this time around.  That, in turn, got them noticed by professionals whose job it is to tell you who you should be using and reassuring you that the products you are using are pretty cool.  After all that, you just need to write the check for whatever it is that the company is trying to sell you.  I mean, with an amazing presentation like that you shouldn’t need any more details.

New Cisco Data Center Certifications

Last week, Cisco finally plugged a huge hole in their certification offerings.  Cisco has historically required its partner community to study for specific certifications related to technologies before offering them as specialized tracks for all candidates.  It was that way for voice, wireless, and even security.  However, until last week there was no offering for data center networking.  I think this is an area on which Cisco needs to concentrate, especially when you look at their results for the first quarter of their fiscal year that were just released.  Cisco grew its data center networking business by 61% and their UCS success has easily vaulted them into third place in the server race, though some may argue they are a tight contender for second.  What Cisco needs to solidify all that growth is a program that grows data center network engineers from the ground up.

Cisco’s previous path to creating a data center network engineer involved getting a basic CCNA with no specialization and then focusing on the Data Center Networking Infrastructure certifications.  After the networking is taken care of, there is a path for UCS design and support as well.  But that requires a prospective engineer to pick up NX-OS on the fly, not having started with it in the CCNA level.  Thankfully, Cisco has now addressed that little flaw in the program.

CCNA Data Center

Cisco now has a CCNA Data Center certification that consists of non-overlapping material.  640-911 Introducing Cisco Data Center Networking (DCICN) is square one for new data center hopefuls.  It tests the basics of networking much like the CCNA, but the focus is on NX-OS devices like the Nexus 7k and Nexus 5k.  It’s very much like the ICND1 exam in that it focuses on the basics and theory of general networking.  640-916 Introducing Cisco Data Center Technologies (DCICT) is the real meat of data center technology.  This is where the various fabric and SAN technologies are tested along with Unified Computing as well as virtualization technology like the Nexus 1000V.  Of these two tests, the DCICT is going to be the really hefty one for most candidates to chew on.  In fact, I’m almost sure that most CCNA-level engineers can go out and pass DCICN without any study beyond their CCNA knowledge.  The DCICT will likely require much more time with the study guides to get past.  Once you’ve gotten through both, you can proudly display your CCNA: Data Center title.

CCNP Data Center

Once you’ve attained your CCNA Data Center, it’s time to delve into the topics a bit deeper.  Cisco introduced the CCNP Data Center certification track to complement the entry-level offering in the CCNA DC.  Historically, this is where the various partner-focused Data Center specializations have focused.  With the CCNP Data Center, you have to start with the Implementing Cisco Data Center Unified Computing (DCUCI) and Implementing Cisco Data Center Unified Fabric (DCUFI) exams.  Right now, you can take either version 4 or version 5 of these exams, but the version 4 exams will start expiring next year.  Once you’ve passed the implementation exams, you have a choice to make.  You can go down the path of the data center designer with Designing Cisco Data Center Unified Computing (DCUCD) and Designing Cisco Data Center Unified Fabric (DCUFD).  Those two exams also have a choice between version 4 and version 5, with similar expiration dates in 2013 for the version 4 exams.  If you fancy yourself more of a hands-on troubleshooter, you can opt for the Troubleshooting Cisco Data Center Unified Computing (DCUCT) and Troubleshooting Cisco Data Center Unified Fabric (DCUFT) exams.  Note that these exams don’t have a version 4 option.  There seems to have been some confusion about which exams count for what.  You must take the Implementation exams.  After that, you can take either the Design exams or the Troubleshooting exams.

Tom’s Take

I’ve spent a lot of time in the last year talking about the CCIE Data Center.  One of the things that struck me about it was how focused it is, in its present state, on currently trained engineers.  Unless you work with Nexus and UCS every day, you won’t do well on the CCIE DC exam because there isn’t really a training program for it.  Now, with the additions of the CCNA DC and the CCNP DC, aspiring data center rock stars can get started on the road to the CCIE without needing to worry about learning IOS first.  I’m sure that Cisco will eventually retire the data center partner specializations and base the requirements for the Data Center Architecture specialization on the CCNA DC and CCNP DC.  There’s no better time to jump out there and get started.  Just remember your jacket.

VMware Certification for Cisco People

During the November 14th vBrownBag, which is an excellent weekly webinar dedicated to many interesting virtualization topics, the question was raised on Twitter about mapping the VMware certification levels to their corresponding counterparts in Cisco certification.  That caught me a bit off guard at first because certification programs among the various vendors tend to be very insular and don’t compare well to other programs.  The Novell CNE isn’t the same animal as the JNCIE.  It’s not even in the same zoo.  Even so, the high-water mark for difficult certifications is the CCIE for most people, due to its longevity and reputation as a tough exam.  Some were wondering how it compared to the VCDX, VMware’s premier architect exam.  So I decided to take it upon myself to write up a little guide for those out there that may be Cisco certification junkies (like me) and are looking to see how their test taking skills might carry over into the nebulous world of vKernels and port groups.  Note that I’m going to focus on the data center virtualization track of the VMware certification program, as that’s the one I’ve had the most experience with and the other tracks are relatively new at this time.

VCP

The VMware Certified Professional (VCP) is most like the CCNA from Cisco.  It’s a foundational knowledge exam designed to test a candidate’s ability to understand and configure a VMware environment consisting of the ESXi hypervisor and vCenter management server.  The questions on the VCP tend to fall into the area of “Which button do you click?” and “What is the maximum number of x?” types of questions.  These are the things you will need to know when you find yourself staring at a vCenter window and you need to program a vKernel port or turn on LACP on a set of links.  Note that according to the VCP blueprint, there aren’t any of those nasty simulation questions on the VCP, unlike the CCNA.  That means you won’t have to worry about a busted Flash simulation that doesn’t support the question mark key or other crazy restrictions.  However, the VCP does have a prerequisite that I’m none too pleased about.  In order to obtain the VCP, you must attend a VMware-authorized training course.  There’s no getting around it.  Even if you take the exam and pass, you won’t get the credential until you’ve coughed up the $3000 US for the class.  That creates a ridiculous barrier to entry for many that are starting out in the virtualization industry.  It’s difficult in some cases for candidates to pony up the cost of the exam itself.  Asking them to sell a kidney in order to go to class is crazy.  For reference, that’s two CCIE lab fees.  Just for a class.  Yes, I know that existing VCPs can recertify on the new version without going to class.  But it’s a bit heavy handed to require new candidates to go to class, especially when the material that’s taught in class is readily available from work experience and the VMware website.

VCAP-DCA

The next tier of VMware certifications is the VMware Certified Advanced Professional (VCAP).  This is actually split into two different disciplines – Data Center Administration (DCA) and Data Center Design (DCD).  The VCAP-DCA is very similar to the CCIE.  Yes, I know that’s a pretty big leap from the CCNA-like VCP.  However, the structure of the exam is unlike anything but the CCIE in Ciscoland.  The VCAP-DCA is a 4-hour live practical exam.  You are configuring a set of 30-40 tasks on real servers.  You have access to the official documentation, although just like the CCIE you need to know your stuff and be able to do it quickly or you will run out of time.  Also, just like the CCIE, you are given constraints on some things, such as “Configure this task using the CLI, not the GUI.”  When you leave the secured testing facility, you won’t know your score for up to fifteen days until the exam is graded, likely by a combination of script and live person (just like the CCIE).  David M. Davis of Trainsignal is both a CCIE and a VCAP and has an excellent blog post about his VCAP experience.  He says that while the exam format of the VCAP is very similar to the CCIE, the exam contents themselves aren’t as tricky or complicated.  That makes sense when you think about the mid-range target for this exam.  This is for those people who are the best at administering VMware infrastructure.  They know more than the VCP blueprint and want to show that they are capable of troubleshooting all the wacky things that can happen to a virtual cluster.  Note that while there is a recommended training class available for the VCAP, it isn’t required to sit the test.  Also note that the VCAP is a restricted exam, meaning you must request authorization in order to sit it.  That makes sense when you consider that it’s a 4-hour test that can only be taken at a secured Pearson VUE testing center.

VCAP-DCD

The other VMware Certified Advanced Professional (VCAP) exam is the Data Center Design (DCD) exam.  This is where the line starts to blur between people that spend their time plugging away at configurations and people that spend their time in Visio putting data centers together.  Rather than focusing on purely practical tasks like the VCAP-DCA, the VCAP-DCD instead tests the candidate’s ability to design VMware-focused data centers based on a set of conditions.  The exam consists of a grouping of multiple choice, fill-in-the-blank, and in-exam design sessions.  The latter appears to have some Visio-like design components according to those that have taken the test.  This would put the exam firmly in the territory of the CCDP or even the CCDE.  The material on the DCD may be focused on design specifically, but the exam format seems to speak more to the kind of advanced questions you might see in the higher level Cisco design exams.  Just like the DCA, there are recommended courses for the DCD (like the VMware Design Workshop), but these are not requirements.  You will receive your score as soon as you leave, since there aren’t enough live configuration items on the exam to warrant a live person grading your exam.

VCDX

The current king of the mountain for VMware certifications is the VMware Certified Design Expert (VCDX).  This is VMware’s premier architecture certification.  It’s also one of the most rigorous.  A lot of people compare this to the CCIE as the showcase cert for a given industry, but based on what I’ve seen the two certifications only mirror each other in number of attempts per candidate.  The VCDX is actually more akin to the Cisco Certified Architect (CCAr) or Microsoft Certified Master certification.  That’s because rather than have a lab of gear to configure, you have to create a total solution around a given problem and demonstrate your knowledge to a council of people live and in person.  It’s not inexpensive, either in terms of time or cost.  You have to pay a $300 fee just to have your application submitted.  This is pretty similar to the CCIE written exam.  However, even if you submit the proposal, there’s no guarantee you’ll make it to the defense.  Your application has to be scrutinized and there has to be a reasonable chance of you defending it.  If your submission isn’t up to snuff, you get recycled to the back of the pile with a pat on the head and a “try again later” note.  If you do make the cut, you have to fly out to a pre-determined location to defend.  Unlike Cisco’s policy of having a lab in many different locations all over the world, the defense locations tend to move around.  You may defend at VMworld in San Francisco and have to try again in Brussels or even Tokyo.  It all really depends on timing.  Once you get in the room for your defense, you have to present your proposal to the council as well as field questions about it.  You’ll probably have to end up whiteboarding at some point to prove you know what you’re talking about.  And this council doesn’t accept simple answers.  If they ask you why you did something, you’d better have a good answer.  And “Because it’s best practice” doesn’t cut it either.
You need to show an in-depth knowledge of all facets of not only the VMware pieces of the solution, but third party pieces as well.  You need to think about all the things that you would put into a successful implementation, from environmental impacts to fault tolerance. Implementation plans and training schedules could also come up.  The idea is that you are working your way through a complete solution that shows you are a true architect, not just a mouse-clicker in the trenches.  That’s why I tend to look at the VCDX as above the CCIE.  It’s more about strategic thinking instead of brilliant tactical maneuvers.  Read up on my CCAr post from earlier this year to get an idea of what Cisco’s looking for in their architects.  That’s what VMware is looking for too.


That’s VMware certification in a nutshell.  It doesn’t map one-to-one to the existing Cisco certification lineup, but I would argue that’s due more to the VMware emphasis on practical experience versus book learning.  Even the VCAP-DCD, which would appear to be a best practices exam, has a component of live drag-and-drop design in a simlet.  I would argue that if Cisco had to do it all over again, their certification program would look a lot like the VMware version.  I talked earlier this year about wanting to do the VCAP in some form this year.  I don’t think I’m going to get there.  But knowing what I know now about the program and where I need to focus my studies based on what I’m doing today, I think that the VCAP is a very realistic goal for 2013.  The VCDX may be a bit out of my league for the time being, but who knows?  I said the same thing about the CCIE many years ago.

Cisco To Buy Meraki?

If you’re in the tech industry, it never seems like there’s any downtime. That was the case today all thanks to my friend Greg Ferro (@etherealmind). I was having breakfast when this suddenly scrolled up on my Twitter feed:

https://twitter.com/etherealmind/status/270138919234977792

After I finished spitting out my coffee, I started searching for confirmation or indication to the contrary. Stephen Foskett (@SFoskett) provided it a few minutes later by finding the following link:

http://blogs.cisco.com/news/cisco-announces-intent-to-acquire-meraki/

EDIT: As noted in the comments below, Brandon Bennett (@brandonrbennett) found a copy of the page in Google’s Webcache. The company in the linked page says “Madras”, but the rest of the info is all about Meraki. I’m thinking Madras is just a placeholder.

For the moment, I’m going to assume that this is a legitimate link that is really going to point to something soon. I’m not going to assume Cisco makes a habit of creating “Cisco announces intent to acquire X Company” pages just in case, like this famous Dana Carvey SNL video. In that case, the biggest question now becomes…

Why Meraki?

I’ll admit, I was shaking my head for a bit on this one. Cisco doesn’t buy companies because of hardware technology. They’ve got R&D labs that can replicate pretty much anything under the sun given enough time. Cisco instead usually purchases for innovative software platforms. They originally bought Airespace for the controller architecture and management software that eventually became WCS. The silicon isn’t as important, since Cisco makes their own.

Meraki doesn’t really make anything innovative from a hardware front. Their APs use reference architecture. Their switch and firewall offerings are also pretty standard fare with basic 10/100/1000 connectivity and are likely based on Broadcom reference designs as well. What exactly draws in a large buyer like Cisco? What is unique among all those products?

Cisco’s Got Its Head In The Clouds

The single thing that is similar across the whole Meraki line is the software. I talked a bit about it in my Wireless Field Day 2 post on Meraki. Their single management platform allows them to manage switches, firewalls, and wireless in one application. You can see all the critical information that your switches are pumping out and program them accordingly. The demo I saw at WFD2 isolated a hungry user who was downloading too much data, using a combination of user identification and an ACL pushed down to that user to limit their bandwidth for certain kinds of traffic without totally locking them out of the network. That’s the kind of thing that Cisco is looking for.

With the announcement of onePK, Cisco really wants to show off what they can do when they start plugging APIs into their switches and routers. But simply opening an API doesn’t do anything. You’ve got to have some kind of software program to collect data from the API and then push instructions back down to it to accomplish a goal. And if you can decentralize that control to somewhere in the cloud, you’ve got a recipe for the marketing people to salivate over. For now, I thought that would be some kind of application borne out of the Cisco Prime family.
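To make that collect-and-push loop concrete, here’s a minimal sketch of the pattern. Everything in it is hypothetical: the `Device` class merely stands in for whatever a programmable-switch API like onePK might expose, and none of the method names here are real Cisco APIs.

```python
class Device:
    """Toy stand-in for a programmable switch exposing an API.

    In a real deployment these calls would go over the wire to the
    device's API; here we keep everything in memory for illustration.
    """
    def __init__(self):
        self.user_bytes = {}   # per-user byte counters, as a switch might report
        self.acls = []         # ACLs that have been pushed down to the device

    def read_counters(self):
        """Collect telemetry from the device (the 'pull' half of the loop)."""
        return dict(self.user_bytes)

    def push_acl(self, acl):
        """Program the device with a new policy (the 'push' half of the loop)."""
        self.acls.append(acl)


def enforce_quota(device, limit_bytes):
    """Read counters, find users over quota, and push a rate-limiting
    ACL for each one -- without cutting anyone off entirely."""
    offenders = [user for user, used in device.read_counters().items()
                 if used > limit_bytes]
    for user in offenders:
        device.push_acl({"match": user, "action": "rate-limit"})
    return offenders


sw = Device()
sw.user_bytes = {"alice": 10_000, "bob": 900_000_000}
over = enforce_quota(sw, limit_bytes=100_000_000)
print(over)       # only the user over the quota gets rate-limited
print(sw.acls)
```

The key design point is the feedback cycle: the controller only knows what to push because it first pulled data out, which is exactly why an open API alone accomplishes nothing without software sitting on top of it.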

If the Meraki acquisition comes to fruition, Meraki’s platform will likely be rebranded as a member of the Cisco Prime family and used for this purpose. It will likely be positioned initially towards the SMB and medium enterprise customers. In fact, I’ve got three or four use cases for this management software on Cisco hardware today with my customers. This would do a great job of replacing some of the terrible management platforms I’ve seen in the past, like Cisco Configuration Assistant (CCA) and the unmentioned product Cisco was pitching as a hands-off way to manage sub 50-node networks. By allowing the Meraki management software to capture data from Cisco devices, you can have a proven portal to manage your switches and APs. Add in the ability to manage other SMB devices, such as a UC 500 or a small 800-series router, and you’ve got a smooth package you can sell to your customers for a yearly fee. Ah ha! Recurring, cloud-based income! That’s just icing on the cake.

EDIT: 6:48 CST – Confirmed by a Cisco press release and as well by Techcrunch and CRN.


Tom’s Take

Ruckus just had their IPO. It was time for a shake-up in the upstart wireless market. Meraki was the target that most people had in mind. I’d been asked by several traditional networking vendors recently who I thought was going to be the next wireless company to be acquired, and every time my money landed on Meraki. They have a good software platform that helps them manage inexpensive devices. All their engineering goes into the software. By moving away from pure wireless products, they’ve raised their profile with their competitors. I never seriously expected Meraki to dethrone Cisco or Brocade with their switch offerings. Instead, I saw the Meraki switches and firewalls as an add-on offering to complement their wireless deployments. You could have a whole small office running Meraki wireless, wired, and security deployments. Getting the ability to manage all those devices easily from one web-based application must have appealed to someone at Cisco M&A. I remember from my last visit to the Meraki offices that their name is an untranslatable word from Greek that means “to do something with intense passion.” It also can mean “to have a place at the table.” It does appear that Meraki found a place at a very big table indeed.

My First VMUG

If you’re a person that is using VMware or interested in starting, you should be a member of the VMware User Group (VMUG).  This organization is focused on providing a local group that talks about all manner of virtualization-related topics.  It can be a learning resource for you to pick up new techniques or technologies.  It can also serve as a sounding board for those that want to discuss in-depth design challenges or project ideas.  The various regional VMUGs have quite a following, with many quarterly meetings encompassing a full day of breakout sessions and keynote addresses.

I signed up for the Oklahoma City VMUG about six months ago shortly after confirmation that I had been selected as a vExpert for 2012.  I wanted to gauge interest in VMware locally and hopefully get some ideas about where people were taking it outside my own experiences.  I work mostly with primary education institutions in my day job, and many of them are just now starting to realize the advantages of virtualizing their systems.  In fact, my previous virtualization primer was directed at this group of individuals.  However, I know there are many more organizations that are making effective use of this technology and I hoped that many of them would be involved in the VMUG.

What I found after I joined was a bit disjointed.  There didn’t seem to be a lot of activity on the discussion boards.  I couldn’t really find the leadership group that was in charge of meetings and such.  As it turned out, there hadn’t even been a VMUG meeting for almost two years.  There were a lot of people that wanted to be involved in some capacity, but no real direction.  Thankfully, that changed at VMworld this year thanks to Joey Ware.  Joey is an admin at the University of Oklahoma Health Sciences Center.  He jumped into the driver’s seat and started planning a new meeting to allow everyone to circle back up and catch up with what had been going on recently.

When I arrived at the meeting on Nov. 12th, I wasn’t really sure what to expect.  I know that organizations like the New England VMUG and the UK VMUG are rather large.  I didn’t know if the OKC VMUG was going to attract a crowd or a basketball team.  Imagine my surprise when there were upwards of 50 people in the room!  There were university administrators, energy company architects, and corporate developers.  There were VMware employees and even an EMC vSpecialist.  After a welcome-back introduction, we got a nice overview of the new things in vSphere 5.1.  Much of this was review for me, having been tuned in during the launch at VMworld this year and having read the great blog articles released thereafter (check out the massive archive here courtesy of Eric Siebert).  It was great to see so many people looking at moving to vSphere 5.1.  Of course, I couldn’t let the whole briefing go without injecting a bit of commentary about one of my least-liked features, the VMware Storage Appliance (VSA).  VSA, to me at least, is a half-baked idea designed to give cost-conscious customers access to advanced VMware features without buying a SAN or even taking the time to roll their own NAS from a Linux distro.  It really feels like something someone threw together right before a code freeze deadline and got onto the checklist of Cool Things You Can Do In vSphere.  If you are at all seriously considering using VSA, save your time and money and just buy a SAN.  Now, during the VMUG session, there were several people that mentioned that VSA does have a place, but purely as a last-ditch option.  I’d tend to agree with this assessment, but again, save your resources and get something useful.

We got a good discussion about vCenter Operations Manager (vCOps) from Sean O’Dell (@CloudyChance).  VMware is really pushing vCOps in 5.1 as a way to increase your productivity and reduce the chance for human error in your configuration.  To sweeten the deal, they have made the Foundation edition free in vSphere 5.1.  The Foundation edition helps you get started with some of the alert capabilities and health monitoring pieces that many admins would find useful.  Once you find that you like what vCOps is telling you and you want to start using the more advanced features to manage your environment, you’re ready to move up to the Standard edition, which costs around $125/VM in packs of 25.  If you’re managing that many VMs today without some kind of automation, you should really look at investing in vCOps.  I promise that it’s going to end up saving you more than 25 hours worth of work over the course of a year, which means it will more than pay for itself in the long run.
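As a rough sanity check on that payback claim (the $125/hour admin rate below is my own assumption for illustration, not a figure from VMware):

```python
# Back-of-the-envelope math for a single 25-VM pack of vCOps Standard.
vms_per_pack = 25
cost_per_vm = 125                         # USD, Standard edition list price
pack_cost = vms_per_pack * cost_per_vm    # cost of one 25-VM pack

hours_saved_per_year = 25                 # the savings estimate from the post
admin_hourly_rate = 125                   # assumed fully-loaded cost of an admin hour
labor_saved = hours_saved_per_year * admin_hourly_rate

# At this assumed rate, 25 hours saved covers the pack cost exactly;
# every hour beyond that is net savings.
print(pack_cost, labor_saved)
```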


Tom’s Take

My first VMUG was well worth it.  I was really happy that there were that many people in my area that want to learn more about VMware and want to talk to people that work with it.  Just when I think that I’m the only one trying to do awesome things with virtualization, my peers go out and show me that I don’t really live in a vacuum.  I really hope that Joey can keep the OKC VMUG going far into the future and keep spreading the word about virtualization to anyone that will listen.  Who knows?  Maybe I’ll get brave enough to give a presentation sometime soon.

If you are interested in joining your local VMUG, head over to http://www.vmug.com/l/pw/rs and sign up.  It’s totally free and open to anyone.  For those reading my post that are in the Oklahoma City area, the link to the OKC VMUG workspace is here.  We’re going to try to have quarterly meetings, so I look forward to seeing more new faces after the first of the year.

My Virtualization Primer

When I gave my cloud presentation earlier this year, I did indeed have about 10% of my audience walk out on my presentation by the end. I couldn’t really figure out why, either. I thought that an overview of the cloud was a great topic to bring up among people that might not otherwise know much about it. Through repeated viewings of my presentation, I think I realized where I lost most everyone. I should have stopped after my cloud section and spent the rest of the time clarifying everything. Instead, I barrelled through the next section on virtualization with wild abandon, as if I was giving this presentation to a group of people that were already doing it. In hindsight, I should have split the two and focused on presenting virtualization in its own session.

When I got the chance to present again at the fall edition of this conference, I jumped at the chance. Here was my opportunity to erase my mistake and spend more time on the “how” of things. Coupled with my selection as a vExpert, I figured it was about time for me to evangelize all the great things about virtualization. If you are at all familiar with virtualization, this is going to be a pretty boring presentation to watch. Here’s a link to my slide deck (PDF Warning):

Here’s the video to go along with it:

Not my worst presentation. I felt it came off as more of a conversation this time instead of a lecture. We did have some good discussion before the video started rolling that I wish I had captured. One of the things that really took me by surprise was the lack of questions. I don’t know if that’s because people are just being generally polite or if they’re worried about the quality of their questions. I’m used to being in presentations at Tech Field Day where the delegates aren’t afraid to voice their opinions about things. I’m beginning to wonder if that is the exception to the rule. Even at other presentations that I’ve been to locally, the audience seems to be on the quiet side for the most part. I’ve even considered doing a TFD-style presentation of about two or three slides, with the rest becoming one big discussion. I know I’d get a lot out of that, but I’m not sure my audience would appreciate it as much.

I’ve also noticed that I need to start being careful when I’m in other presentations. In one that I attended two days after this video was made, I had to strongly resist the urge to correct a presenter on something. An audience member asked a question about BYOD security posture and classification, and the answer they received wasn’t the one I would have given. I decided that discretion was the better part of valor and kept my mouth shut. What about you? If the presenter is saying something totally wrong or has missed the point entirely, would you say something?

Tom’s Take

In the end, most of it comes down to practice. When you assemble your slide deck and practice it a couple of times, you should feel good about the material. Don’t be one of those presenters that gets caught off guard by your own slide transitions. Don’t laugh, it happened in a different presentation. For me, the key going forward is going to be to reduce the slides and spend more time on the conversation. I’ve already decided that my content for 2013 is going to focus on IPv6. People have been coming to me asking about my original IPv6 presentation from 2011, and due to the exhaustion of IPv4 address space at RIPE and the dwindling pool at ARIN, I think it’s time to revisit that one with a focus on real-world experience. That does mean that I’m going to have a lot on my plate in the next few months, but when I’m done I’m going to have a lot of good anecdotes to tell.