New Wrinkles in the Fabric – Cisco Nexus Updates

There’s no denying that The Cloud is an omnipresent fixture in our modern technological lives.  If we aren’t already talking about moving things there, we’re wondering why it’s crashed.  I don’t have any answers to these kinds of questions, but thankfully the people at Cisco have been trying to find them.  They let me join in on a briefing about the announcements made today regarding some new additions to their data center switching portfolio, more commonly known by the Nexus moniker.

Nexus 6000

The first of the announcements is around a new switch family, the Nexus 6000.  The 6000 is more akin to the 5000 series than the 7000, containing a set of fixed-configuration switches with some modularity.  The Nexus 6001 is the true fixed-config member of the lot.  It’s a 1U 48-port 10GbE switch with 4 40GbE uplinks.  If that’s not enough to get your engines revving, you can look at its bigger brother, the Nexus 6004.  This bad boy is a 4U switch with a fixed config of 48 40GbE ports and 4 expansion modules that can double the total count to 96 40GbE ports.  That’s a lot of packets flying across the wire.  According to Cisco, those packets can fly with 1 microsecond of port-to-port latency.  The Nexus 6000 is also a Fibre Channel over Ethernet (FCoE) switch, as all Nexus switches are; this one is 40GbE-capable.  However, since there are no 40GbE FCoE targets available right now, it’s going to be on an island until those get developed.  A bit of future proofing, if you will.  The Nexus 6000 also supports FabricPath, Cisco’s TRILL-based fabric technology, along with a large number of multicast entries in the forwarding table.  This is no doubt to support VXLAN and OTV in the immediate future for layer 2 data center interconnect.
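
Just how many packets is “a lot”?  Here’s a quick back-of-the-envelope sketch (Python, purely illustrative) using the announced port counts and standard Ethernet framing overhead:

```python
# Back-of-the-envelope math for the Nexus 6004 numbers above.  The port
# count and link speed come from Cisco's announcement; the 84 bytes is
# standard Ethernet overhead for a minimum-size frame (64B frame + 8B
# preamble + 12B inter-frame gap), so this is the line-rate worst case.

PORTS = 96                 # fully expanded Nexus 6004
LINK_SPEED_BPS = 40e9      # 40GbE
MIN_FRAME_WIRE_BYTES = 84  # smallest frame as it appears on the wire

aggregate_bps = PORTS * LINK_SPEED_BPS
pps_per_port = LINK_SPEED_BPS / (MIN_FRAME_WIRE_BYTES * 8)

print(f"Aggregate bandwidth: {aggregate_bps / 1e12:.2f} Tbps")
print(f"Per 40GbE port:      {pps_per_port / 1e6:.1f} Mpps at 64-byte frames")
print(f"Across all ports:    {PORTS * pps_per_port / 1e9:.2f} Gpps")
```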

The Nexus line also gets a few little added extras.  There is going to be a new FEX, the 2248PQ, that features 10GbE downlink ports and 40GbE uplink ports.  There’s also going to be a 40GbE expansion module for the 5500 soon, so your DC backbone should be able to run at 40GbE with a little investment.  Also of interest is the new service module for the Nexus 7000.  That’s right, a real service module.  The NAM-NX1 is a Network Analysis Module (NAM) for the Nexus line of switches.  This will allow spanned traffic to be pumped through for analysis of traffic composition and characteristics without taking a huge hit to performance.  We’ve all known that the 7000 was going to be getting service modules for a while.  This is the first of many to roll off the line.  In keeping with Cisco’s new software strategy, the NAM also has a virtual cousin, not surprisingly named the vNAM.  This version lives entirely in software and is designed to serve the same function that its hardware cousin does, only in the land of virtual network switches.  Now that the Nexus line has service modules, it kind of makes you wonder what the Catalyst 6500 has all to itself.  We know that the Cat6k is going to be supported in the near term, but is it going to be used for campus aggregation or core?  Maybe as a service module platform until the SMs can be ported to the Nexus?  Or maybe, with the announcement of FabricPath support for the Cat6k, this venerable switch will serve as a campus/DC demarcation point?  At this point the future of Cisco’s franchise switch is really anyone’s guess.

Nexus 1000v InterCloud

The next major announcement from Cisco is the Nexus 1000v InterCloud.  This is very similar to what VMware is doing with their stretched data center concept in vSphere 5.1.  The 1000v InterCloud (1kvIC) builds a secure layer 2 GRE tunnel between your private cloud and a provider’s public cloud.  You can then use this tunnel to migrate workloads back and forth between public and private server space.  This opens up a whole new area of interesting possibilities, not the least of which involves the Cloud Services Router (CSR).  When I first heard about the CSR last year at Cisco Live, I thought it was a neat idea with some shortcomings, the most worrisome being that it had to be deployed somewhere it could see all of your traffic.  Now, with the 1kvIC, you can build a tunnel between yourself and a provider and use the CSR to route traffic to the most efficient or cost effective location.  It’s also a very compelling argument for disaster recovery and business continuity applications.  If you’ve got a category 4 hurricane bearing down on your data center, the ability to flip a switch and cold migrate all your workloads to a safe, secure vault across the country is a big sigh of relief.
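
One practical wrinkle worth noting: stretching layer 2 across a GRE tunnel eats into your MTU.  Cisco didn’t detail the exact encapsulation in the briefing, so treat this as a rough sketch that assumes a plain outer IPv4 + GRE wrapper around full inner Ethernet frames, before any overhead from the “secure” part of the tunnel:

```python
# Rough MTU math for a layer 2 GRE tunnel.  The header sizes below are
# the textbook minimums; the actual 1kvIC encapsulation (and any crypto
# on top of it) wasn't spelled out in the briefing, so this is only an
# illustration of why stretched layer 2 designs lean on jumbo frames.

OUTER_IPV4 = 20  # outer IP header, bytes
GRE = 4          # base GRE header, no optional fields
INNER_ETH = 14   # inner Ethernet header carried inside the tunnel

path_mtu = 1500  # typical WAN path MTU between the two clouds
overhead = OUTER_IPV4 + GRE + INNER_ETH

print(f"Encapsulation overhead: {overhead} bytes")
print(f"Inner IP MTU remaining: {path_mtu - overhead} bytes")
```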

The 1kvIC also has its own management console, the vNMC.  Yes, I know there’s already a vNMC available from Cisco.  The 1kvIC version is a bit special, though.  It not only gives you control over your side of the interconnect, but it also integrates with the provider’s management console.  This gives you much more visibility into what’s going on inside the provider instances than we get today from simple dashboards or status screens on public web pages.  This is a great help when you think about the kinds of things you would be doing with intercloud mobility.  You don’t want to send your workloads to a provider whose engineers have just started an upgrade on their core switches on a Friday night.  When it comes to the cloud, visibility is viability.

CiscoONE

In case you haven’t heard, Cisco wants to become a software company.  Not a bad idea when hardware is becoming a commodity and software is the home of the high margins.  Most of the development that Cisco has been doing on the software front comes from the Open Network Environment (ONE) initiative.  In today’s announcement, CiscoONE will now be the home for an OpenFlow controller.  In this first release, Cisco will be supporting OpenFlow and their own OnePK API extensions on the southbound side.  On the northbound side of things, the CiscoONE Controller will expose REST and Java hooks to allow interaction with flows passing through the controller.  While that’s all well and good for most of the enterprise devs out there, I know a lot of homegrown network admins that hack together their own scripts in Perl and Python.  For those of you that want support for your particular flavor of language built into CiscoONE, I highly recommend getting to their website and telling them what you want.  They are looking at adding additional hooks as time goes on, so you can get in on the ground floor now.
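
To give you an idea of what that homegrown scripting could look like against a REST northbound interface, here’s a minimal Python sketch that polls a controller for its flow entries.  The hostname, port, and endpoint path are hypothetical placeholders of my own invention, since Cisco hadn’t published the actual REST schema at the time of the briefing; the pattern is the point:

```python
# A sketch of polling flows from an OpenFlow controller's REST northbound
# API.  The controller URL and /api/flows path are hypothetical -- check
# the real documentation once Cisco publishes it.

import json
import urllib.request

CONTROLLER = "https://one-controller.example.com:8443"  # hypothetical host
FLOWS_PATH = "/api/flows"                               # hypothetical path

def get_flows(switch_id):
    """Fetch the flow entries the controller holds for one switch."""
    url = f"{CONTROLLER}{FLOWS_PATH}?switch={switch_id}"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    for flow in get_flows("n3k-core-01"):
        print(flow.get("match"), "->", flow.get("actions"))
```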

Cisco is also announcing OnePK support for the ISR G2 router platform and the ASR 1000 platform.  There will be OpenFlow support on the Nexus 3000 sometime in the near future, along with support in the Nexus 1000v for Microsoft Hyper-V and KVM.  And somewhere down the line, Cisco will have a VXLAN gateway for all the magical unicorn packet goodness across data centers that stretch via non-1kvIC links.
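
Since VXLAN keeps coming up, it’s worth a reminder of why everyone wants it: the 24-bit VXLAN Network Identifier (VNI) allows roughly 16 million segments versus the 4094 we get from traditional VLANs.  Here’s a tiny sketch that packs the 8-byte VXLAN header per the IETF draft format, just to make the math concrete:

```python
# The VXLAN header is 8 bytes: a flags byte (0x08 = VNI is valid), three
# reserved bytes, a 24-bit VNI, and one more reserved byte.

import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header for a given segment ID."""
    assert 0 <= vni < 2**24, "the VNI field is only 24 bits wide"
    return struct.pack("!II", 0x08 << 24, vni << 8)

print(f"Segments available: about {2**24:,} VNIs vs 4094 VLANs")
print(vxlan_header(5000).hex())  # => 0800000000138800
```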


Tom’s Take

The data center is where the dollars are right now.  I’ve heard people complain that Cisco is leaving the enterprise campus behind as they charge forward into the raised floor garden of the data center.  But the DC teams are the people driving the data that produces the profits that buy more equipment.  Whether it be massive Hadoop clusters or massive private cloud projects, the accounting department has given the DC team a blank checkbook today.  Cisco is doing its best to drive some of those dollars their way by providing new and improved offerings like the Nexus 6000.  For those that don’t have a huge investment in the Nexus 7000, the 6000 makes a lot of sense as either a high speed core aggregation switch or an end-of-row solution for a herd of FEXen.  The Nexus 1000v InterCloud is competing against VMware’s stretched data center concept in much the same way that the 1000v itself competes against the standard VMware vSwitch.  With Nicira in the driver’s seat of VMware’s networking from here on out, I wouldn’t be shocked to see more solutions from Cisco that mirror or augment VMware solutions as a way to show VMware that Cisco can come up with alternatives just as well as anyone else.

VMware Certification for Cisco People

During the November 14th vBrownBag, which is an excellent weekly webinar dedicated to many interesting virtualization topics, a question was raised on Twitter about mapping the VMware certification levels to their corresponding counterparts in Cisco certification.  That caught me a bit off guard at first, because certification programs among the various vendors tend to be very insular and don’t compare well to other programs.  The Novell CNE isn’t the same animal as the JNCIE.  It’s not even in the same zoo.  Still, the high-water mark for difficult certifications is the CCIE for most people, due to its longevity and reputation as a tough exam.  Some were wondering how it compared to the VCDX, VMware’s premier architect certification.  So I decided to take it upon myself to write up a little guide for those out there that may be Cisco certification junkies (like me) and are looking to see how their test taking skills might carry over into the nebulous world of vKernels and port groups.  Note that I’m going to focus on the data center virtualization track of the VMware certification program, as that’s the one I’ve had the most experience with and the other tracks are relatively new at this time.

VCP

The VMware Certified Professional (VCP) is most like the CCNA from Cisco.  It’s a foundational knowledge exam designed to test a candidate’s ability to understand and configure a VMware environment consisting of the ESXi hypervisor and vCenter management server.  The questions on the VCP tend to fall into the “Which button do you click?” and “What is the maximum number of x?” categories.  These are the things you will need to know when you find yourself staring at a vCenter window and you need to program a vKernel port or turn on LACP on a set of links.  Note that according to the VCP blueprint, there aren’t any of those nasty simulation questions on the VCP, unlike the CCNA.  That means you won’t have to worry about a busted Flash simulation that doesn’t support the question mark key or other crazy restrictions.  However, the VCP does have a prerequisite that I’m none too pleased about.  In order to obtain the VCP, you must attend a VMware-authorized training course.  There’s no getting around it.  Even if you take the exam and pass, you won’t get the credential until you’ve coughed up the $3,000 US for the class.  That creates a ridiculous barrier to entry for many that are starting out in the virtualization industry.  It’s difficult in some cases for candidates to pony up the cost of the exam itself.  Asking them to sell a kidney in order to go to class is crazy.  For reference, that’s two CCIE lab fees.  Just for a class.  Yes, I know that existing VCPs can recertify on the new version without going to class.  But it’s a bit heavy-handed to require new candidates to go to class, especially when the material that’s taught in class is readily available from work experience and the VMware website.

VCAP-DCA

The next tier of VMware certifications is the VMware Certified Advanced Professional (VCAP).  This is actually split into two different disciplines – Data Center Administration (DCA) and Data Center Design (DCD).  The VCAP-DCA is very similar to the CCIE.  Yes, I know that’s a pretty big leap from the CCNA-like VCP.  However, the structure of the exam is unlike anything but the CCIE in Ciscoland.  The VCAP-DCA is a 4-hour live practical exam.  You are configuring a set of 30-40 tasks on real servers.  You have access to the official documentation, although just like the CCIE you need to know your stuff and be able to do it quickly or you will run out of time.  Also, just like the CCIE, you are given constraints on some things, such as “Configure this task using the CLI, not the GUI.”  When you leave the secured testing facility, you won’t know your score for up to fifteen days while the exam is graded, likely by a combination of script and live person (just like the CCIE).  David M. Davis of TrainSignal is both a CCIE and a VCAP and has an excellent blog post about his VCAP experience.  He says that while the exam format of the VCAP is very similar to the CCIE, the exam contents themselves aren’t as tricky or complicated.  That makes sense when you think about the mid-range target for this exam.  This is for the people who are the best at administering VMware infrastructure.  They know more than the VCP blueprint and want to show that they are capable of troubleshooting all the wacky things that can happen to a virtual cluster.  Note that while there is a recommended training class available for the VCAP, it isn’t required to sit the test.  Also note that the VCAP is a restricted exam, meaning you must request authorization in order to sit it.  That makes sense when you consider that it’s a 4-hour test that can only be taken at a secured Pearson VUE testing center.

VCAP-DCD

The other VMware Certified Advanced Professional (VCAP) exam is the Data Center Design (DCD) exam.  This is where the line starts to blur between people that spend their time plugging away at configurations and people that spend their time in Visio putting data centers together.  Rather than focusing on purely practical tasks like the VCAP-DCA, the VCAP-DCD instead tests the candidate’s ability to design VMware-focused data centers based on a set of conditions.  The exam consists of a grouping of multiple choice, fill-in-the-blank, and in-exam design sessions.  The latter appears to have some Visio-like design components, according to those that have taken the test.  This would put the exam firmly in the territory of the CCDP or even the CCDE.  The material on the DCD may be focused on design specifically, but the exam format seems to speak more to the kind of advanced questions you might see in the higher level Cisco design exams.  Just like the DCA, there are recommended courses for the DCD (like the VMware Design Workshop), but these are not requirements.  You will receive your score as soon as you finish, since there aren’t enough live configuration items on the exam to warrant a live person grading it.

VCDX

The current king of the mountain for VMware certifications is the VMware Certified Design Expert (VCDX).  This is VMware’s premier architecture certification.  It’s also one of the most rigorous.  A lot of people compare this to the CCIE as the showcase cert for a given industry, but based on what I’ve seen the two certifications only mirror each other in number of attempts per candidate.  The VCDX is actually more akin to the Cisco Certified Architect (CCAr) or Microsoft Certified Master certification.  That’s because rather than having a lab of gear to configure, you have to create a total solution around a given problem and demonstrate your knowledge to a council of people live and in person.  It’s not inexpensive, either, in terms of time or money.  You have to pay a $300 fee just to have your application submitted.  This is pretty similar to the CCIE written exam.  However, even if you submit the proposal, there’s no guarantee you’ll make it to the defense.  Your application has to be scrutinized, and there has to be a reasonable chance of you defending it.  If your submission isn’t up to snuff, you get recycled to the back of the pile with a pat on the head and a “try again later” note.  If you do make the cut, you have to fly out to a pre-determined location to defend.  Unlike Cisco’s policy of having labs in many different locations all over the world, the defense locations tend to move around.  You may defend at VMworld in San Francisco and have to try again in Brussels or even Tokyo.  It all really depends on timing.  Once you get in the room for your defense, you have to present your proposal to the council as well as field questions about it.  You’ll probably end up whiteboarding at some point to prove you know what you’re talking about.  And this council doesn’t accept simple answers.  If they ask you why you did something, you’d better have a good answer.  “Because it’s best practice” doesn’t cut it, either.  You need to show an in-depth knowledge of all facets of not only the VMware pieces of the solution, but the third party pieces as well.  You need to think about all the things that go into a successful implementation, from environmental impacts to fault tolerance.  Implementation plans and training schedules could also come up.  The idea is that you are working your way through a complete solution that shows you are a true architect, not just a mouse-clicker in the trenches.  That’s why I tend to look at the VCDX as above the CCIE.  It’s more about strategic thinking than brilliant tactical maneuvers.  Read up on my CCAr post from earlier this year to get an idea of what Cisco’s looking for in their architects.  That’s what VMware is looking for too.


That’s VMware certification in a nutshell.  It doesn’t map one-to-one to the existing Cisco certification lineup, but I would argue that’s due more to the VMware emphasis on practical experience versus book learning.  Even the VCAP-DCD, which would appear to be a best practices exam, has a component of live drag-and-drop design in a simlet.  I would argue that if Cisco had to do it all over again, their certification program would look a lot like the VMware version.  I talked earlier this year about wanting to do the VCAP in some form before the year was out.  I don’t think I’m going to get there.  But knowing what I know now about the program, and where I need to focus my studies based on what I’m doing today, I think that the VCAP is a very realistic goal for 2013.  The VCDX may be a bit out of my league for the time being, but who knows?  I said the same thing about the CCIE many years ago.

My First VMUG

If you’re a person that is using VMware or interested in starting, you should be a member of the VMware User Group (VMUG).  This organization is focused on providing a local group that talks about all manner of virtualization-related topics.  It can be a learning resource for you to pick up new techniques or technologies.  It can also serve as a sounding board for those that want to discuss in-depth design challenges or project ideas.  The various regional VMUGs have quite a following, with many quarterly meetings encompassing a full day of breakout sessions and keynote addresses.

I signed up for the Oklahoma City VMUG about six months ago shortly after confirmation that I had been selected as vExpert for 2012.  I wanted to gauge interest in VMware locally and hopefully get some ideas about where people were taking it outside my own experiences.  I work mostly with primary education institutions in my day job, and many of them are just now starting to realize the advantages of virtualizing their systems.  In fact, my previous virtualization primer was directed at this group of individuals.  However, I know there are many more organizations that are making effective use of this technology and I hoped that many of them would be involved in the VMUG.

What I found after I joined was a bit disjointed.  There didn’t seem to be a lot of activity on the discussion boards.  I couldn’t really find the leadership group that was in charge of meetings and such.  As it turned out, there hadn’t even been a VMUG meeting for almost two years.  There were a lot of people that wanted to be involved in some capacity, but no real direction.  Thankfully, that changed at VMworld this year thanks to Joey Ware.  Joey is an admin at the University of Oklahoma Health Sciences Center.  He jumped into the driver’s seat and started planning a new meeting to allow everyone to circle back up and catch up with what had been going on recently.

When I arrived at the meeting on Nov. 12th, I wasn’t really sure what to expect.  I know that organizations like the New England VMUG and the UK VMUG are rather large.  I didn’t know if the OKC VMUG was going to attract a crowd or a basketball team.  Imagine my surprise when there were upwards of 50 people in the room!  There were university administrators, energy company architects, and corporate developers.  There were VMware employees and even an EMC vSpecialist.  After a welcome back introduction, we got a nice overview of the new things in vSphere 5.1.  Much of this was review for me, having been tuned in during the launch at VMworld this year and having read the great blog articles released thereafter (check out the massive archive here, courtesy of Eric Siebert).  It was great to see so many people looking at moving to vSphere 5.1.  Of course, I couldn’t let the whole briefing go by without injecting a bit of commentary about one of my least-liked features, the VMware Storage Appliance (VSA).  VSA, to me at least, is a half-baked idea designed to give cost conscious customers access to advanced VMware features without buying a SAN or even taking the time to roll their own NAS from a Linux distro.  It really feels like something someone threw together right before a code freeze deadline and got onto the checklist of Cool Things You Can Do In vSphere.  If you are at all seriously considering using VSA, save your time and money and just buy a SAN.  Now, during the VMUG session, there were several people that mentioned that VSA does have a place, but purely as a last ditch option.  I’d tend to agree with this assessment, but again: save your resources and get something useful.

We got a good discussion about vCenter Operations Manager (vCOps) from Sean O’Dell (@CloudyChance).  VMware is really pushing vCOps in 5.1 as a way to increase your productivity and reduce the chance for human error in your configuration.  They’re sweetening the deal by making the Foundation edition free in vSphere 5.1.  The Foundation edition helps you get started with some of the alert capabilities and health monitoring pieces that many admins would find useful.  Once you find that you like what vCOps is telling you and you want to start using the more advanced features to manage your environment, you’re ready to move up to the Standard edition, which costs around $125/VM in packs of 25.  If you’re managing that many VMs today without some kind of automation, you should really look at investing in vCOps.  I promise that it’s going to end up saving you more than 25 hours’ worth of work over the course of a year, which means it will more than pay for itself in the long run.
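
Here’s the back-of-the-napkin math behind that promise.  The per-VM price comes from the session; the loaded hourly cost of an admin is purely my own assumption, so plug in your own numbers:

```python
# Break-even math for a vCOps Standard 25-VM pack.  The $125/VM price in
# packs of 25 is from the VMUG session; the hourly admin cost is my own
# assumed figure for illustration only.

PACK_SIZE = 25
PRICE_PER_VM = 125   # USD, Standard edition
HOURLY_COST = 125    # USD/hour, assumed loaded cost of an admin

pack_cost = PACK_SIZE * PRICE_PER_VM
breakeven_hours = pack_cost / HOURLY_COST

print(f"One 25-VM pack: ${pack_cost:,}")
print(f"Break-even: {breakeven_hours:.0f} hours of admin time saved per year")
```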


Tom’s Take

My first VMUG was well worth it.  I was really happy that there were that many people in my area that want to learn more about VMware and want to talk to people that work with it.  Just when I think that I’m the only one trying to do awesome things with virtualization, my peers go out and show me that I don’t really live in a vacuum.  I really hope that Joey can keep the OKC VMUG going far into the future and keep spreading the word about virtualization to anyone that will listen.  Who knows?  Maybe I’ll get brave enough to give a presentation sometime soon.

If you are interested in joining your local VMUG, head over to http://www.vmug.com/l/pw/rs and sign up.  It’s totally free and open to anyone.  For those reading my post that are in the Oklahoma City area, the link to the OKC VMUG workspace is here.  We’re going to try to have quarterly meetings, so I look forward to seeing more new faces after the first of the year.

My Virtualization Primer

When I gave my cloud presentation earlier this year, I did indeed have about 10% of my audience walk out on my presentation by the end. I couldn’t really figure out why, either. I thought that an overview of the cloud was a great topic to bring up among people that might not otherwise know much about it. Through repeated viewings of my presentation, I think I’ve realized where I lost most everyone. I should have stopped after my cloud section and spent the rest of the time clarifying everything. Instead, I barrelled through the next section on virtualization with wild abandon, as if I was giving this presentation to a group of people that were already doing it. I should have split the two and focused on presenting virtualization in its own session.

When I got the chance to present again at the fall edition of this conference, I jumped at the chance. Here was my opportunity to erase my mistake and spend more time on the “how” of things. Coupled with my selection as a vExpert, I figured it was about time for me to evangelize all the great things about virtualization. If you are at all familiar with virtualization, this is going to be a pretty boring presentation to watch. Here’s a link to my slide deck (PDF Warning):

Here’s the video to go along with it:

Not my worst presentation. I felt it came off as more of a conversation this time instead of a lecture. We did have some good discussion before the video started rolling that I wish I had captured. One of the things that really took me by surprise was the lack of questions. I don’t know if that’s because people are just being generally polite or if they’re worried about the quality of their questions. I’m used to being in presentations at Tech Field Day where the delegates aren’t afraid to voice their opinions about things. I’m beginning to wonder if that is the exception to the rule. Even at other presentations that I’ve been to locally, the audience seems to be on the quiet side for the most part. I’ve even considered doing a TFD-style presentation of about two or three slides where the rest becomes a big discussion. I know I’d get a lot out of that, but I’m not sure my audience would appreciate it as much.

I’ve also noticed that I do need to start being careful when I’m in other presentations. In one that I attended two days after this video was made, I had to strongly resist the urge to correct a presenter on something. An audience member asked a question about BYOD security posture and classification and the answer that was received wasn’t what I would have wanted to get. I decided that discretion was the better part of valor and kept my mouth shut. What about you? If the presenter is saying something totally wrong or has missed the point entirely, would you say something?

Tom’s Take

In the end, most of it comes down to practice. When you assemble your slide deck and practice it a couple of times, you should feel good about the material. Don’t be one of those presenters that gets caught off guard by your own slide transitions. Don’t laugh, it happened in a different presentation. For me, the key going forward is going to be to reduce the slides and spend more time on the conversation. I’ve already decided that my content for 2013 is going to focus on IPv6. People have been coming to me asking about my original IPv6 presentation from 2011, and due to the final exhaustion of IPv4 from RIPE and ARIN, I think it’s time to revisit that one with a focus on real-world experience. That does mean that I’m going to have a lot on my plate in the next few months, but when I am done I’m going to have a lot of good anecdotes to tell.

SDN and the IT Toolbox

There’s been a *lot* of talk about software-defined networking (SDN) being the next great thing to change networking. Article after article has come out recently talking about how things like the VMware acquisition of Nicira are going to put network engineers out of work. To anyone that’s been around networking for a while, this isn’t much different than the talk that’s accompanied any one of a number of different technologies over the last decade.

I’m an IT nerd. I can work on a computer with my eyes closed. However, not everything in my house is a computer. Sometimes I have to work on other things, like hanging mini-blinds or fixing a closet door. For those cases, I have to rely on my toolbox. I’ve been building it up over the years to include all the things one might need to do odd jobs around the house. I have a hammer and a big set of screwdrivers. I have sockets and wrenches. I have drills and tape measures. The funny thing about these tools is the “new tool mentality”. Every time I get a new tool, I think of all the new things that I can do with it. When I first got my power drill, I was drilling holes in everything. I hung blinds with ease. I moved door knobs. I looked for anything and everything I could find to use my drill for. The problem with that mentality is that after a while, you find that your new tool can’t be used for every little job. I can’t drive a nail with a drill. I can’t measure a board with a drill. In fact, besides drilling holes and driving screws, drills aren’t good for a whole lot of work. With experience, you learn that a drill is a great tool for a narrow range of uses.

This same type of “new tool mentality” is pervasive in IT as well. Once we develop a new tool for a purpose, we tend to use that tool to solve almost every problem. In my time in IT, I have seen protocols being used to solve every imaginable problem. Remember ATM? How about LANE? If we can make everything ATM, we can solve every problem. How about QoS? I was told at the beginning of my networking career that QoS is the answer to every problem. You just have to know how to ask the right question. Even MPLS fell into that category at one point. MPLS-ing the entire world just makes it run better, right? Much like my drill analogy above, once the “newness” wore off of these protocols and solutions, we found out that they are really well suited to a much narrower purpose. MPLS and QoS tend to be used for the things that they are very good at doing, and maybe for a few corner cases outside of that focus. That’s why we still need to rely on many other protocols and technologies to have a complete toolbox.

SDN has had the “new tool mentality” for the past few months. There’s no denying at this point that it’s a disruptive technology and ripe to change the way that people like me look at networking. However, to say that it will eventually become the de facto standard for everything out there and the only way to accomplish networking in the next three years may be stretching things just a bit. I’m pretty sure that SDN is going to have a big impact on my work as an integrator. I know that many of the higher education institutions that I talk to regularly are not only looking at it, but in the case of things like Internet2, they’re required to have support for SDN (the OpenFlow flavor) in order to continue forward with high speed connections. I’ve purposely avoided launching myself into the SDN fray for the time being because I want to be sure I know what I’m talking about. There are quite a few people out there talking about SDN. Some know what they’re talking about. Others see it as a way to jump into the discussion with a loud voice just to be heard. The latter are usually the ones talking about SDN as a destructive force that will cause us all to be flipping burgers in two years. Rather than giving credence to their outlook on things, I would say to wait a bit. The new shininess of SDN will eventually give way to a more realistic way of looking at its application in the networking world.  Then, it will be the best tool for the jobs that it’s suited for.  Of course, by then we’ll have some other new tool to proclaim as the end-all, be-all of networking, but that’s just the way things are.

vRAM – Reversal of (costing us a) Fortune

A bombshell news item came across my feed in the last couple of days.  According to a source that gave information to CRN, VMware will be doing away with the vRAM entitlement licensing structure.  To say that the outcry of support for this rumored licensing change was uproarious would be the understatement of the year.  Ever since the changes in vSphere 5.0 last year, virtualization admins the world over have chafed at the prospect of having the amount of RAM installed in their systems artificially limited via a licensing structure.

On the surface, this made lots of sense.  VMware has always been licensed per processor socket.  Back in the old days, that model worked.  If you needed a larger, more powerful system, you naturally bought more processors.  With a lot more processors, VMware made a lot more money.  Then Intel went and started cramming more and more cores onto a processor die.  This was a great boon for the end user.  Now you could have two, four, or even eight cores in one socket.  Who needs more than two sockets?  Once the floodgates opened on the multi-core race, it became a huge competition to increase core density to keep up with Moore’s Law.  For companies like VMware, the multi-core arms race was a disaster.  If the most you are ever going to make from a server is two processor licenses no matter how many virtual machines get crammed into it, then you are royally screwed.  I’m sure the scurrying around VMware to find a new revenue source kicked into high gear once companies like Cisco started producing servers with lots of processor cores and more than enough horsepower to run a whole VM cluster.  That’s when VMware hit on a winner.  If processor cores are the big engine that drives the virtualization monster truck, then RAM is the gas in the gas tank.  Cisco and others loaded down those monster two-socket boxes with enough RAM to sink an aircraft carrier.  They had to in order to keep those processors humming along.  VMware stepped in and said, “We missed the boat on processor cores.  Let’s limit the amount of RAM to specific licenses.”  Their first attempt at vRAM was a huge headache.  The RAM entitlements were half of what they are now.  Only after much name calling and pleading on the part of the customer base did VMware double it all to the levels that we see today.

According to VMware, the vRAM entitlements didn’t affect the majority of their customers.  The ones that needed the additional RAM were already running the Enterprise or Enterprise Plus licenses.  However, what it did limit was growth.  Now, if a customer has been running an Enterprise Plus license for their two-socket machine and the time for an upgrade comes along, they won’t get to order all that extra RAM like Cisco or HP would want them to do.  Why bother ordering more than 192GB of RAM if I have to buy extra licenses just to use it?  The idea that I can just have those processor licenses floating around for use with other machines is just as silly in my mind.  If I bought one server with 256GB of RAM and needed 3 licenses to use it all, I’m likely going to buy the same server again.  Then I have 6 licenses for 4 processors.  Sure, I could buy another server if I wanted, but I’d have to load it with something like 80GB of RAM, unless I wanted to buy yet another license.  I’m left with lots of leftover licenses that I’m not going to utilize.  That makes the accounting department unhappy.  Telling the bean counters that you bought something but can’t utilize it all because of an artificial limitation makes them angry.  Overall, you have a decision that makes both engineering and management unhappy.
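
To lay that math out, here’s a quick sketch of the vSphere 5 licensing rule as I understand it: you need at least one license per socket, raised by however many licenses it takes to cover the RAM you want to use.  The 96GB figure is the post-doubling Enterprise Plus vRAM entitlement:

```python
# vRAM license math, assuming the 96GB-per-license Enterprise Plus
# entitlement (the doubled figure, not the original 48GB).  You need at
# least one license per socket, plus enough to cover the vRAM pool.

import math

ENTITLEMENT_GB = 96

def licenses_needed(sockets, ram_gb):
    """Licenses for one host: per-socket floor, raised by vRAM demand."""
    return max(sockets, math.ceil(ram_gb / ENTITLEMENT_GB))

one_host = licenses_needed(sockets=2, ram_gb=256)
print(f"2-socket, 256GB host: {one_host} licenses")  # 3, not 2

two_hosts = 2 * one_host
stranded = two_hosts * ENTITLEMENT_GB - 2 * 256
print(f"Two such hosts: {two_hosts} licenses for 4 sockets, "
      f"with {stranded}GB of entitlement stranded in the pool")
```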

If the rumor from CRN is true, this is a great thing for us all.  It means we can concentrate more on solutions and less on counting the number of processors, real or imagined.  In addition, the idea that VMware might be bundling other software, such as vCloud Director, is equally appealing.  Trying to convince my bean counters that I want to try this extra cool thing that doesn’t have any immediate impact but might save money down the road is a bit of a stretch.  Telling them it’s a part of the bundle we have to buy is easy.  Cisco has done this to great effect with Unified Workspace Licensing and Jabber for Everyone.  If it’s already a part of the bundle, I can use it and not have to worry about justifying it.  If VMware does the same thing for vCloud Director and other software, it should open doors to a lot more penetration of interesting software.  Given that VMware hasn’t outright said that this isn’t true, I’m willing to bet that the announcement will be met with even more fanfare from the regular trade press.  Besides, after the uproar of support for this decision, it’s going to be hard for VMware to back out now.  These kinds of things aren’t really “leaked” anymore.  I’d wager that this was done with the express permission of the VMware PR department as a way to gauge reaction before VMworld.  If the community weren’t so hot about it, the announcement would have been buried at the end of the show.  As it is, they could announce only this change at the keynote and the audience would give a standing ovation.


Tom’s Take

I hate vRAM.  I think it’s a very backwards idea designed to try and put the genie back in the bottle after VMware missed the boat on licensing processor cores instead of sockets.  After spending more than a year listening to the constant complaining about this licensing structure, VMware is doing the right thing by reversing course and giving us back our RAM.  Solution bundles are the way to go with a platform like the one that VMware is building.  By giving us access to software we wouldn’t otherwise get to run, they let us build bigger and better virtualized clusters.  When we’re dependent on all this technology working in concert, that’s when VMware wins: when support contracts and recurring revenue pour into their coffers because we can’t live without vCloud Director or vFabric Manager.  Making us pay a tax on hardware is a screwball idea.  But giving us a bit of advanced software for nothing with a bundle we’re going to buy anyway, so we’re forced to start relying on it?  That’s a pretty brilliant move.

Welcome To The vExpert Class of 2012

It appears that I’ve been placed in some rarified company. In keeping with my goals for this year, I wanted to start writing more about virtualization. I do a lot of work with it in my day job and figured I should devote some time to talking about it here. I decided at the last minute to sign up for the VMware vExpert program as a way to motivate myself to spend more time on the topic of virtualization. Given that I work for a VMware partner, I almost signed up through the partner track. However, it was more important to me to be an independent vExpert and be considered based on the content of my writing. I’d seen many others talking about their inclusion in the program already via pictures and welcome emails. So I figured I’d just been passed over due to the lack of VMware content on my blog.

On Sunday, April 15th, VMware announced the list of vExperts for 2012. I browsed through the list after I woke up, curious to see if friends like Stephen Foskett (@SFoskett) and Maish Saidel-Keesing (@MaishSK) were still there. Imagine my surprise when I found my name on the first page of the list (they alphabetize by first name, and I’d signed up under “Alfred”). I was shocked, to say the least. This means that I can now count myself among a group of distinguished individuals in virtualization. I’m an evangelist now, officially. I’ve been a huge advocate of using VMware solutions for servers for a while now. This designation just means that I’m going to be spending even more time working with VMware, as well as coming up with good topics to write about. It also fits with my desire to chase after the VCAP-DCA and VCAP-DCD to further my virtualization education; there should be plenty of blogging opportunities along the way.

A vExpert isn’t the final word in virtualization. I recognize that I’ve got quite a bit to learn when it comes to the ins-and-outs of large scale virtualization. What the vExpert designation means to me is that I’ve shown my desire to learn more about these technologies and share them with everyone. There are a lot of great bloggers out there doing this very thing already. I’m excited and humbled to be included in their ranks for the coming year. I just hope I can keep up with the expectations that come with being a vExpert and reward the faith that John Troyer (@jtroyer) and Alex Maier (@lxmaier) have shown in me.