My Virtualization Primer

When I gave my cloud presentation earlier this year, about 10% of my audience walked out before the end. I couldn’t really figure out why either. I thought that an overview of the cloud was a great topic to bring up among people that might not otherwise know much about it. Through repeated viewings of my presentation, I think I’ve figured out where I lost most everyone. I should have stopped after my cloud section and spent the rest of the time clarifying everything. Instead, I barreled through the next section on virtualization with wild abandon, as if I were giving the presentation to a group of people that were already doing it. I should have split the two and focused on presenting virtualization in its own session.

When I got the chance to present again at the fall edition of this conference, I jumped at it. Here was my opportunity to erase my mistake and spend more time on the “how” of things. Coupled with my selection as a vExpert, I figured it was about time for me to evangelize all the great things about virtualization. If you are at all familiar with virtualization, this is going to be a pretty boring presentation to watch. Here’s a link to my slide deck (PDF Warning):

Here’s the video to go along with it:

Not my worst presentation. I felt it came off as more of a conversation this time than a lecture. We did have some good discussion before the video started rolling that I wish I had captured. One of the things that really took me by surprise was the lack of questions. I don’t know if that’s because people are just being generally polite or if they’re worried about the quality of their questions. I’m used to being in presentations at Tech Field Day where the delegates aren’t afraid to voice their opinions about things. I’m beginning to wonder if that is the exception to the rule. Even at other presentations that I’ve been to locally, the audience seems to be on the quiet side for the most part. I’ve even considered doing a TFD-style presentation of about two or three slides where the rest becomes a big discussion. I know I’d get a lot out of that, but I’m not sure my audience would appreciate it as much.

I’ve also noticed that I need to start being careful when I’m in other presentations. In one that I attended two days after this video was made, I had to strongly resist the urge to correct the presenter on something. An audience member asked a question about BYOD security posture and classification, and the answer the presenter gave wasn’t the one I would have wanted. I decided that discretion was the better part of valor and kept my mouth shut. What about you? If the presenter is saying something totally wrong or has missed the point entirely, would you say something?

Tom’s Take

In the end, most of it comes down to practice. When you assemble your slide deck and practice it a couple of times, you should feel good about the material. Don’t be one of those presenters that gets caught off guard by your own slide transitions. Don’t laugh, it happened in a different presentation. For me, the key going forward is going to be to reduce the slides and spend more time on the conversation. I’ve already decided that my content for 2013 is going to focus on IPv6. People have been coming to me asking about my original IPv6 presentation from 2011, and due to the exhaustion of IPv4 at RIPE and the ongoing runout at ARIN, I think it’s time to revisit that one with a focus on real-world experience. That does mean that I’m going to have a lot on my plate in the next few months, but when I am done I’m going to have a lot of good anecdotes to tell.

SDN and the IT Toolbox

There’s been a *lot* of talk about software-defined networking (SDN) being the next great thing to change networking. Article after article has come out recently talking about how things like the VMware acquisition of Nicira are going to put network engineers out of work. To anyone that’s been around networking for a while, this isn’t much different from the talk that’s come out about any one of a number of different technologies over the last decade.

I’m an IT nerd. I can work on a computer with my eyes closed. However, not everything in my house is a computer. Sometimes I have to work on other things, like hanging mini-blinds or fixing a closet door. For those cases, I have to rely on my toolbox. I’ve been building it up over the years to include all the things one might need to do odd jobs around the house. I have a hammer and a big set of screwdrivers. I have sockets and wrenches. I have drills and tape measures. The funny thing about these tools is the “new tool mentality”. Every time I get a new tool, I think of all the new things that I can do with it. When I first got my power drill, I was drilling holes in everything. I hung blinds with ease. I moved door knobs. I looked for anything and everything I could find to use my drill for. The problem with that mentality is that after a while, you find that your new tool can’t be used for every little job. I can’t drive a nail with a drill. I can’t measure a board with a drill. In fact, besides drilling holes and driving screws, drills aren’t good for a whole lot of work. With experience, you learn that a drill is a great tool for a specific range of uses, and not much more.

This same type of “new tool mentality” is pervasive in IT as well. Once we develop a new tool for a purpose, we tend to use that tool to solve almost every problem. In my time in IT, I have seen protocols being used to solve every imaginable problem. Remember ATM? How about LANE? If we can make everything ATM, we can solve every problem. How about QoS? I was told at the beginning of my networking career that QoS is the answer to every problem. You just have to know how to ask the right question. Even MPLS fell into that category at one point. MPLS-ing the entire world just makes it run better, right? Much like my drill analogy above, once the “newness” wore off of these protocols and solutions, we found out that they are really well suited to a much narrower purpose. MPLS and QoS tend to be used for the things that they are very good at doing and maybe for a few corner cases outside of that focus. That’s why we still need to rely on many other protocols and technologies to have a complete toolbox.

SDN has had the “new tool mentality” for the past few months. There’s no denying at this point that it’s a disruptive technology and ripe to change the way that people like me look at networking. However, to say that it will eventually become the de facto standard for everything out there and the only way to accomplish networking in the next three years may be stretching things just a bit. I’m pretty sure that SDN is going to have a big impact on my work as an integrator. I know that many of the higher education institutions that I talk to regularly are not only looking at it, but in the case of things like Internet2, they’re required to have support for SDN (the OpenFlow flavor) in order to continue forward with high speed connections. I’ve purposely avoided launching myself into the SDN fray for the time being because I want to be sure I know what I’m talking about. There are quite a few people out there talking about SDN. Some know what they’re talking about. Others see it as a way to jump into the discussion with a loud voice just to be heard. The latter are usually the ones talking about SDN as a destructive force that will cause us all to be flipping burgers in two years. Rather than giving credence to their outlook on things, I would say to wait a bit. The new shininess of SDN will eventually give way to a more realistic way of looking at its application in the networking world.  Then, it will be the best tool for the jobs that it’s suited for.  Of course, by then we’ll have some other new tool to proclaim as the end-all, be-all of networking, but that’s just the way things are.
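For those that haven’t dug into OpenFlow yet, the core idea is simple: a central controller programs match/action rules into each switch’s flow table, and anything the table can’t handle gets punted back to the controller. Here’s a minimal Python sketch of that model; the class and field names are mine for illustration and don’t correspond to any real controller or switch API:

```python
# A conceptual model of an OpenFlow-style flow table. Names and fields
# here are illustrative only, not a real controller or switch API.

class FlowTable:
    def __init__(self):
        self.rules = []  # list of (priority, match_fields, action)

    def add_rule(self, priority, match, action):
        # The controller pushes rules down; higher priority wins.
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: r[0], reverse=True)

    def lookup(self, packet):
        # Match on whatever header fields the rule cares about.
        for priority, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        # A table miss gets punted to the controller for a decision.
        return "send_to_controller"

table = FlowTable()
table.add_rule(200, {"dst_ip": "10.0.0.5"}, "output:port2")
table.add_rule(100, {"eth_type": 0x0800}, "output:port1")

print(table.lookup({"dst_ip": "10.0.0.5", "eth_type": 0x0800}))  # output:port2
print(table.lookup({"dst_ip": "10.0.0.9", "eth_type": 0x0800}))  # output:port1
print(table.lookup({"eth_type": 0x86DD}))                        # send_to_controller
```

The interesting part isn’t the lookup itself, it’s that the rules come from a central point of control instead of being derived hop-by-hop on each device.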

vRAM – Reversal of (costing us a) Fortune

A bombshell news item came across my feed in the last couple of days.  According to a source that gave information to CRN, VMware will be doing away with the vRAM entitlement licensing structure.  To say that the outcry of support for this rumored licensing change was uproarious would be the understatement of the year.  Ever since the changes in vSphere 5.0 last year, virtualization admins the world over have chafed at the prospect of having the amount of RAM installed in their systems artificially limited via a licensing structure.

On the surface, this made lots of sense.  VMware has always been licensed on a per-socket processor license.  Back in the old days, that worked well.  If you needed a larger, more powerful system you naturally bought more processors.  With a lot more processors, VMware made a lot more money.  Then, Intel went and started cramming more and more cores onto a processor die.  This was a great boon for the end user.  Now you could have two, four, or even eight cores in one socket.  Who needed more than two sockets?  Once the floodgates opened on the multi-core race, it became a huge competition to increase core density to keep up with Moore’s Law.  For companies like VMware, the multi-core arms race was a disaster.  If the most you are ever going to make from a server is two processor licenses no matter how many virtual machines get crammed into it, then you are royally screwed.  I’m sure the scurrying around VMware to find a new revenue source kicked into high gear once companies like Cisco started producing servers with lots of processor cores and more than enough horsepower to run a whole VM cluster.  That’s when VMware hit on a winner.  If processor cores are the big engine that drives the virtualization monster truck, then RAM is the gas in the gas tank.  Cisco and others loaded down those monster two-socket boxes with enough RAM to sink an aircraft carrier.  They had to in order to keep those processors humming along.  VMware stepped in and said, “We missed the boat on processor cores.  Let’s limit the amount of RAM to specific licenses.”  Their first attempt at vRAM was a huge headache.  The RAM entitlements were half of what they are now.  Only after much name calling and pleading on the part of the customer base did VMware double them all to the levels that we see today.

According to VMware, the vRAM entitlements didn’t affect the majority of their customers.  The ones that needed the additional RAM were already running the Enterprise or Enterprise Plus licenses.  However, what it did limit was growth.  Now, if a customer has been running an Enterprise Plus license for their two-socket machine and the time for an upgrade comes along, they won’t get to order all that extra RAM like Cisco or HP would want them to do.  Why bother ordering more than 192GB of RAM if I have to buy extra licenses just to use it?  The idea that I can just have those processor licenses floating around for use with other machines is just as silly in my mind.  If I bought one server with 256GB of RAM and needed 3 licenses to use it all, I’m likely going to buy the same server again.  Then I have 6 licenses for 4 processors.  Sure, I could buy another server if I wanted, but I’d have to load it with something like 80GB of RAM, unless I wanted to buy yet another license.  I’m left with lots of leftover licenses that I’m not going to utilize.  That makes the accounting department unhappy.  Telling the bean counters that you bought something but you can’t utilize it all because of an artificial limitation makes them angry.  Overall, you have a decision that makes engineering and management unhappy.
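To put some numbers on that, here’s a back-of-the-napkin sketch of the licensing math, assuming the revised Enterprise Plus entitlement of 96GB of vRAM per socket license (treat the figures as illustrative):

```python
import math

# Back-of-the-napkin vRAM licensing math. The 96GB entitlement figure
# is the post-revision Enterprise Plus number; everything else here is
# illustrative.

ENTITLEMENT_GB = 96  # vRAM entitlement per Enterprise Plus license

def licenses_needed(ram_gb, sockets):
    # You need at least one license per socket, plus enough extra
    # licenses to cover all the vRAM you actually configured.
    return max(sockets, math.ceil(ram_gb / ENTITLEMENT_GB))

print(licenses_needed(192, 2))  # 2 -- the break-even config for two sockets
print(licenses_needed(256, 2))  # 3 -- the extra RAM costs a third license
print(licenses_needed(512, 4))  # 6 -- two such servers: 6 licenses, 4 sockets
```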

If the rumor from CRN is true, this is a great thing for us all.  It means we can concentrate more on solutions and less on ensuring we have counted the number of processors, real or imagined.  In addition, the idea that VMware might begin bundling other software, such as vCloud Director, is equally appealing.  Trying to convince my bean counters that I want to try this extra cool thing that doesn’t have any immediate impact but might save money down the road is a bit of a stretch.  Telling them it’s a part of the bundle we have to buy is easy.  Cisco has done this to great effect with Unified Workspace Licensing and Jabber for Everyone.  If it’s already a part of the bundle, I can use it and not have to worry about justifying it.  If VMware does the same thing for vCloud Director and other software, it should open doors to a lot more penetration of interesting software.  Given that VMware hasn’t outright said that this isn’t true, I’m willing to bet that the announcement will be met with even more fanfare from the regular trade press.  Besides, after the outpouring of support for this decision, it’s going to be hard for VMware to back out now.  These kinds of things aren’t really “leaked” anymore.  I’d wager that this was done with the express permission of the VMware PR department as a way to get a reaction before VMworld.  If the community wasn’t so hot about it, the announcement would have been buried at the end of the show.  As it is, they could announce only this change at the keynote and the audience would give a standing ovation.


Tom’s Take

I hate vRAM.  I think it’s a very backwards idea designed to try and put the genie back in the bottle after VMware missed the boat on licensing processor cores instead of sockets.  After spending more than a year listening to the constant complaining about this licensing structure, VMware is doing the right thing by reversing course and giving us back our RAM.  Solution bundles are the way to go with a platform like the one that VMware is building.  By giving us access to software we wouldn’t otherwise get to run, we can now build bigger and better virtualized clusters.  When we’re dependent on all this technology working in concert, that’s when VMware wins: when support contracts and recurring revenue pour into their coffers because we can’t live without vCloud Director or vFabric Manager.  Making us pay a tax on hardware is a screwball idea.  But giving us a bit of advanced software for nothing with a bundle we’re going to buy anyway so we are forced to start relying on it?  That’s a pretty brilliant move.

Welcome To The vExpert Class of 2012

It appears that I’ve been placed in some rarified company. In keeping with my goals for this year, I wanted to start writing more about virtualization. I do a lot of work with it in my day job and figured I should devote some time to talking about it here. I decided at the last minute to sign up for the VMware vExpert program as a way to motivate myself to spend more time on the topic of virtualization. Given that I work for a VMware partner, I almost signed up through the partner track. However, it was more important to me to be an independent vExpert and be considered based on the content of my writing. I’d seen many others talking about their inclusion in the program already via pictures and welcome emails, so I figured I’d just been passed over due to a lack of VMware content on my blog.

On Sunday, April 15th, VMware announced the list of vExperts for 2012. I browsed through the list after I woke up, curious to see if friends like Stephen Foskett (@SFoskett) and Maish Saidel-Keesing (@MaishSK) were still there. Imagine my surprise when I found my name on the first page of the list (they alphabetize by first name, and I’d signed up under “Alfred”). I was shocked to say the least. This means that I can now count myself among a group of distinguished individuals in virtualization. I’m an evangelist now, and it’s official. I’ve been a huge advocate of using VMware solutions for servers for a while now. This designation just means that I’m going to be spending even more time working with VMware, as well as coming up with good topics to write about. With my desire to chase after the VCAP-DCA and VCAP-DCD to further my virtualization education, there should be no shortage of blogging opportunities on those topics either.

A vExpert isn’t the final word in virtualization. I recognize that I’ve got quite a bit to learn when it comes to the ins-and-outs of large scale virtualization. What the vExpert designation means to me is that I’ve shown my desire to learn more about these technologies and share them with everyone. There are a lot of great bloggers out there doing this very thing already. I’m excited and humbled to be included in their ranks for the coming year. I just hope I can keep up with the expectations that come with being a vExpert and reward the faith that John Troyer (@jtroyer) and Alex Maier (@lxmaier) have shown in me.

Solarwinds – Network Field Day 3

The first presenter up for Network Field Day 3 was a familiar face to many Tech Field Day viewers.  Solarwinds presented at the first Network Field Day and has been a sponsor of more events than any other.  It’s always nice to see vendors coming back time and again to show the delegates what they’ve been cooking since their last appearance.

We started our day in the Doubletree San Jose boardroom.  We were joined by Joel Dolisy, the Chief Software Architect for Solarwinds, and Mav Turner (@mavturner), the Senior Product Manager for the network software division.  After introductions, we jumped right into some of the great software that Solarwinds makes for network engineers.  First up was the Solarwinds IP SLA Monitor.  IP Service Level Agreement (SLA) is a very important tool used by engineers to track key network metrics like reachability and latency.  What makes IP SLA so great as opposed to a bigger monitoring tool is that the engineer can take the information from IP SLA and use it to create actionable items, such as bringing down an overloaded link or sending trap information to a third-party monitoring system to alert key personnel when something is amiss.  One of the sore spots about IP SLA from my perspective is the difficulty that I have in setting it up.  Thankfully, Solarwinds thought of that for me already.  Not only can the IP SLA Monitor show me all the pertinent details about a given IP SLA configuration, I can even create a new one on the fly if needed.  IP SLA Monitor allows me to push the configurations down to a single router, or to multiple routers as quickly as I can select interfaces and metrics to track.  It’s a very interesting product, especially when you know that it grew out of a simple way to manage Voice over IP (VoIP) call metrics.  When Solarwinds realized the potential of the program, they immediately added more features and enabled it across a whole host of protocols.  If you’d like to try it out on a single router, you can download the free version here.
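To give you a flavor of what an IP SLA operation actually measures, here’s a rough Python stand-in. Real IP SLA runs on the router itself and supports a whole menu of operation types (ICMP echo, jitter, HTTP, and so on); this sketch just mimics the reachability and latency piece with a simple TCP connect probe:

```python
import socket
import time

# A rough stand-in for what an IP SLA probe measures: reachability
# and latency against a target. This is a conceptual sketch, not a
# replacement for router-based IP SLA operations.

def probe(host, port, timeout=2.0):
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            latency_ms = (time.monotonic() - start) * 1000
            return {"reachable": True, "latency_ms": round(latency_ms, 2)}
    except OSError:
        # Refused, timed out, or unroutable all count as a failed probe.
        return {"reachable": False, "latency_ms": None}

result = probe("127.0.0.1", 9)  # port 9 is likely closed on most hosts
print(result)
```

A real monitoring setup would run probes like this on a schedule and trigger actions when the results cross a threshold, which is exactly the kind of plumbing IP SLA Monitor saves you from building by hand.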

During the presentation, I asked Solarwinds about adding some additional wireless troubleshooting capabilities to the product lines, courtesy of a request from Blake Krone (@BlakeKrone).  One thing that Joel and Mav said was that Solarwinds adds the large majority of their new features based on customer response and request.  I do admire that a company so highly regarded by most engineers I know is willing to sit down and make sure that customer needs are addressed in such a manner.  That way, the features that get added into the program really do come from the desires of the userbase.  The only thing that might give me pause about this arrangement is that Solarwinds may be missing an opportunity to drive some development around new features by waiting for people to ask for them.  Many times I’ve looked at a piece of software and seen a curious feature in a list only to realize that I never knew I needed it.  I hope that Solarwinds is keeping up with the rapid pace of software development and ensuring that the hottest new technologies are being supported as quickly as possible in their flagship Orion platform.

One thing that Solarwinds took some additional time to show off to us was their Virtualization Manager.  An acquisition from Hyper9 last year, Virtualization Manager allows Solarwinds to hook into the VMware vCenter APIs to find all kinds of interesting things like orphaned VMs or performance issues.  You can create custom alerts on these data points to let you know if a VM goes missing after a difficult vMotion or if your hypervisors have become CPU or memory bound.  You can also archive configs and perform capacity planning and a whole host of other useful features.  One of the nicest things, though, was the fact that the UI was completely devoid of Flash!  Everything was written with HTML5 so that there is no need to worry about whether you’re using the correct device to manage your VM infrastructure’s web portal.  This was a big win for the assembled delegates, as management systems that require proprietary scripting languages or horrendously laggy and memory hungry plugins tend to make us cranky at best.
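The alerting model is easy to picture: pull data points for each VM from the vCenter APIs, then run a list of rules over them. This Python sketch uses made-up data and thresholds just to show the shape of it; it is not the Solarwinds or vCenter API:

```python
# A conceptual sketch of rule-based VM alerting. The data structures,
# field names, and thresholds are invented for illustration; they do
# not reflect the Solarwinds or vCenter APIs.

vms = [
    {"name": "web01", "cpu_ready_pct": 12.5, "registered": True},
    {"name": "db01",  "cpu_ready_pct": 2.1,  "registered": True},
    {"name": "old-test", "cpu_ready_pct": 0.0, "registered": False},
]

alert_rules = [
    ("orphaned VM", lambda vm: not vm["registered"]),
    ("CPU bound",   lambda vm: vm["cpu_ready_pct"] > 10.0),
]

def evaluate(vms, rules):
    # Run every rule against every VM and collect the hits.
    alerts = []
    for vm in vms:
        for label, predicate in rules:
            if predicate(vm):
                alerts.append((vm["name"], label))
    return alerts

for name, label in evaluate(vms, alert_rules):
    print(f"{name}: {label}")
```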

We also had some good discussions toward the end around building Linux-based polling devices and how extensible the querying capabilities can be inside of Orion.  I think this kind of flexibility is huge in allowing me to craft the tool to my needs instead of the other way around.  When you think about it, there aren’t that many companies that are willing to provide you the framework to rebuild the tool to fit your environment.  That’s one thing that Solarwinds has in their favor.

If you’d like to learn more about the various offerings that Solarwinds has available, you can check them out at http://www.solarwinds.com/.  You can also follow them on Twitter at their new handle, @solarwinds.

Tom’s Take

Solarwinds has been making tools that make my life easier for quite some time.  They’ve also been offering them for free for a while as well.  This is a great way for people to figure out if the larger collection of tools in the Orion suite will be a good fit for what they want to do with their network.  I think the large number of tools can be daunting for an engineer just starting out or one that’s in over their head.  While the overview we received was a wonderful peek at things, Solarwinds needs to take the time to be sure to educate users about the tools’ capabilities, both free and paid.  I also feel that Solarwinds needs to take the time to develop some software functionality independently of user requests.  I know that the majority of the features they build into their tools are requested by users.  But as I said above, sometimes the feature I need is the one I didn’t know could be done until I read the release notes.

Tech Field Day Disclaimer

Solarwinds was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me with a coffee cup.  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Automating vSphere with VMware vCenter Orchestrator – Review

I’ll be honest.  Orchestration, to me, is something a conductor does with the Philharmonic.  I keep hearing the word thrown around in virtualization and cloud discussions but I’m never quite sure what it means.  I know it has something to do with automating processes and such but beyond that I can’t give a detailed description of what is involved from a technical perspective.  Luckily, thanks to VMware Press and Cody Bunch (@cody_bunch) I don’t have to be uneducated any longer:

One of the first books from VMware Press, Automating vSphere with VMware vCenter Orchestrator (I’m going to abbreviate to Automating vSphere) is a fine example of the type of reference material that is needed to help sort through some of the more esoteric concepts surrounding virtualization and cloud computing today.  As I started reading through the introduction, I knew immediately that I was going to enjoy this book immensely due to the humor and light tone.  It’s very easy to write a virtualization encyclopedia.  It’s another thing to make it readable.  Thankfully, Cody Bunch has turned what could have otherwise been a very dry read into a great reference book filled with Star Trek references and Monty Python humor.

Coming in at just over 200 pages with some additional appendices, this book once again qualifies as “pound cake reading”, in that you need to take your time and understand that length isn’t the important part, as the content is very filling.  The author starts off by assuming I know nothing about orchestration and fills me in on the basics behind why vCenter Orchestrator (vCO) is so useful to overworked server/virtualization admins.  The opening chapter makes a very good case for the use of orchestration even in smaller environments due to the consistency of application and scalability potential should the virtualization needs of a company begin to increase rapidly.  I’ve seen this myself many times in smaller customers.  Once the restriction of one server to one operating system is removed, virtualized servers soon begin to multiply very quickly.  With vCO, managing and automating the creation and curation of these servers is effortless, provided you aren’t afraid to get your hands dirty.  The rest of Part I of the book covers the installation and configuration of vCO, including scenarios where you want to split the components apart to increase performance and scalability.

Part II delves into the nuts and bolts of how vCO works.  There are lots of discussions about workflows that have containers that perform operations.  When presented like this, vCO doesn’t look quite as daunting to an orchestration rookie.  It’s important to help the new people understand that there really isn’t a lot of magic in the individual parts of vCO.  The key, just like a real orchestra, is bringing them together to create something greater than the sum of its parts.  The real jewel of the book to me was Part III, a case study with a fictional retail company.  Case studies are always a good way to ground readers in the reality and application of nebulous concepts.  Thankfully, the Amazing Smoothie company is doing many of the things I would find myself doing for my customers on a regular basis.  I enjoyed watching the workflows and JavaScript come together to automate menial tasks like consolidating snapshots or retiring virtual machines.  I’m pretty sure that I’m going to find myself dog-earing many of the pages in this section in the future as I learn to apply all the nuggets contained within to real life scenarios for my own environment as well as that of my customers.
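The workflow idea is easy to model in a few lines. Real vCO workflows are built graphically and scripted in JavaScript, but conceptually each element performs one operation and hands its results to the next. Here’s a hypothetical snapshot-cleanup workflow sketched in Python:

```python
# A conceptual model of an orchestration workflow: a chain of steps,
# each doing one operation and passing a shared context along. The
# snapshot-cleanup scenario and all names here are hypothetical; real
# vCO workflows are built graphically and scripted in JavaScript.

def find_old_snapshots(ctx):
    ctx["to_delete"] = [s for s in ctx["snapshots"] if s["age_days"] > 30]
    return ctx

def delete_snapshots(ctx):
    ctx["deleted"] = [s["name"] for s in ctx["to_delete"]]
    ctx["snapshots"] = [s for s in ctx["snapshots"] if s not in ctx["to_delete"]]
    return ctx

def notify(ctx):
    ctx["log"] = f"Deleted {len(ctx['deleted'])} snapshot(s)"
    return ctx

workflow = [find_old_snapshots, delete_snapshots, notify]

ctx = {"snapshots": [{"name": "pre-upgrade", "age_days": 90},
                     {"name": "nightly", "age_days": 1}]}
for step in workflow:
    ctx = step(ctx)

print(ctx["log"])  # Deleted 1 snapshot(s)
```

The win isn’t the individual steps, which are trivial; it’s that the chain runs the same way every time, which is the consistency argument the book makes for orchestration in even small environments.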

If you’d like to grab this book, you can pick it up at the VMware Press site or on Amazon.


Tom’s Take

I’m very impressed with the caliber of writing I’m seeing out of VMware Press in this initial offering.  I’m not one for reading dry documentation or recitations of facts and figures.  By engaging writers like Cody Bunch, VMware Press has made it enjoyable to learn about new concepts while at the same time giving me insight into products I never knew I needed.  If you are a virtualization admin that manages more than two or three servers, I highly recommend you take a peek at this book.  The software it discusses doesn’t cost you anything to try, but the sheer complexity of trying to configure it yourself could cause you to give up on vCO without a real appraisal of its capabilities.  Thanks to VMware Press and Cody Bunch, the modest cost of this book will easily be offset by the gains in productivity down the road.

Book Review Disclaimer

A review copy of Automating vSphere with VMware vCenter Orchestrator was provided to me by VMware Press.  VMware Press did not ask me to review this book as a condition of providing the copy.  VMware Press did not ask for nor were they promised any consideration in the writing of this review.  The thoughts and opinions expressed herein are mine and mine alone.

2011 in Review, 2012 in Preview

2011 was a busy year for me.  I set myself some rather modest goals exactly one year ago as a way to keep my priorities focused for the coming 365 days.  How’d I do?

1. CCIE R&S: Been There. Done That. Got the Polo Shirt.

2. Upgrade to VCP4: Funny thing.  VMware went and released vSphere 5 before I could get my VCP upgraded.  So I skipped straight over 4 and went right to 5.  I even got to go to class.

3. Go for CCIE: Voice: Ha! Yeah, I was starting to have my doubts when I put that one down on the list.  Thankfully, I cleared my R&S lab.  However, the thought of a second track is starting to sound compelling…

4. Wikify my documentation: Missed the mark on this one.  Spent way too much time doing things and not enough time writing them all down.  I’ll carry this one over for 2012.

5. Spend More Time Teaching: Never got around to this one.  Seems my time was otherwise occupied for the majority of the year.

Forty percent isn’t bad, right?  Instead, I found myself spending time becoming a regular guest on the Packet Pushers podcast and attending three Tech Field Day Events: Tech Field Day 5, Wireless Field Day 1, and Network Field Day 2.  I’ve gotten to meet a lot of great people from social media and made a lot of new friends.  I even managed to keep making blog posts the whole year.  That, in and of itself, is an accomplishment.

What now?  I try to put a couple of things out there as a way to hold myself to the fire and be accountable for my aspirations.  That way, I can look back in 2013 and hopefully hit at least 50% next time.  Looking forward to the next 366 days (356 if the Mayans were right):

1. Juniper – I think it’s time to broaden my horizons.  I’ve talked to the Juniper folks quite a bit in 2011.  They’ve given me a great overview of how their technology works and there is some great potential in it.  Juniper isn’t something I run into every day, but I think it would be in my best interest to start learning how to get around in the curly-brace CLI.  After all, if they can convert Ivan, they must really have some good stuff.

2. Data Center – Another growth area where I feel I have a lot of catching up to do is the data center.  I feel comfortable working on NX-OS somewhat, but the lack of time I get to configure it every day makes the rust a little thick sometimes.  If it wasn’t for guys like Tony Mattke and Jeff Fry, I’d have a lot more catching up to do.  When you look at how UCS is being positioned by Cisco and where Juniper wants to take QFabric, I think I need to spend some time picking up more data center technology.  Just in case I find myself stranded in there for an extended period of time.  Can’t have this turning into the Lord of the CLIs.

3. Advanced Virtualization – Since I finally upgraded my VCP to version 5, I can start looking at some of the more advanced certifications that didn’t exist back when I was a VCP3.  Namely the VCAP.  I’m a design junkie, so the DCD track would be a great way for me to add some of the above data center skills while picking up some best practices.  The DCA troubleshooting training would be ideal for my current role, since a simple check of vCenter is about all I can muster in the troubleshooting arena.  I’d rather spend some time learning how the ESXi CLI works than fighting with a mouse to admin my virtual infrastructure.

4. Head to The Cloud – No, not quite what you’re thinking.  I suffered an SSD failure this year and if it hadn’t been for me having two hard drives in my laptop, I’d probably have lost a good portion of my files as well.  I keep a lot of notes on my laptop and not all of them are saved elsewhere.  Last year I tried to wikify everything and failed miserably.  This year I think I’m going to take some baby steps and get my important documents and notes saved elsewhere and off my local drives.  I’m looking to replace my OneNote archive with Evernote and keep my important documents in Google Docs as opposed to local Microsoft Word.  By keeping my important documents in the cloud, I don’t have to sweat the next drive death quite as much.

The free time that I seem to have acquired now that I’ve conquered the lab seems to have been filled with a whole lot of nothing.  In this industry, you can’t sit still for very long or you’ll find yourself getting passed by almost everyone and everything.  I need to sharpen my focus back to these things to keep moving forward and spend less time resting on my laurels.  I hope to spend even more time debating technology with the Packet Pushers and engaging with vendors at Tech Field Day.  Given how amazing and humbling 2011 was, I can’t wait to see what 2012 has in store for me.

VMware vSphere: What’s New [5.0] – Review

As I spend a lot of my time in training and learning about new technologies, I thought it might be a good idea to start reviewing the classes that I attend to help my readers figure out how to get the best out of their training dollars.  Recently, I had the opportunity to attend the 2-day VMware vSphere: What’s New [5.0] class.

If you are thinking about becoming a VMware Certified Professional (VCP), you’re going to need to go to class.  It’s a requirement for certification.  I don’t necessarily agree with this though.  No other certification I hold requires me to go to class.  The CISSP requires a certain level of experience, and when I looked at the Certified Ethical Hacker (CEH) requirements, they said that their required class could be waived with demonstrable experience.  So the fact that VMware is making me go to class is kind of irritating.  That’s even taking into account that my employer sees the usefulness of staying certified and lets me attend a large number of classes.  I really feel for the independent contractors that need to be VCPs to get into the field but can’t afford to either pay for the class or take the time off for 2-4 days to attend one.  There should be some kind of waiver for people that can demonstrate experience with VMware.  Yes, I know that if you are a 1-step removed VCP (VCP4 in this case) you don’t have to go to class.  Yes, I know that there are very good reasons to make people attend class, such as keeping current with new technology and ensuring your certified user base is up on all the new features.  Yes, I know that the costs of the class are necessary for things like facilities rental and materials.  Just because I understand why it’s required and why it’s so expensive doesn’t mean I have to like it.  But, I digress…

I chose to take the 2-day What’s New class because it was a quicker way to go through the requirements as well as being valid for upgrading my VCP3 to a VCP5 until February.  The 2-day What’s New class is a condensed version of the 4-day Install, Configure, and Manage (ICM) class that introduces VMware to those that are new to virtualization.  Being condensed, the prerequisites for the course state you must be familiar with VMware.  While you don’t need to be intimately familiar with every aspect of the hypervisor and its settings, you had better at least be comfortable logging into vCenter and doing some basic tasks.  There won’t be much time for hand-holding in the What’s New class.

The materials for the 2-day class are a 270-page student manual with the slide deck from the class printed in note-taking format and an 80-page lab guide.  The student guide has ample annotations of the slide deck as well as space for taking notes in class.  The lab guide has places to record the information for your student lab pods so you aren’t constantly flipping back and forth to remember what your vCenter or ESXi servers are named.  The lab guide went into good detail about each task, making sure that you knew where to go to enable features or perform tasks.  The lab guide is great for those that want to do a little more practice after leaving the class in a personal lab environment.

The material covered in the class focused on the new features in vSphere 5 and how it’s different from vSphere 4.  Special attention is paid to the new storage features and the new deployment options for ESXi servers, like stateless Auto Deploy.  Thanks to the ample amount of lab time, you have a great opportunity to reinforce the topics with actual examples rather than just staring at static screens on slides.  If you get a really good instructor (like we had), you can even see live configurations of these topics on their lab machines.  Rick, our instructor, made sure to show us live examples every chance he had rather than just relying on stuffy slides.  He also did a great job going into depth on topics that deserved it, like VMware HA changes and elections.  By the way, for anyone that has ever complained about HSRP elections or STP root bridge selection, you should really check out http://www.yellow-bricks.com and get Duncan Epping’s vSphere Clustering Deep Dive book.  Therein, you will learn that in vSphere 5, 99 is greater than 100 when performing HA elections.  I’ll give you a hint: lexical numbers don’t follow normal rules…
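The lexical trick is easy to demonstrate.  Here’s a quick sketch in plain Python, just to show the principle of string comparison, not VMware’s actual election code:

```python
# Lexical (string) comparison works character by character from the left,
# so "9" vs "1" settles the contest before string length ever matters.
host_ids = ["100", "99"]

numeric_winner = max(host_ids, key=int)  # how we'd expect numbers to behave
lexical_winner = max(host_ids)           # how a string sort actually behaves

print(numeric_winner)  # "100"
print(lexical_winner)  # "99"
```

That’s why 99 can beat 100 in an election: as strings, the comparison never gets past the first character.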


Tom’s Take

Overall, I found the condensed version of class to be a much better value than the 4-day ICM course.  On the other hand, I’ve also been working with VMware for the last 3 years, so I had a good grasp on the basics.  For someone that isn’t familiar with the way virtualization works, the 4-day ICM class will give you a much more measured understanding and more time to play with the basics.  For those that have already gotten their feet wet with VMware and are just looking for a tune-up or need to go take the VCP5 exam, you can’t go wrong with the 2-day short, short version of the class.  It’s going to save you a good deal of time and money that you can use to buy more licenses for vRAM.

If you’d like to see more details on the VMware education offerings or sign up for a VMware class, head over to the VMware Education Website at http://mylearn.vmware.com/portals/www/

Unable To Access User-Defined Storage Service

In my VMware vSphere: What’s New [5.0] class this week, I learned why having a lab environment to test things is very important.  I also learned that some bugs are fun to try and fix.

vSphere 5 introduced a lot of new features focused on storage.  One of these is Profile Driven Storage.  This allows users to create tiers for datastores and ensure that those profiles can be attached to VMs at a later date.  This would be very useful for someone that has ultra-fast SSD arrays like those from PureStorage alongside SAS or SATA arrays.  You can define the gold tier as the SSD array for VMs that need fast storage access, the silver tier for slightly slower SAS drives, and the bronze tier for the large-but-slow SATA datastore.  I like this idea of allowing users to define their storage capabilities into easy-to-assign tiers.  However, we hit a bug when we tried to implement it in the lab.
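As a rough sketch of the concept (the tier names and backing-media labels here are my own illustration, not the vSphere API), the mapping amounts to something like this:

```python
# Illustrative model of user-defined storage tiers; not VMware's actual API.
# Tier names and backing-storage labels are assumptions for the example.
TIERS = {
    "gold": "SSD",      # ultra-fast arrays for latency-sensitive VMs
    "silver": "SAS",    # slightly slower spinning disks
    "bronze": "SATA",   # large-but-slow bulk storage
}

def tier_for(backing):
    """Return the tier whose capability matches a datastore's backing storage."""
    for tier, media in TIERS.items():
        if media == backing:
            return tier
    return None  # no user-defined capability matches this datastore

print(tier_for("SAS"))  # silver
```

Once the tiers exist, attaching the right profile to a VM is just a matter of matching its storage needs against the datastore’s capability.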

After we created the tiers in VIClient, we went to assign them to the datastores from the Home -> Datastores and Datastore Clusters section.  When we right-clicked on the datastore and chose “Assign User-Defined Storage Capability” we got hit with this error:

Unable To Access User-Defined Storage Service

Huh?  You let me configure the silly thing?  It’s got to be there somewhere!  Let me assign it to something.

Odds are good that if you are seeing this error, you’ve also installed the vSphere Web Client.  Another great option for users that don’t want to install the VMware Infrastructure Client, the Web Client allows you to access VMs from Firefox or Internet Explorer and manage them just like you would from the VIClient.  This would be useful for those out there that are running OS X and currently don’t have a way to manage VMs unless they launch the VIClient from a virtual machine or other emulated environment.  The Web Client software needs to be installed on a Windows (or Linux) machine in order to respond to requests from web browsers.  For many users that run OS X, the logical choice would be to install the Web Client service on the Windows-based vCenter Server and then use Firefox to remotely access the web client afterwards.  That’s what we did in the lab.

The problem is that the Web Client service conflicts with the Profile Driven Storage service.  I’m not sure if they use the same port numbers or if they just collide in memory space or something.  As long as the Web Client service is running, the Profile Driven Storage options cannot be configured on a datastore.  The fix is somewhat simple:

1.  Open the Service console on your vCenter server.

2.  Find the VMware Web Client service.

3.  Stop or disable it.

4.  Restart VIClient.

Simple, huh?  You can now assign the User-Defined Storage profiles to all the datastores you’d like.  When you finish, close out VIClient and restart the Web Client Service so your Mac folks can administer VMs.  Just remember that every time you want to use Profile Driven Storage, you’re going to have to bounce the Web Client service.
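If you end up bouncing it often, the stop/work/start dance can be scripted.  Here’s a hedged sketch (the Windows service name below is an assumption on my part; check the Services console on your vCenter server for the real one) that wraps the work between a stop and a restart:

```python
# Sketch: bounce the vSphere Web Client service around a profile-storage task.
# The service name is hypothetical; verify it on your own vCenter server.
import subprocess
from contextlib import contextmanager

SERVICE = "vspherewebclientsvc"  # assumed name; confirm in services.msc

def sc_command(action, service):
    """Build the Windows 'sc' command line for a service action."""
    return ["sc", action, service]

@contextmanager
def service_stopped(service, run=subprocess.run):
    """Stop a Windows service, yield for the work, then start it again."""
    run(sc_command("stop", service), check=True)
    try:
        yield
    finally:
        # Restart even if the work in between raised an exception.
        run(sc_command("start", service), check=True)

# Usage on the vCenter server:
# with service_stopped(SERVICE):
#     ...assign your user-defined storage capabilities in VIClient...
```

The context manager restarts the Web Client on the way out, so your Mac folks get their access back even if something goes wrong mid-task.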

One can only hope that this particular bug gets fixed in an upcoming point release of vSphere 5.  It’s not a show stopper, but I can see how the less-than-helpful error message could cause issues for those that don’t know where to look for help.  I’m just glad I found it in a learning lab and not in production.

BYOD: High School Never Ends

There is a lot of buzz around about the porting of applications to every conceivable platform.  Most of it can be traced back to a movement in the IT/user world known as Bring Your Own Device (BYOD), the idea that a user can bring in their own personal access device and still manage to perform their job functions.  I’m going to look at BYOD and why I think that it’s more of the same stuff we’ve been dealing with since lunch period in high school.

BYOD isn’t a new concept.  Contractors and engineers have been doing it for years.  Greg Ferro and Chris Jones would much prefer bringing their own MacBooks to a customer’s site to get the job done.  Matthew Norwood would prefer to have just about anything other than the corporate dinosaur that he babies through boot up and shut down.  Even I have my tastes when it comes to laptops.  Recently though, the explosion of smartphones and tablets has caused a shift toward more ubiquitous computing.  It now seems to be a bullet point requirement that your software or hardware has a support app in a cloud app repository or the capability to be managed from a 3.5″ capacitive touch screen.  Battle lines are drawn around whether or not your software is visible on a Fruit Company Mobile Device or a Robot Branded Smarty Phone.  Users want to drag in any old tablet and expect to do their entire job function from a 7″ screen.

However, while BYOD is all about running software from any endpoint, the driving forces behind it aren’t quite as noble.  I think once I start describing how I see things, you’ll start noticing a few parallels, especially if you have teenagers.

– BYOD is about prestige.  Who usually starts asking about running an app on an iPad?  Well, besides the office Gadget Nerd that stood in line for 4 hours and ran out of the store screeching like a kid in a candy store?  Odds are, it’s the CxO that comes to you and informs you that they’ve just purchased a Galaxy Tablet and they would like it set up.  The device is gingerly handed to you to perform your IT voodoo on, all while the executive waits patiently.  Usually, there is some kind of interjection from them about how they got a good deal and how the drone at the store told them it had a lot of amazing features.  The CxO usually can’t wait to show it around after you’ve finished syncing their mail and calendar and pictures of their expensive dogs.  Wanna know why?  Because it’s a status symbol.  They want to show off all the things it can do to those that can’t get one.  Whether the device is overpriced or simply unavailable from any supply chain, there are some people that revel in rubbing people’s noses in opulence.  By showing off how their tablet or smartphone gets emails and surfs the web, they are attempting to widen the IT class gap.  Sound like high school to you?  Air Jordans?  Expensive blue jeans?  Ringing any bells?  The same kind of people that liked to crow that their parents bought them a BMW in high school are the same ones that will gladly show off their iPad or Galaxy Tab solely for the purpose of snubbing you.  They couldn’t care less about doing their job from it.

– BYOD is about entitlement.  I could go on and on about this one, but I’ll try to keep it on topic.  There seems to be a growing movement in the younger generation that you as a company owe them something for coming to work for you.  They want things like nap time or gold stars next to their names for doing something.  No, really.  This naturally extends to their choice of work device.  I’m going to pick on Mac users here because that particular device comes up more often than not, but it extends to Linux users and Windows users as well.  The “entitled” user thinks that you should change your entire network architecture to suit their particular situation.  Something like this:

User: I can’t get my mail.

Admin: You’re using the Fail Mail client.  We’re on Exchange.  You’ll need to use Outlook.

User: I’m not installing Office on my system!  Microsoft is a cold-hearted company that murders orphans in Antarctica.  Fail Mail donates $.25 of every shareware license to the West Pacific Tree Slug Preservation Society.  I want to use my mail client.

Admin: I guess you could use the webmail…

User: How about you use the Fail Mail Server instead?  They donate $2 of every purchase to fungus research.  I think it’s a much more capable server than dumb old Exchange anyway.

Admin: <facepalm>

I hope this doesn’t sound familiar.  One of the great joys of IT is telling users you aren’t going to reinvent the wheel just to mollify them.  However, in many cases the user demanding you change everything happens to sign your paycheck.  That tends to end with you ripping out a mail server or reprogramming a whole tool because it used (or didn’t use) Flash or HTML5.

– BYOD is about never changing your perspective.  I have an iPad.  And an iPhone.  And a behemoth Lenovo W701 laptop.  And I use them all.  Often, I use them at the same time.  I see each as a very capable tool for what it’s designed to do.  I don’t read ebooks on my iPhone.  I don’t run virtual machines on my iPad.  And I don’t use my laptop for texting or phone calls.  Just like I don’t use screwdrivers like chisels or use a pipe wrench like a hammer.  However, there are some people that like picking up one device and never putting it down.  These people seem to believe that the world would be a more perfect place if they could sit in their chair and do their whole job from a touch screen.  They feel that moving to a laptop to type a blog post is a travesty.  Being forced to use a high-powered graphical desktop for CAD work is unthinkable.  I have to admit that I’ve tried to see things from their perspective.  I’ve tried to use my iPad to take notes and remotely administer servers.  Guess what?  I just couldn’t do it.  I’m a firm believer that tools should be used according to their design, rather than having a 56-in-one tool that does a lot of things poorly.

Tom’s Take

I think keeping your tools capable and portable is a very good thing.  I hate software that can only be run from a Windows 2000 server or needs a special hardware dongle to even start.  I love that tools are becoming web-enabled and can be used from any PC/Mac/toaster.  However, I also think that things need to be kept in perspective.  BYOD is a Charlie Foxtrot just waiting to happen if the motivations behind it aren’t honest and sincere.  Simply porting your management app to the App Store so the CxO can show off his new iPad while complaining that we need to scrap the company website because it uses Flash and no one will bother using their dumb old laptop ever again is really, really bad.  Give me a compelling reason to use your app, like a new intuitive interface or a remote capability I wouldn’t normally have.  Just putting your tablet app out so you can sound cool or fit in with the popular crowd won’t work any better than wearing parachute pants did in high school.  Except, this time you won’t get stuffed into a locker.  You’ll just lose my business.