Dell Enterprise Forum and the VRTX of Change

I was invited by Dell to be a part of their first ever Enterprise Forum.  You may remember this event from the past when it was known as Dell Storage Forum, but now that Dell has a bevy of enterprise-focused products in their portfolio a name change was in order.  The Enterprise Forum still had a fair amount of storage announcements.  There was also discussion about networking and even virtualization.  One thing seemed to be on the tip of everyone’s tongue from the moment it was unveiled on Tuesday morning.

VRTX

Say hello to Dell’s newest server platform – VRTX (pronounced “vertex”).  The VRTX is a shift away from the centralized server clusters that you may be used to seeing from companies like Cisco, HP, or IBM.  Dell has taken the blades from their popular M1000e chassis and pulled them into an enclosure that bears more than a passing resemblance to the servers I deployed five or six years ago.  The VRTX is capable of holding up to 4 blade servers in the chassis alongside either twelve 3.5″ hard drives or twenty-five 2.5″ drives, for a grand total of up to 48 TB of storage space.  What sets VRTX apart from other similar designs, like the IBM BladeCenter S of yore, is the room it leaves for expansion.
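That 48 TB maximum is easy to sanity-check. Assuming 4 TB drives in the 3.5″ bays (my assumption – the largest common 3.5″ size when VRTX launched; Dell only quotes the total), the math works out:

```python
# Back-of-the-envelope check on the quoted VRTX maximum capacity.
# The per-drive size is an assumption, not a Dell spec.
bays_3_5 = 12        # 3.5" drive bays in the chassis
drive_tb = 4         # assumed capacity per 3.5" drive, in TB
total_tb = bays_3_5 * drive_tb
print(total_tb)      # 48
```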

Rather than just sliding a quad-port NIC into the mezzanine slot and calling it a day, Dell developed VRTX to expand to meet the future needs of customers.  That’s why you’ll find 8 PCIe slots in VRTX (3 full height, 5 half height).  That’s the real magic in this system.  For example, the VRTX ships today with 8 1GbE ports for network connectivity.  While integrated 10GbE is slated for a future release, you could slide a 10GbE PCIe card into a slot today and attach it to a blade that needs the connectivity.  You could also put in a Serial Attached SCSI (SAS) Host Bus Adapter (HBA) and expand beyond the on-board storage.  In the future, you could even push that to 40GbE, or maybe one of those super fast PCIe SSD cards from a company like Fusion-io.  The key is that the PCIe slots give you a ton of expandability in a small form factor instead of limiting you to whatever mezzanine card or expansion adapter has been blessed by your server vendor’s skunkworks labs.

VRTX doesn’t come without a bit of controversy.  Dell has positioned this system as a remote office/branch office (ROBO) solution that combines everything you would need to turn up a new site into one shippable unit.  That follows along with comments made at a keynote talk on the third day about Dell believing that compute power has reached a point where it will no longer grow at the same rate.  Dell’s solution to the issue is to push more compute power to the edge instead of centralizing it in the data center.  What you lose in manageability you gain in power.

The funny thing for me was looking at VRTX and seeing the solution to a small-scale data center problem I had for many years.  The schools I used to serve didn’t need an 8 or 10-slot blade chassis.  They didn’t need two Compellent SANs with data tiering and failover.  They needed a solution to virtualize their aging workloads onto a small box built for their existing power and cooling infrastructure.  VRTX fits the bill just fine.  It uses 110V power.  The maximum of four blades fits just perfectly with VMware’s Essentials bundle for cheap virtualization with the capability to expand if needed later on.  Everything is the same enterprise-grade hardware that’s being used in other solutions, just in a more SMB-friendly box.  Plus, the entry level price target of $10,000 in a half-loaded configuration fits the budget-conscious needs of a school or small office.

If there is one weakness in the first iteration of VRTX it comes from the software side of things.  VRTX doesn’t have any software beyond what you load on it.  It will run VMware, Citrix, Hyper-V, or any manner of server software you want to install.  There’s no software to manage the platform, though.  Without that, VRTX is a standalone system.  If you truly want to use it as a “pay as you grow” data center solution, you need to find a way to expand the capabilities of the system linearly as you expand the node count.  As a counterpoint, take a look at Nutanix.  Many storage people at Enterprise Forum were calling VRTX the “Dell Nutanix” solution.  You can watch an overview of what Nutanix is doing from a session at Storage Field Day 2 last November:

The key difference is that Nutanix has a software management program that allows their nodes to scale out when a new node is added.  That is what Dell needs to work on developing to harness the power that VRTX represents.  Dell developed this as a ROBO solution yet no one I talked to saw it that way.  They saw this as a building block for a company starting their data center build out.  What’s needed is the glue to stitch two or more VRTX systems together.  Harnessing the power of multiple discrete compute units is a very important part of breaking through all the barriers discussed at the end of Enterprise Forum.


Tom’s Take

Bigger is better.  Except when it’s not.  Sometimes good things really do come in small packages.  Considering that VRTX spent the last four years inside Dell as a proof-of-concept science project, I’d say that Dell has finally achieved something they’ve been wanting to do for a while.  It’s hard to compete against HP and IBM due to their longevity and entrenchment in the blade server market.  Now, Dell has a smaller blade server that customers are clamoring to buy to fill needs that aren’t satisfied by bigger boxes.  The missing ingredient right now is a way to tie them all together.  If Dell can multiplex their resources together they stand an excellent chance of unseating the long-standing titans of blade compute.  And that’s a change worth fighting for.

Disclaimer

I was invited to attend Dell Enterprise Forum at the behest of Dell.  They paid for my travel and lodging expenses while on site in San Jose.  They also provided a Social Media Influencer pass to the event.  At no time did they place any requirements on my attendance or participation in this event.  They did not request that any posts be made about the event.  They did not ask for nor were they granted any kind of consideration in the writing of this or any other Dell Enterprise Forum post.

The Arse First Method of Technical Blogging – Review

When you tell people that you are a blogger, you tend to get a couple of generic responses.  The first is laughter or dismissal.  Some people just don’t understand how you can write all the time.  The second response is curiosity.  Usually, this is expressed as a torrent of questions about how to blog.  What do I write about?  How much should I write? How often should I post? And on and on.  For those of us that have been blogging long enough, the answer is almost a rote recitation of our standards and practices for blogging.  Some people have even been smart enough to turn that standard reply into a blog post.  For Greg Ferro, it was time to turn that blog post into an e-book:

[Cover: The Arse First Method of Technical Blogging]

Cheeky, isn’t it? Weighing in at a svelte 37 pages, this little how-to guide details many of the secrets Greg has picked up for writing blog posts over his career.  He talks about tools for screen captures and knowledge archiving.  He also discusses hosting options and content creation.  To the novice blogger, it’s a step-by-step guide to getting started.  I would highly recommend picking it up if you aren’t sure how to get started in technical blogging, which is remarkably different from blogging about food or pictures or any other non-technical subject.

The Catch

The funny thing about this book is that, while reading more and more of it, I realized that I violate almost every one of Greg’s recommendations for writing a technical blog.  My opening paragraphs are more like story hooks.  I don’t use a lot of bullet points.  I like putting pictures in my posts.  There are many others that I ignore on a pretty regular basis as well.  But don’t think that means that I don’t appreciate what Greg is trying to do with his book.

Greg writes like he speaks in real life.  He doesn’t mince words.  He’s not in love with the sound of his voice.  He’s going to give it to you straight when you ask him a question.  His blogging style is totally reflective of his speaking style.  On the other hand, my blogging style is indicative of my speaking style as well.  I like telling stories and relating things back to universal images through metaphors.  I tend to expound on subjects and give more details to support my arguments rather than restricting that to a simple bulleted statement. People that read Greg’s blog posts and my blog posts would likely be able to pick out which of us authored a particular post.  That’s because we have our own voices.

Greg’s book is a great way to get started with technical blogging.  After you get your first couple of posts down, it’s important to think about finding your voice.  You may like using lots of pictures or video.  You may prefer to keep it short and sweet with the occasional code example.  The key is to find a style that works for you and stick with it.  Once you find a comfortable writing style you’ll find yourself writing more often and about more complex subjects.  When you aren’t worried about getting the words down on paper you’re free to dive right into things that are going to take a lot of thought.

The recommended price of this book is $4.99.  If that scares you off, you can pick it up for just $2.99.  For the price of a candy bar and a 20oz soda, you can learn a little more about blogging and using tools to amplify your writing ability.  If nothing else, you can read through it so you know how Greg thinks when he’s writing information down.  You can purchase The Arse First Method of Technical Blogging at https://leanpub.com/Technical-Blogging-Writing-Arse-First.  I promise you won’t be disappointed.

Juniper Networks Warrior – Review

Documentation is the driest form of communication there is. Whether it be router release notes or stereo instructions, I never seem to be able to read more than a paragraph before tossing things aside. You’d think by now that someone would come up with a better way to educate without driving the reader to drink.

O’Reilly Media has always done a good job of creating technical content that didn’t make me pass out from boredom. They’ve figured out how to strike a balance between what needs to be said and a more effective and entertaining way to say it. Once I started reading the books with the funny animals on the covers I started learning a lot more about the things I was working on. One book in particular caught my eye – Network Warrior by Gary Donahue. Billed as “everything you need to know that wasn’t on the CCNA,” it is a great introduction to the more advanced topics encountered in day-to-day network operations, like spanning tree or the Catalyst series of switches. Network Warrior is heavily influenced by Cisco equipment. While the concepts are pretty straightforward, the bias does lean toward the building on Tasman Drive. Thankfully, O’Reilly enlisted an author to bring the Warrior series to Sunnyvale as well:

[Cover: Juniper Networks Warrior]

Peter Southwick was enlisted to write a Warrior book from the perspective of a Juniper engineer. I picked up a copy of this book the last time I was at Juniper’s headquarters and have spent the past few weeks digesting the info inside.

What Worked

Documentation is boring. It’s a dry description of how to do everything. How-to guides are a bit better written, but they still have to cover the basics. I am a much bigger fan of the cookbook, which is a how-to that takes basic building blocks and turns them into a recipe that accomplishes something. That’s what Juniper Networks Warrior is really about. It’s a cookbook with some context. Each of the vignettes tells a story about a specific deployment or project. By providing a back story to everything, you get a feel for how real implementations tend to flow back and forth between planning and execution. Also, the solutions provided really do a great job of cutting past the boring rote documentation and into things you’ll use more than once. Couple that with the fact that the vignettes are organized around projects rather than technology-focused chapters, and it becomes apparent that this is a very holistic view of technology implementation.

What Didn’t Work

There were a couple of things that didn’t work well in the narrative for me. The first was the “tribe” theme. Southwick continually refers to the teams that he worked with in his projects as “tribes.” While I understand that this does fit somewhat with the whole idea behind the Warrior books, it felt a bit out of place, especially since Donahue didn’t use it in either Network Warrior or Arista Warrior (another entry in the series). I really did try to look past it and not imagine groups of network engineers carrying spears and slings around the data center, but it was used so often in place of “team” or “group” that it became jarring after a while.

The other piece that bothered me a bit was in Chapter 3: Data Center Security Design. The author went out of his way to mention that the solution his “tribe” came up with was in direct competition with one that utilized Cisco gear. He also mentioned that the Juniper solution was going to displace the Cisco solution to a certain degree. I get that. Vendor displacement happens all the time in the VAR world. What bothered me was the occasional mention of a competitor’s gear with words like “forced,” or the casting of something in a negative light simply due to the sticker on the front. I’ve covered that before in my negative marketing post. I bring it up here because it wasn’t present in either Network or Arista Warrior, even though the latter is a vendor-sponsored manual like this one. In particular, an anecdote in the Arista chapter on VRRP mentions that Cisco wanted to shut down the RFC for VRRP due to its similarity with HSRP. No negativity, no poking with a sharp stick. Just a statement of fact, and the readers are left to draw their own conclusions.

I realize that books of this nature often require input from the technical resources of a vendor. I also realize that the regard these books are held in can make them look like a very appealing platform for marketing campaigns, or a tempting place to slip opinion-based verbiage into a factually based volume. I sincerely hope that future volumes tone down the rhetoric just a bit for the sake of providing a good reference volume. Engineers will keep going back to a book if it gives them a healthy dose of the information they need to do their jobs. They won’t go back nearly as often to a book that spends too much time discussing the pros and cons of a particular vendor’s solution. I’d rather see pages of facts and configs that get the job done.

Review Disclaimer

The copy of Juniper Networks Warrior that I reviewed was provided to me by Juniper Networks. I received it as part of a group of items during Network Field Day 5. At no time did Juniper ask for nor were they promised any consideration in the writing of this review. All of the analysis and conclusions contained herein are mine and mine alone.

Tweetbot For Mac – The Only Client You Need

I live my day on Twitter.  Whether it be learning new information, sharing information, or having great conversations, I love the interactions that I get.  Part of getting the most out of Twitter comes from using a client that works to present you with the best experience.  Let me just get this out of the way: the Twitter web interface sucks.  It’s clunky and devotes way too much real estate to providing a very minimal amount of useful information.  I’m constantly assaulted with who I should be following, what’s trending, and who is paying for their trends to float to the top of the list.  I prefer to digest my info a little bit differently.

You may recall that when I used Windows I was a big fan of the Janetter app.  When I transitioned to using a Mac full time, I started using Janetter at first to replicate my workflow.  I still kept my eyes open for a more streamlined client that I could keep on my desktop in the background.  While I loved the way the Mac client from Twitter (née Tweetie) displayed things, I knew that development on that client had all but ended when Loren Brichter left Twitter.  Thankfully, Mark Jardine and Paul Haddad had been busy in the mad science lab to save me.

I downloaded Tweetbot for iOS back when I used an iPhone 3GS.  I loved the interface, but the program was a bit laggy on my venerable old phone.  When I moved to an iPhone 4S, I started using Tweetbot all the time.  This was around the time that Twitter decided to start screwing around with their mobile interface through things like the Dickbar.  Tweetbot on my phone was streamlined.  It allowed me to use gestures to see conversations.  I could see pictures inline and quickly tap links to pull up websites.  I could even send those links to mobile Safari or Instapaper as needed.  It fit my workflow needs perfectly.  It met them so well that I spent most of my time checking Twitter on my phone instead of my desktop.

The wiz kids at Tapbots figured out that a client for Mac was one of their most requested features.  So they got cooking on it.  They released an alpha for us to break and test the living daylights out of.  I loved the alpha so much I immediately deleted all other clients from my Mac and started using it no matter how many undocumented features I had to live through.  I used the alpha/beta clients all the way up to the release.  The same features I loved from the mobile client were there on my desktop.  It didn’t take up tons of room on a separate desktop.  I could use gestures to see conversations.  They even managed to add new features like multi-column support to mimic one of Tweetdeck’s most popular features.  When I found that just before NFD4, I absolutely fell in love.

[Screenshot: Tweetbot for Mac]

Tweetbot is beautiful.  It is optimized for the Retina displays on the new MacBooks, so when you scale it up to HiDPI (4x resolution) it doesn’t look like pixelated garbage.  Tweets can be streamed to the client so you don’t constantly have to pull down to refresh your timeline.  I can pin the timeline to keep up with my tweeps at my leisure instead of the client’s discretion.  Tweetbot even uses iCloud to keep my mobile client synced to the position of my desktop client and vice versa.  If I read tweets on my phone, my timeline position is updated when I get back to my desk.  I think that almost every feature that I need from Twitter is represented here without the fluff of promoted tweets or ads that don’t apply to me.

All this awesomeness doesn’t come without a bit of bad news, though.  If you hop on over to the App Store, you’re going to find out that Tweetbot for Mac costs $20 US. How can a simple Twitter client cost that much?!?  The key lies in the changes to Twitter’s API in version 1.1.  Twitter has decided that third party clients are the enemy.  All users should be using the website or official clients to view things.  Not coincidentally, the website and official clients also have promoted tweets and trends injected into your timeline.  Twitter wants to monetize their user base in the worst way.  I’m sure it’s because they see Mark Zuckerberg sitting on a pile of cash at Facebook and want the same thing for themselves.  The key to that is controlling the user experience.  If they can guarantee that users will see ads, they can charge a hefty fee to advertisers.  The only way to ensure that users see those ads is via official channels.  That means that third party clients like Tweetbot can’t be allowed to exist.

In order to lock the clients out without looking like they are playing favorites, a couple of changes were put in place.  First, non-official clients are limited to a maximum of 100,000 user tokens.  Once you hit your limit, you have to go back to Twitter and ask for more.  However, if Twitter determines that your client “replicates official features and offers no unique features,” you get the door slammed in your face and no more user tokens.  It’s already happened to one client.  If you don’t want to hit your limit too quickly, the only option is to make the price in the store much higher than the “casual” user is willing to pay.  As Greg Ferro (@etherealmind) likes to say, Tweetbot is “reassuringly expensive.”


Tom’s Take

I have a ton of apps on my phone and my MacBook that I’ve used once or twice. I paid the $0.99 or $1.99 to test them out and found that they didn’t meet my needs.  When Tweetbot was finally released, I didn’t hesitate to buy it even though it was $20.  As much as I use Twitter, I can easily justify the cost to myself.  I need a client that doesn’t get in my way. I want flexibility.  I don’t want the extra crap that Twitter is trying to force down my throat.  I want to use Twitter.  I don’t want Twitter to use me.  That’s what I get from Tweetbot.  I don’t need the metrics from Hootsuite.  I just want to read and respond to conversations and save articles for later.  Thanks to Twitter’s meddling, a lot of people have been looking for a replacement for the old Tweetdeck Air client that is being sunsetted on May 7.  I can honestly say without reservation that Tweetbot for Mac is the replacement you’re looking for.

Review Disclaimer

I am a paying user of Tweetbot for iPhone, iPad, and Mac.  These programs were purchased by me.  This review was written without any prior contact with Tapbots.  They did not solicit any of the content or ask for any consideration in the writing of this article.  The conclusions and analysis herein are mine and mine alone.

VMware Partner Exchange 2013


Having been named a vExpert for 2012, I’ve been trying to find ways to get myself involved with the virtualization community. Beyond joining my local VMware Users Group (VMUG), I hadn’t had much success. That is, until the end of February. John Mark Troyer (@jtroyer), the godfather of the vExperts, put out a call for people interested in attending the VMware Partner Exchange in Las Vegas. This would be an all-expenses-paid trip from a vendor. Besides going to a presentation and having a one-on-one engagement with them, there were no other restrictions about what could or couldn’t be said. I figured I might as well take the chance to join in the festivities. I threw my name into the hat and was lucky enough to get selected!

Most vendors have two distinctly different conferences throughout the year. One is focused on end-users and customers and usually carries much more technical content. For Cisco, this is Cisco Live. For VMware, this is VMworld. The other conference revolves around existing partners and resellers. Instead of going over the gory details of vMotion or EIGRP, it focuses on market strategies and feature sets. That is what VMware Partner Exchange (VMwarePEX) was all about for me. Rather than seeing CLI and step-by-step config guides for advanced features, I was treated to a lot of talk about differentiation and product placement. This fit right in with my new-ish role at my VAR that is focused on architecture and less on post-sales technical work.

The sponsoring vendor for my trip was tried-and-true Hewlett Packard. Now, I know I’ve said some things about HP in the past that might not have been taken as glowing endorsements. Still, I wanted to look at what HP had to offer with an open mind. The Converged Application Systems (CAS) team specifically wanted to engage me, along with Damian Karlson (@sixfootdad), Brian Knudtson (@bknudtson), and Chris Wahl (@chriswahl), to observe and comment on what they had to offer. I had never heard of this group inside of HP, which we’ll get into a bit more in a second.

My first real day at VMwarePEX was a day-long bootcamp from HP that served as an introduction to their product lines and how they place themselves in the market alongside Cisco, Dell, and IBM. I must admit that this was much more focused on sales and marketing than my usual presentation lineup. I found it tough to concentrate on certain pieces as we went along. I’m not knocking the presenters, as they did a great job of keeping the people in the room as focused as possible. The material was…a bit dry. I don’t think there was much that could have helped it. We covered servers, networking, storage, applications, and even management in the six hours we were in the session. I learned a lot about what HP had to offer. Based on my previous experiences, this was a very good thing. Once you feel like someone has missed your expectations you tend to regard them with a wary eye. HP did a lot to fix my perception problem by showing they were a lot more than some wireless or switching product issues.

Definition: Software

I attended the VMwarePEX keynote on Tuesday to hear all about the “software defined datacenter.” To be honest, I’m really beginning to take umbrage with all this “software defined <something>” terminology being bandied about by every vendor under the sun. I think of it as the Web 2.0 hype of the 2010s. Since VMware doesn’t manufacture a single piece of hardware to my knowledge, of course their view is that software is the real differentiator in the data center. Their message no longer has anything to do with convincing people that cramming twenty servers into one box is a good idea. Instead, they now find themselves in a dog fight with Amazon, Citrix, and Microsoft on all fronts. They may have pioneered the idea of x86 virtualization, but the rest of the contenders are catching up fast (and surpassing them in some cases).

VMware has to spend a lot of their time now showing the vision for where they want to take their software suites. Note that I said “suites,” because VMware’s message at PEX was loud and clear – don’t just sell the hypervisor any more. VMware wants you to go out and sell the operations management and vCloud suites instead. Gone are the days when someone could just buy a single license for ESX or download ESXi and put it on a lab system to begin a hypervisor build-out. Instead, we now see VMware pushing the whole package from soup to nuts. They want their user base to get comfortable using the ops management tools and various add-ons to the base hypervisor. While the trend may be to stay hypervisor agnostic for the most part, VMware and their competitors realize that if you feel cozy using one set of tools to run your environment, you’ll be more likely to keep going back to them as you expand.

Another piece that VMware is really driving home is the idea of the hybrid cloud. This makes sense when you consider that the biggest public cloud provider out there isn’t exactly VMware-friendly. Amazon has a huge market share among public cloud providers. They offer the ability to convert your VMware workloads to their format. But there’s no easy way back. According to VMware’s top execs, “When a customer moves a workload to Amazon, they lose. And we lose them forever.” The first part of that statement may be a bit of a stretch, but the second is not. Once a customer moves their data and operations to Amazon, they have no real incentive to bring it back. That’s what VMware is trying to change. They have put out a model that allows a customer to build a private cloud inside their own datacenter and have all the features and functionality that they would have in Reston, VA or any other large data center. However, through the use of magic software, they can “cloudburst” their data to a VMware provider/partner in a public cloud data center to take advantage of a processing surplus when needed, such as at tax time or when the NCAA tournament is taxing your servers. That message is also clear to me: spend your money on in-house clouds first, and burst only if you must. Then, bring it all back until you need to burst again. It’s difficult to say whether or not VMware is going to have a lot of success with this model as the drive toward moving workloads into the public cloud gains momentum.

I also got the chance to sit down with the HP CAS group for about an hour with the other bloggers and talk about some of the things they are doing. The CAS group seems to be focused on taking all the pieces of the puzzle and putting them together for customers. That’s similar to what I do in the VAR space, but HP is trying to do it for their own solutions instead of forcing the customer to pay an integrator to do it. While part of me does worry that other companies doing something similar will eventually lead to the demise of the VAR, I think HP is taking the right tactic in their specific case. HP knows better than anyone else how their systems should play together. By creating a group that can give customers and integrators good reference designs and help us get past the sticky points in installation and configuration, they add a significant amount of value to the equation. I plan to dig into the CAS group a bit more to find out what kind of goodies they have that might make me a better engineer overall.


Tom’s Take

Overall, I think that VMwarePEX is well suited for the market that it’s trying to address. This is an excellent place for solution-focused people to get information and roadmaps for all kinds of products. That being said, I don’t think it’s the place for me. I’m still an old CLI jockey. I don’t feel comfortable in a presentation that has almost no code, no live demos, or even a glory shot of a GUI tool. It’s a bit like watching a rugby game. Sure, the action is somewhat familiar and I understand the majority of what’s going on. It still feels like something’s just a bit out of place, though. I think the next VMware event that I attend will be VMworld. With the focus on technical solutions and “nuts and bolts” detail, I think I’ll end up getting more out of it in the long run. I appreciate HP and VMware for taking the time to let me experience Partner Exchange.

Disclaimer

My attendance at VMware Partner Exchange was the result of an all-expenses-paid sponsored trip provided by Hewlett Packard and VMware. My conference attendance, hotel room, meals and incidentals were paid in full. At no time did HP or VMware propose or restrict content to be written on this blog. All opinions and analysis provided herein and in any VMwarePEX-related posts are mine and mine alone.

Juniper MX Series – Review

A year ago I told myself I needed to start learning Junos.  While I did sign up for the Fast Track program and have spent a lot of time trying to get the basics of the JNCIA down, I still haven’t gotten around to taking the test.  In the meantime, I’ve had a lot more interaction with Juniper users and Juniper employees.  One of those was Doug Hanks.  I met him at Network Field Day 4 this year.  He told me about a book that he had recently authored that I might want to check out if I wanted to learn more about Junos and specifically the MX router platform.  Doug was kind enough to send me an autographed copy:

[Cover: Juniper MX Series]

The covers on O’Reilly books are always the best.  It’s like a zoo with awesome content inside.

This is not a book for the beginner.  Frankly, most O’Reilly press books are written for people that have a good idea about what they’re doing.  If you want to get your feet wet with Junos, you probably need to look at the Day One guides that Juniper provides free of charge.  When you’ve gone through those and want to step up to a more in-depth volume, you should pick up this book.  It’s the most extensive, exhaustive guide to a platform that I’ve seen in a very long time.  This isn’t just an overview of the MX or a simple configuration guide.  This book should be shipped with every MX router that leaves Sunnyvale.  This is a manual for the Trio chipset and all the tricks you can do on it.

The MX Series book does a great job of not only explaining what makes the MX and TRIO chipset different, but also how to make them perform at the top of their game.  The chapter on Class of Service (CoS) alone is worth its weight in gold.  That topic has worried me in the past because of other vendors’ simplified command line interfaces for Quality of Service (QoS).  This book spells everything out in a nice, orderly fashion and makes the subject more sensible than any treatment I’ve seen before.  I’m pretty sure those pages are going to get reused a lot as I start my journey down the path of Junos.  But just because the book makes things easy to understand doesn’t mean that it’s shallow on technical knowledge or depth.  The config snippet for DDoS mitigation is fifteen pages long!  That’s a lot of info that you aren’t going to find in a Day One guide.  And all of those chapters are backed up with case studies.  It’s not enough to know how to configure some obscure command.  Instead, you need to see where to use it and what context makes the most sense.  That’s where these things hit home for me.  I was always a fan of word problems in math.  Simple formulas didn’t really stick with me.  I needed an example to reinforce the topic.  This book does an outstanding job of giving me those case studies.


Tom’s Take

The Juniper MX Series book is now my reference point for what a deep-dive tome on a platform should look like.  It covers the technology to exhaustive depth without ever really getting bogged down in the details.  If you sit down and read this cover to cover, you will come away with a better understanding of the MX platform than anyone else on the planet, except perhaps the developers.  That being said, don’t sit down and read it all at once.  Take the time to go into the case studies and implement them in your test lab to see how the various features interact.  Use this book as an encyclopedia, not as a piece of fireside reading material.  You’ll thank yourself much later when you’re not having dreams of CoS policies and tri-color policers.

Disclaimer

This copy of Juniper MX Series was provided to me at no charge by Doug Hanks for the purpose of review.  I agreed with Doug to provide an unbiased review of his book based on my reading of it.  There was no consideration given to him on the basis of providing the book and he never asked for any when providing it.  The opinions and analysis provided in this review reflect my views and mine alone.

WordAds – My Time in Advertising

A few of you probably noticed that I started running ads on this blog a while back, say around February.  I turned them off two weeks ago.  I wanted to give you all a little background into what went on with the WordPress WordAds program that I ran for a bit.

This blog is hosted by WordPress.com.  That means that they control all the admin stuff like code updates and server locations.  All I do is log in and write.  This is great for people that don’t really care about the dirty stuff under the hood and would rather spend their constructive time writing.  That’s what I wanted to do for the most part.  Sure, I miss out on all the cooler things, like using Disqus for my comments or hosting other plugins, but all in all I am very happy with the service provided by WordPress.  The major thing that people will tell you that you’re missing out on with a hosted solution is advertising.  WordPress reserves the right to run some advertisements on your blog when you hit a certain traffic level.  Beyond that, there won’t be any ads on the site if you are hosted by WordPress.  That is, until the advent of the WordAds program.

WordAds is a program designed to allow WordPress-hosted blogs that meet certain criteria to run some limited advertisements.  There aren’t many requirements, other than you must be a publicly visible blog with a custom domain name, such as networkingnerd.net as opposed to networkingnerd.wordpress.com.  Since I met the criteria, I jumped in and got set up for WordAds.  This was mostly a trial run, as I knew that I wasn’t going to make enough money out of my little experiment to quit my day job and become a globe-trotting playboy.  I hoped to collect a bit of money and use it to do something like pay for additional WordPress upgrades or maybe even move to a self-hosted solution at some point down the road.

The setup for WordAds is fairly easy.  Once you’ve indicated your interest in the program and you’ve been vetted by WordPress, all you need to do is log into your control panel and check a box to display your ads.  You can choose to display ads to all your visitors or just the ones that aren’t logged into WordPress.  I set mine up to display to all users.  Once I had selected my ad impression categories, which were a meager list of technology and geeky-type stuff, I turned everything on and began my grand experiment.  The first thing that I noticed is that you aren’t going to get immediate feedback.  It took a month before WordPress reported my earnings, and they only really updated the data once a week or so.  I knew that my coffers weren’t going to be filling up like Scrooge McDuck’s money bin, but a little more real-time feedback, or the option to pull that information with a mouse click, would have been nice.  The other thing that irked me is that I didn’t have a lot of control over the ads that played.  I tried to keep it to something my audience wouldn’t mind seeing, but it seems that the advertising network had other ideas.  The primary reason that I pulled the whole thing down was an annoying ad for a vehicle that kept auto-playing on rollover and blasting my readers with sound.  Since my readers are my greatest asset, and since I don’t want any of you showing up on my doorstep to punch me for annoying the daylights out of you, I decided to pull down the ads.

The payout structure for WordAds involves PayPal, which isn’t a huge deal since almost everyone that has ever bought anything online probably has a PayPal account at this point.  The kicker is that the payout threshold is $100 US.  They won’t cut you a check for your earnings until you’ve hit that magic tipping point.  Right now, after about five months of running ads on my blog, I haven’t even hit $50.  My first month, I made a whole $2.  All that’s good for is getting the paperboy off your back.  I knew that, based on the amount of traffic that I get, living off my advertising wasn’t a realistic goal.  I also know how often I tend to click on banner ads, and since most of my readers are smarter than I am, I knew they weren’t likely to click on the banners either.  Instead, I figured I’d just let the ads sit there until I could pull the money out and use it for upgrades.  At this rate, I’ll probably run out of boring things to say before I get to that point.  So I’ve decided to turn off the ads and go back to what I do best – writing boring pieces about CallManager or taunting the NAT folks.  I’m not worried about making any money off of this whole thing.  The little bit that I do have can go back to WordPress for them to buy a round of coffee for the operations team that keeps my blog from crashing every now and then.  If I’m really that concerned about sponsors, I suppose I can start wearing a jumpsuit to work festooned with patches like racing drivers.  Now I just need to work out my rates for that.

OS X 10.8 Mountain Lion – Review

Today appears to be the day that the world at large gets their hands on OS X 10.8, otherwise known as Mountain Lion. The latest major update in the OS X cat family, Mountain Lion isn’t so much a revolutionary upgrade (like moving from Snow Leopard to Lion) as an evolutionary one (like moving from Leopard to Snow Leopard). I’ve had a chance to use Mountain Lion since early July, when the golden master (GM) build was released to the developer community. What follows are my impressions of the OS from a relatively new Mac user.

When you start your Mountain Lion machine for the first time, you won’t notice a lot that’s different from Lion. That’s one of the nicer things about OS X. I don’t have to worry that Apple is going to come out with some strange AOL-esque GUI update just around the corner. Instead, the same principles that I learned in Lion continue here as well. In lieu of a total window manager overhaul, a heavy coat of polish has been applied everywhere. Most of the features listed on the Mountain Lion website are included, though I’m not likely to use many of them much. Instead, there are a few little quality of life (QoL) things that I’ve noticed. Firstly, Lion originally came with the dock indicator for open programs disabled. Instead of a little light telling you that Safari and Mail were open, you saw nothing. This tied into the new capability that reopened a program’s windows the next time you launched it. Apple would rather you think less about whether a program is open or closed and more about what programs you want to use to accomplish things. In Mountain Lion, the little light that indicates an open program has shrunk to a small lighted notch on the very bottom of the dock below an open program. It’s now rather difficult to determine which programs are open with a quick glance. Being one of those people that is meticulous about which programs I have open at any one time, I find this a bit of a step in the wrong direction. I don’t mind that Apple has changed the default indicator. Just give me an option to put the old one back.

My Mountain Lion Dock with the new open program indicators

Safari

Safari also got an overhaul. One of the things I like the most about Chrome is the Omnibox. The ability to type my searches directly into the address bar saves me a step, and since my job sometimes makes me feel like the Chief Google Search Engineer, saving an extra step can be a big help. Another feature is the iCloud button. iCloud can now sync open tabs across your iPhone/iPad/iPod/Mountain Lion systems. This could be handy for someone that opens a website on their mobile device but would like to look at it on a full-sized screen when they get to the office. Not a groundbreaking feature, but a very nice one to have. The Reading List feature is still there as well from the last update, but being a huge fan of Instapaper, I haven’t really tested it yet.

Dictation

Another new feature is dictation. Mountain Lion includes a Siri-like dictation feature in the operating system that allows you to say what you want rather than typing it out. Make no mistake, though. This isn’t Siri. This is more like the dictation feature from the new iPad. Right now, it won’t do much more than regurgitate what you say. I’m not sure how much I’ll use this feature going forward, as I prefer to write with the keyboard as opposed to thinking out loud. Regular use does make it much more accurate, as the system learns your accent and idiosyncrasies and becomes more adept over time. If you’d like to get a feel for how well the dictation feature works, (the paragraph)

You’ve been reading was done completely by the dictation feature. I’ve left any spelling and grammar mistakes intact to give you a realistic picture. Seriously though, the word paragraph seems to make the dictation feature make a new paragraph.

Gatekeeper

I did have my first run-in with Gatekeeper about a week after I upgraded, but not for the reasons that I thought I would.  Apple’s new program security mechanism is designed to prevent the kind of drive-by downloads and program installations that have embarrassed Apple of late.  Gatekeeper can be set to allow only signed applications from the App Store to be installed or run on the system.  This not only lets Apple protect the non-IT-savvy populace at large from malicious programs, but also gives Apple a remote kill switch in the event that something nasty slips past the reviewers and starts wreaking havoc.  Yes, there have been more nefarious and sinister prognostications that Apple will begin to limit apps to being installed only through the App Store, or that Apple might flip the kill switch on software they deem “unworthy”, but I’m not going to talk about that here.  Instead, I wanted to point out the issue that I had with Gatekeeper.  I use a network monitoring system called N-Able at work that gives me the ability to remote into systems on my customers’ networks.  N-Able uses a Java client to establish this remote connection, whether it be telnet, SSH, or RDP.  However, after my upgrade to Mountain Lion, my first attempt to log into a remote machine was met with a Java failure.  I couldn’t bypass the security warning and launch the app from a web browser to bring up my RDP client.  I checked all the Java security settings that got mucked with after the Flashback fiasco, but they all looked clean.  After a quick Google search, I found the culprit was Gatekeeper.  The default permission model allows Mac App Store apps to run, as well as those from registered developers.  However, the server that I have running N-Able uses a self-signed certificate.  That evidently violates the Gatekeeper rules for program execution.
I changed Gatekeeper’s permission model to allow all apps to run, regardless of where the app was downloaded from.  This was probably something that would have needed to be done anyway at some point, but the lack of specific error messages pointing me toward Gatekeeper worried me.  I can foresee a lot of support calls in the future from unsuspecting users not understanding that their real problem isn’t with the program they are trying to open, but with the underlying security subsystem of their Mac instead.
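For those who prefer the command line, the same Gatekeeper assessment policy can also be managed with the spctl utility that ships with 10.8.  A minimal sketch, with the application path and label below being hypothetical examples rather than anything from my N-Able setup:

```shell
# Check whether Gatekeeper assessments are currently being enforced
spctl --status

# Allow applications from anywhere, mirroring the "Anywhere" radio
# button in System Preferences > Security & Privacy > General
sudo spctl --master-disable

# Turn enforcement back on later
sudo spctl --master-enable

# Alternatively, approve a single application instead of disabling
# Gatekeeper wholesale (path and label are placeholders)
sudo spctl --add --label "MyTrustedApps" /Applications/Example.app
```

The per-application approach is probably the better long-term habit, since it leaves the default protections in place for everything else.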

Twitter Integration

Mountain Lion has also followed the same path as its mobile counterpart and brought Twitter integration into the OS itself. This, to me, is a mixed bag. I’m a huge fan of Twitter clients on the desktop. Since Tapbots released the Tweetbot alpha the same day that I upgraded to Mountain Lion, I’ve been using it as my primary communication method with Twitter. The OS still pops up an update when I have a new Twitter notification or DM, so I see that window before I check my client. The sharing ability in the OS to tweet links and pictures is a nice time saver, but it merely saves me a step of copying and pasting. I doubt I’m any more likely to share things with the new shortcuts than I was before. The forthcoming Facebook integration may be more to my liking. Not because I use Facebook more than I use Twitter. Instead, by having access to Facebook without having to open their website in a browser, I might be more motivated to update every once in a while.

AirPlay

I had a limited opportunity to play with AirPlay in Mountain Lion.  AirPlay, for those not familiar, is the ability to wirelessly stream video or audio from one device to a receiver.  As of right now, the only out-of-the-box receiver is the Apple TV.  The iPad 2 and 3 as well as the iPhone 4S have the capability to stream audio and video to this device.  Older Macs and mobile devices can only stream audio files, à la iTunes.  In Mountain Lion, however, any newer Mac running an i-Series processor can mirror its screen to an Apple TV (or other AirPlay receiver, provided you have the right software installed).  I tested it, and everything worked flawlessly.  Mountain Lion uses Bonjour to detect that a suitable AirPlay receiver is on the network, and the AirPlay icon appears in the notification area to let you know you can mirror your desktop.  The software takes care of sizing your desktop to an HD-friendly resolution and away you go.  There was a bit of video lag on the receiver, but not on the Mountain Lion system itself, so you could probably play games if you wanted, provided you weren’t relying on the AirPlay receiver as your primary screen.  For regular things, like presentations, everything went smoothly.  The only part of this system that I didn’t care much for is the mirroring setup.  While I understand the idea behind AirPlay is to allow things like movies to be streamed over to an Apple TV, I would have liked the ability to attach an Apple TV as a second monitor input.  That would let me do all kinds of interesting things.  First and foremost, I could use the multi-screen features in PowerPoint and Keynote as they were intended to be used.  Or I could use AirPlay with a second HDMI-capable monitor to finally have a dual-monitor setup for my MacBook Air.  But, as a first-generation desktop product, AirPlay on Mountain Lion does some good things.
While I had to borrow the Apple TV that I used to test this feature, I’m likely to go pick one up just to throw in my bag for things like presentations.


Tom’s Take

Is Mountain Lion worth the $20 upgrade price? I would say “yes” with some reservations. Having a newer kernel and device drivers is never a bad thing. Software will soon require Mountain Lion to function, as in the case of the OS X version of Tweetbot when it’s finally released. The feature set is tempting for those that spend time sharing on Twitter or want to use iCloud to sync things back and forth. Notification Center is a plus for those that don’t want popup windows cluttering everything. If you are a heavy user of presentation software and own an Apple TV, the AirPlay mirroring may be the tipping point for you. Overall, compared to those that paid much more for more minor upgrades, or paid for upgrades that broke their system beyond belief (I’m looking at you, Windows ME), upgrading to Mountain Lion is painless and offers some distinct advantages. For the price of a nice steak, you can keep the same performance you’ve had with your system running Lion and get some new features to boot. Maybe this old cougar can keep running a little while longer.

Cisco Unified Communications Manager 8: Expert Administration Cookbook – Review

When you spend as much time configuring Cisco Unified Communications Manager (CUCM) servers as I do, you do one of two things.  Either you spend a lot of time reading through documentation, or you write down the important steps as concisely as possible for later use.  Documentation has its uses.  When you are first learning something or you need the explanation for exactly what a partition does, documentation is your best friend.  However, when you’ve configured a ton of servers already and know the basics cold, wading through page upon page of prose to find the missing parameter of your Automated Alternate Routing (AAR) configuration is time consuming and frustrating.  If only there were some book that you could keep with you that has the basic configurations spelled out in short snippets.  A book that would allow you to quickly look up a function or feature and get it up and running without a fifteen-page lead-in.  Thankfully, such a book does exist:

Tanner Ezell (@tannerezell) does a great job of condensing the mountain of documentation that Cisco has produced to support CUCM into 285 pages of tips and tricks for configuring important features that you’ll run across every day.  Unlike the Cisco Press CUCM guide I reviewed previously, Tanner’s book doesn’t step through the details of configuring a partition or a calling search space (CSS) for the first time.  It assumes that you are a professional who has done tasks like that many, many times before.  Instead, it concentrates on some of the newer features in CUCM 8 that the reader may or may not have configured before, things like E.164 normalized dialing using the “+” symbol or Cross-Cluster Extension Mobility.  In fact, after reading the first three recipes in the book, I configured plus-dialing on my production cluster with no fuss.  That’s not something I was comfortable doing after reading through the tome of configuration on Cisco’s website or in the Solution Reference Network Design (SRND) document.

Think of this book as a reference guide for the 20% of features that you may configure once or twice every six months.  Sure, I can create a North American Numbering Plan (NANP) route pattern list in my sleep.  However, when it comes time for me to configure AAR or set up the Real Time Monitoring Tool (RTMT) to email me when something breaks, I’m going to have to look up how to do that.  Now, all I need to do is flip open this book to the appropriate chapter and get right to work without hammering CTRL + F to find what I need to know.

Tom’s Take

CUCM 8 Expert Administration Cookbook was a pretty quick read for me.  That’s because I’ve seen many of the things in here before.  The problem is that I don’t remember them since they aren’t things I do every day.  It’s nice to know that I have a good reference book that I can rely on to help me in those times of need when I have to have a feature up and running quickly and my mind has gone totally blank on it.  I commend Tanner Ezell for taking the time to boil the feature configuration down to the bare necessities needed to get everything operational and then put it into printed form for us to enjoy.  I’m sure that my copy of this book is going to be well worn for many deployments to come.

Review Disclaimer

The copy of CUCM 8: Expert Administration Cookbook that was reviewed was purchased by me from Amazon.  It was not provided by the publisher.  As such, neither the publisher nor the author was granted any consideration in the writing of this review.  The opinions and analysis contained herein are mine and mine alone.

Automating vSphere with VMware vCenter Orchestrator – Review

I’ll be honest.  Orchestration, to me, is something a conductor does with the Philharmonic.  I keep hearing the word thrown around in virtualization and cloud discussions but I’m never quite sure what it means.  I know it has something to do with automating processes and such but beyond that I can’t give a detailed description of what is involved from a technical perspective.  Luckily, thanks to VMware Press and Cody Bunch (@cody_bunch) I don’t have to be uneducated any longer:

One of the first books from VMware Press, Automating vSphere with VMware vCenter Orchestrator (I’m going to abbreviate to Automating vSphere) is a fine example of the type of reference material that is needed to help sort through some of the more esoteric concepts surrounding virtualization and cloud computing today.  As I started reading through the introduction, I knew immediately that I was going to enjoy this book immensely due to the humor and light tone.  It’s very easy to write a virtualization encyclopedia.  It’s another thing to make it readable.  Thankfully, Cody Bunch has turned what could have otherwise been a very dry read into a great reference book filled with Star Trek references and Monty Python humor.

Coming in at just over 200 pages with some additional appendices, this book once again qualifies as “pound cake reading”, in that you need to take your time and understand that length isn’t the important part, as the content is very filling.  The author starts off by assuming I know nothing about orchestration and filling me in on the basics behind why vCenter Orchestrator (vCO) is so useful to overworked server/virtualization admins.  The opening chapter makes a very good case for the use of orchestration even in smaller environments due to the consistency of application and scalability potential should the virtualization needs of a company begin to increase rapidly.  I’ve seen this myself many times in smaller customers.  Once the restriction of one server to one operating system is removed, virtualized servers soon begin to multiply very quickly.  With vCO, managing and automating the creation and curation of these servers is effortless.  Provided you aren’t afraid to get your hands dirty.  The rest of Part I of the book covers the installation and configuration of vCO, including scenarios where you want to split the components apart to increase performance and scalability.

Part II delves into the nuts and bolts of how vCO works.  There are lots of discussions about workflows that have containers that perform operations.  When presented like this, vCO doesn’t look quite as daunting to an orchestration rookie.  It’s important to help new people understand that there really isn’t a lot of magic in the individual parts of vCO.  The key, just like a real orchestra, is bringing them together to create something greater than the sum of its parts.  The real jewel of the book for me was Part III, a case study with a fictional retail company.  Case studies are always a good way to ground readers in the reality and application of nebulous concepts.  Thankfully, the Amazing Smoothie company is doing many of the things I would find myself doing for my customers on a regular basis.  I enjoyed watching the workflows and JavaScript come together to automate menial tasks like consolidating snapshots or retiring virtual machines.  I’m pretty sure that I’m going to find myself dog-earing many of the pages in this section in the future as I learn to apply all the nuggets contained within to real-life scenarios for my own environment as well as those of my customers.

If you’d like to grab this book, you can pick it up at the VMware Press site or on Amazon.


Tom’s Take

I’m very impressed with the caliber of writing I’m seeing out of VMware Press in this initial offering.  I’m not one for reading dry documentation or recitations of facts and figures.  By engaging writers like Cody Bunch, VMware Press has made it enjoyable to learn about new concepts while at the same time giving me insight into products I never knew I needed.  If you are a virtualization admin that manages more than two or three servers, I highly recommend you take a peek at this book.  The software it discusses doesn’t cost you anything to try, but the sheer complexity of trying to configure it yourself could cause you to give up on vCO without a real appraisal of its capabilities.  Thanks to VMware Press and Cody Bunch, the small cost of this book will easily be offset by gains in productivity down the road.

Book Review Disclaimer

A review copy of Automating vSphere with VMware vCenter Orchestrator was provided to me by VMware Press.  VMware Press did not ask me to review this book as a condition of providing the copy.  VMware Press did not ask for nor were they promised any consideration in the writing of this review.  The thoughts and opinions expressed herein are mine and mine alone.