The Cargo Cult of Google Tools

You should definitely watch this amazing video from Ben Sigelman of LightStep that was recorded at Cloud Field Day 4. The good stuff comes right up front.

In less than five minutes, he takes apart some of the crazy notions we hold in the tech world today. I like the observation that you can’t build a single system that spans more than three or four orders of magnitude of scale. Yes, you really shouldn’t be using Hadoop for simple things. And Machine Learning is not a magic wand that fixes every problem.

However, my favorite thing was the quick mention of how it’s folly to emulate Google by reaching for their tools for every solution. Ben should know, because he is an ex-Googler. I think I can sum up this entire discussion with less than a minute of his talk here:

Google’s solutions were built for scale that basically doesn’t exist outside of maybe a handful of companies with a trillion dollar valuation. It’s foolish to assume that their solutions are better. They’re just more scalable. But they are actually very feature-poor. There’s a tradeoff there. We should not be imitating what Google did without thinking about why they did it. Sometimes the “whys” will apply to us, sometimes they won’t.

Gee, where have I heard something like this before? Oh yeah. How about this post. Or maybe this one on OCP. If I had a microphone I would have handed it to Ben so he could drop it.

Building a Laser Mousetrap

We’ve reached the point in networking and other IT disciplines where we have built cargo cults around Facebook and Google. We practically worship every tool they release into the wild and try to emulate that style in our own networks. And it’s not just the tools we use, either. We also keep trying to emulate the service provider style of Facebook and Google, where they treat their primary users and consumers of services like your ISP treats you. That architectural style is being lauded by so many analysts and forward-thinking firms that you’re probably sick of hearing about it.

Guess what? You are not Google. Or Facebook. Or LinkedIn. You are not solving massive problems at the scale that they are solving them. Your 50-person office does not need Cassandra or Hadoop or TensorFlow. Why?

  • Google Has Massive Scale – Ben mentioned it in the video above. Google’s published scale numbers are massive, and even those are probably on the low side. The real numbers could be an order of magnitude higher than what we realize. When you have to start quoting throughput in “Library of Congress” units to make sense to normal people, you’re in a class by yourself.
  • Google Builds Solutions For Their Problems – It’s all well and good that Google has built a ton of tools to solve their issues. It’s even nice of them to have shared those tools with the community through open source. But realistically speaking, when are you really going to need Cassandra for anything but the most complicated and complex database problems? It’s like a guy who goes out and buys a pneumatic impact wrench to fix the training wheels on his daughter’s bike. Sure, it will get the job done. But it’s going to be way overpowered and cause more problems than it solves.
  • Google’s Tools Don’t Solve Your Problems – This is the crux of Ben’s argument above. Google’s tools aren’t designed to solve a small flow issue in an SME network. They’re designed to keep the lights on in an organization that maps the world and provides video content to billions of people. Google tools are purpose-built. And they aren’t flexible outside that purpose. They are built to be scalable, not flexible.

Down To Earth

Since Google’s scale numbers are hard to comprehend, let’s look at a better example from days gone by. I’m talking about the Cisco Aironet-to-LWAPP Upgrade Tool.

I used this a lot back in the day to upgrade autonomous APs to LWAPP controller-based APs. It was a very simple tool. It did exactly what it said in the title. And it didn’t do much more than that. You fed it an image, pointed it at an AP, and it did the rest. There was some backend magic of removing and installing certificates and other necessary things to pave the way for the upgrade, but it was essentially a batch TFTP server.

It was simple. It didn’t check that you had the right image for the AP. It didn’t throw out good error codes when you blew something up. It only ran on a maximum of 5 APs at a time. And you had to close the tool every three or four uses because it had a memory leak! But it was still a better choice than trying to upgrade those APs by hand through the CLI.
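
To make the “batch TFTP server” description concrete, here’s a minimal sketch of what the batching half of a tool like that might look like. Everything in it is an assumption for illustration: the AP addresses, the image name, and the use of curl as a TFTP client. The real tool also handled the certificate work and served the image for the APs to fetch, which this ignores.

```python
# Illustrative sketch only -- not Cisco's actual tool. The real utility
# also swapped certificates before the transfer and acted as the server.
from concurrent.futures import ThreadPoolExecutor
import subprocess

AP_ADDRESSES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical APs
IMAGE = "lwapp-recovery.tar"                            # hypothetical image

def upgrade(ap):
    """Push the image to one AP over TFTP (direction simplified;
    the real tool served the image for the AP to fetch)."""
    result = subprocess.run(
        ["curl", "-sS", "-T", IMAGE, f"tftp://{ap}/{IMAGE}"],
        capture_output=True,
    )
    return ap, result.returncode == 0

# Like the original: at most 5 APs in flight, no image validation,
# no useful error codes -- just attempt the transfer and report.
with ThreadPoolExecutor(max_workers=5) as pool:
    for ap, ok in pool.map(upgrade, AP_ADDRESSES):
        print(f"{ap}: {'upgraded' if ok else 'failed'}")
```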

This tool is over ten years old at this point and is still available for download on Cisco’s site. Why? Because you may still need it. It doesn’t scale to 1,000 APs. It doesn’t give you any functionality other than upgrading 5 Aironet APs at a time to LWAPP (or CAPWAP) images. That’s it. That’s the purpose of the tool. And it’s still useful.

Tools like this aren’t built to be the ultimate solution to every problem. They don’t try to pack in every possible feature to be a “single pane of glass” problem solver. Instead, they focus on one problem and solve it better than anything else. Now, imagine that tool running at a scale your mind can’t comprehend. And you’ll know why Google builds their tools the way they do.


Tom’s Take

I have a constant discussion on Twitter about the phrase “begs the question”. Begging the question is a logical fallacy. Almost every time, the speaker really means “raises the question”. Likewise, every time you think you need to use a Google tool to solve a problem, you’re almost always wrong. You’re not operating at the scale necessary to need that solution. Instead, the majority of people looking to implement Google solutions in their networks are like people who put chrome on everything on their car. They’re looking to show off instead of getting things done. It’s time to retire the Google Cargo Cult and instead ask ourselves what problems we’re really trying to solve, as Ben Sigelman mentions above. I think we’ll end up much happier in the long run and find our work lives much less complicated.

Why I Won’t Be Getting Google Glass

You may recall a few months back when I wrote an article talking about Google Glass and how I thought that the first generation of this wearable computing device was aimed way too low in terms of target applications. When Google started a grassroots campaign to hand out Glass units to social media influencers, I retweeted my blog post with the hashtag #IfIHadGlass with the idea that someone at Google might see it and realize they needed to set their sights higher. Funny enough, someone at Google did see the tweet and told me that I was in the running to be offered a development unit of Glass. All for driving a bit of traffic to my blog.

About a month ago, I got the magic DM from Google Glass saying that I could go online and request my unit along with a snazzy carrying case and a sunglass lens if I wanted. I only had to pony up $1500 US for the privilege. Oh, and I could only pick it up at a secured Google facility. I don’t even know where the closest one of those to Oklahoma might be. After weighing the whole thing carefully, I made my decision.

I won’t be participating in generation one of Google Glass.

I had plenty of reasons. I’m not averse to participating in development trials. I use beta software all the time. I signed up for the last wave of Google CR-48 Chromebooks. In fact, I still use that woefully underpowered CR-48 to this day. But Google Glass represents something entirely different from those beta opportunities.

From Entry to Profit

Google isn’t creating a barrier to entry through their usual methods of restricting supply or making the program invite-only. Instead, they are trying to restrict Glass users to those with a spare $1500 to drop on a late alpha/early beta piece of hardware. I also think they are trying to recoup the development costs of the project via the early adopters. Google has gone from being an awesome development shop to a company acutely aware of the bottom line. Google has laid down some very stringent rules to determine what can be shown on Glass. Advertising is verboten. Anyone want to bet that Google finds a way to work AdWords in somewhere? If you are relying on your tried-and-true user base of developers to recover your costs before you even release the product to the masses, you’ve missed big time.

Eye of the Beholder

One of the other things that turned me off about the first generation of Glass is the technology not quite being where I thought it would be. After examining what Glass is capable of doing from a projection standpoint, many of my initial conceptions about the unit were way off. I suppose that has a lot to do with what I thought Google was really working on. Instead of finding a way to track eye movement inside of a specific area and deliver results based on where the user’s eye is focused, Google instead chose to simply project a virtual screen on the user’s eye slightly off center from the field of vision. That’s a great win for version one. But it doesn’t really accomplish what I thought Google Glass should do. The idea of a wearable eyeglass computer isn’t that useful to me if the field of vision is limited to a camera glued to the side of a pair of eyeglass frames. Without the ability to track the eye movements of a user, it’s simply not possible to filter the huge amount of information being taken in by the user. If Google could implement a function to see what the user is focusing on, I’m sure that some companies would pay *huge* development dollars to be able to track that information or run some kind of augmented reality advertisement for an alternative to that brand. Just go and watch Minority Report if you want to know what I’m thinking about.

Mind the Leash

According to my friend Blake Krone (@BlakeKrone), who just posted his first Google Glass update, the unit is great for taking pictures and video without the need to dig out a camera or take your eyes off the subject for more than the half second it takes to activate the Glass camera with a voice command.  Once you’ve gotten those shiny new pictures ready to upload to Google+, how are you going to do it?  There’s the rub in the first generation Glass units.  You have to tether Glass to some kind of mobile hotspot in order to be able to upload photos outside of a WiFi hotspot.  I guess trying to cram a cellular radio into the little plastic frame was more than the engineers could muster in the initial prototype.  Many will stop me here and interject that WiFi hotspot access is fairly common now.  All you have to do is grab a cup of coffee from Starbucks or a milkshake from McDonald’s and let your photos upload to GoogleSpace.  How does that work from a mountaintop?  What if I had a video that I wanted to post right away from the middle of the ocean?  How exactly do you livestream video while skydiving over the Moscone Center during Google I/O?  Here’s a hint:  You plant engineers on the roof with parabolic dishes to broadcast WiFi straight up in the air.  Not as magical when you strip all the layers away.  For me, the need to upgrade my data plan to include tethering just so I could upload those pics and videos outside my house was another non-starter.  Maybe the second generation of Glass will have figured out how to make a cellular radio small enough to fit inside a pair of glasses.

Tom’s Take

Google Glass has made some people deliriously happy. They have a computer strapped to their face and they are hacking away to create applications that are going to change the way we interact with software and systems in general. Those people are a lot smarter than me. I’m not a developer. I’m not a visionary. I just call things like I see them. To me, Google Glass was shoved out the door a generation too early to be of real use. It was created to show that Google is still on the cutting edge of hardware development even though no one was developing wearable computing. On the other hand, Google did paint a huge target on their face. When the genie came out of the bottle, other companies like Apple and Pebble started developing their own take on wearable computing. Sure, it’s not as striking as a pair of sci-fi goggles. But evolutionary steps here lead to the slimming down of technology to the point where those iPhones and Samsung Galaxy S 4 Whatevers can fit comfortably into the frame of any designer eyeglasses. And that’s when the real money is going to be made. Not by gouging developers or requiring your users to be chained to a smartphone.

If you want to check out what Glass looks like from the perspective of someone who isn’t going to wear them in the shower, check out Blake’s Google Glass blog at http://FromEyeLevel.com

Software Defined Cars

I think everything in the IT world has been tagged as “software defined” by this point. There’s software defined networking, software defined storage, the software defined data center, and so on. Given that the definitions of the things I just enumerated are very hard to nail down, it’s no surprise that many in the greater IT community just roll their eyes when they start hearing someone talk about SD.

I try to find ways to discuss advanced topics like this with people that may not understand what a hypervisor is or what a forwarding engine is really supposed to be doing. The analogies that I come up with usually relate to everyday objects that are familiar to my readers. If I can frame the Internet as a highway and help people “get it,” then I’ve succeeded.

During one particularly interesting discussion, I started trying to relate SDN to the automobile. The car is a fairly stable platform that has been iterated upon many times in the 150 years that it has been around. We’ve seen steam-powered single-seat models give way to 8+ passenger units capable of hauling tons of freight. It is a platform that is very much defined by the hardware. Engines and seating are the first things that spring to mind, but also wheels and cargo areas. The difference between a sports car and an SUV is very apparent due to hardware, much in the same way that a workgroup access switch only resembles a core data center switch in the most basic terms.

This got me to thinking: what would it take to software define a car? How could I radically change the thinking behind an automobile with software? At first, I thought about software programs running in the engine that assist the driver with things like fuel consumption or perhaps an on-demand traction and ride handling system. Those are great additional features for sure, but they don’t really add anything to the base performance of a car beyond a few extra tweaks. Even the most advanced “programming” tools offered to performance specialists, which allow for the careful optimization of transmission shifting patterns and fuel injector mixture recalibration, don’t really fall into the software defined category. While those programs offer a way to configure the car in a manner different from the original intent, they are difficult to operate and require a great deal of special knowledge to configure in the first place.

That’s when it hit me like a bolt out of the blue. We already have a software defined car. Google has been testing it for years. Only they call it a Driverless Car. Think about it in terms of our favorite topic of SDN. Google has taken the hardware that we are used to (the car) and replaced the control plane with a software construct (the robot steering mechanism). The software is capable of directing the forwarding of the hardware with no user intervention, as illustrated in this video.

That’s a pretty amazing feat when you think about it. Of course, programming a car to drive itself isn’t easy. There’s a ton of extra data generated as a car learns to drive itself, and all of it must be taken into account. In much the same way, the network is going to generate mountains of additional data that needs to be captured by some kind of sensor or management platform. That extra data represents the network feedback that allows you to do things like steer around obstacles, whether they be a deer in the road or a saturated uplink to a cloud provider.
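
To put that feedback loop in network terms, here’s a minimal sketch of a telemetry-driven steering loop. The path names, threshold, and polling function are all hypothetical stand-ins for whatever sensor or management platform actually supplies the data.

```python
# Minimal sketch of telemetry-driven "steering". random stands in for a
# real data source (SNMP polling, streaming telemetry, flow records, ...).
import random
import time

SATURATION_THRESHOLD = 0.80          # steer away above 80% utilization
PATHS = ["uplink-primary", "uplink-backup"]

def link_utilization(path):
    # Hypothetical telemetry query; returns utilization as a fraction.
    return random.uniform(0.0, 1.0)

active = PATHS[0]
for _ in range(10):                  # a few polling cycles for the demo
    util = link_utilization(active)
    if util > SATURATION_THRESHOLD:
        # The network's "deer in the road": a saturated uplink.
        active = PATHS[1] if active == PATHS[0] else PATHS[0]
        print(f"utilization {util:.0%}, steering traffic onto {active}")
    time.sleep(0.1)
```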

In addition, the idea of a driverless software defined car is exciting because of the disruption that it represents. Once we don’t need a cockpit with a steering mechanism or access to propulsion mechanisms directly at our fingertips (or feet), we can go about breaking apart the historical construction of a car and make it a more friendly concept. Why do I need to look forward when my car does all the work? Why can’t I twist the seats 90 degrees and facilitate conversation among passengers while the automation is occurring? Why can’t I put in an uplink to allow me to get work done or a phone to make calls now that my attention doesn’t need to be focused on the road? When the car is doing all the driving, there are a multitude of ideas that need to be reconsidered for how we design the automobile.

When I started bouncing this idea off of some people, Stephen Foskett (@SFoskett) mentioned to me that some people would take issue with my idea of a software defined car because it’s a self-contained, closed ecosystem. What about a software defined network that collects data and provides for greater visibility to the management layer? Doesn’t it need to be a larger system in order to really take advantage of software definition? That’s the beauty of the software defined piece. Once we have a vehicle generating large amounts of actionable data, we can collect that data and do something with it. Google has traffic data in their Maps application. What if that data was being fed in real time by the cars themselves? What if the car could automatically recognize traffic congestion and reroute on the fly instead of merely suggesting that the driver take an alternate path? What if we could load balance our highway system efficiently because the cars are getting real-time data about conditions? Now Google has the capability to use their software defined endpoints to reconfigure as needed.

What if that same car could automatically sense that you were driving to the airport and check you into your flight based on arrival time without the need to intervene? How about inputting a destination, such as a restaurant or a sporting event, and having the car instantly reserve a parking spot near the venue based on reports from cars already in the lot or from sensors that report the number of empty spots in a parking garage nearby? The possibilities are really limitless even in this first or second stage. The key is that we capture the generated data from the software pieces that we install on top of existing hardware. We never knew we could do this because the interface into the data never existed prior to creating a software layer that we could interact with. When you look at what Google has already done with their recent acquisition of Waze, the social GPS and map application, it does look like Google is starting down this path. Why rely on drivers to update the Waze database when the cars can do it for you?
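
As a sketch of that cars-as-sensors idea, here’s roughly what the aggregation side might look like. The road segments, speeds, and congestion threshold are all invented for illustration.

```python
# Invented data for illustration: each report is (road segment, mph),
# the kind of feed the cars themselves could supply in real time.
from collections import defaultdict
from statistics import mean

reports = [
    ("I-40 EB mile 152", 12),
    ("I-40 EB mile 152", 9),
    ("I-40 EB mile 152", 15),
    ("I-35 NB mile 120", 64),
]

CONGESTED_BELOW_MPH = 25             # arbitrary congestion threshold

# Group live reports by segment, as a maps backend might.
by_segment = defaultdict(list)
for segment, mph in reports:
    by_segment[segment].append(mph)

for segment, speeds in by_segment.items():
    avg = mean(speeds)
    if avg < CONGESTED_BELOW_MPH:
        # Here a service could push reroutes to every car upstream.
        print(f"{segment}: congested ({avg:.0f} mph avg), reroute advised")
```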


Tom’s Take

I have spent a very large portion of my IT career driving to and from customer sites. The idea of a driverless car is appealing, but it doesn’t really help me to just sit over in the passenger seat and watch a computer program do my old job. I still like driving long distances to a certain extent. I don’t want to lose that. It’s when I can start using the software layer to enable things that I never thought possible that I start realizing the potential. Rather than just looking at the driverless software defined car as a replacement for drivers, the key is to look at the potential that it unlocks to be more efficient and make me more productive on the road. That’s the key takeaway for us all. Those lessons can also be applied to the world of software defined networking/storage/data center as well. We just have to remember to look past the hype and marketing and realize what the future holds in store.

A Chrome-Plated Workout

I’ve had my CR-48 for about two weeks now, and I’ve put it through its paces.  I used it to take notes at Tech Field Day 5.  I set up an IRC channel for people to ask questions during the event.  I’ve written numerous blog posts on the little laptop.  I’ve used it to chat with people halfway around the world.  All in all, I’m impressed with the unit.  That’s not to say that everything about it has me thrilled.

The Good

I like the fact that the CR-48 is instantly on when I lift the lid.  The SSD and the lightweight OS team up to make it quite easy to just grab and fire up to start using for notes or web surfing.  It’s not quite as fast as an iPad, but much faster than hauling out my Lenovo w701 behemoth.  I like having the CR-48 handy for things I would rather do with a keyboard.

More than a few people have remarked to me that it looks “just like a MacBook”.  And I’ve come to see it much like a MacBook Air.  Obviously it’s not as sleek as Apple’s little wonder, but I like the form factor and the screen resolution much better than some of the other netbooks I’ve used.  It doesn’t feel cramped and toy-like.  In fact, it feels more Mac-like than any other laptop I’ve used.  I’m sure that is intentional on the part of Google.

Having the 3G Verizon radio is pretty handy in situations where there is no Wi-Fi available.  More than once I found myself unable to connect to a certificate-based wireless system (a known issue) or stuck in a place with terrible reception.  With the CR-48, I just switch over to the 3G radio and keep plugging away.  The 100MB allowed with the trial is a little anemic for heavy-duty use, but the bigger plans seem fairly priced should I find the need to upgrade to one.  When I tried activating the radio over the phone, the Verizon rep made sure to point out that they had plans available in all sizes for me to purchase, but somehow skipped over the part about me having 100MB for free each month.  Luckily I read the instructions.

The Bad

The CR-48 isn’t without its annoyances.  The touchpad is probably the most persistent issue I had.  The tap-to-click functionality found on most trackpads was bordering on annoying for me.  I’m a touch typist with hands the size of a gorilla’s.  I tend to rest my thumbs at the bottom of the keyboard as I type, and on this laptop that means brushing the trackpad more often than not.  With the default settings, I often found myself sending e-mail or canceling tweets without realizing what happened, or my cursor shooting over to a random section of my blog post and my words spilling into other thoughts.  I finally gave up and disabled the tap-to-click setup, ironically making it more like a MacBook.

I also made the mistake of letting the battery run down all the way.  It was already low from use and I let it go to sleep without plugging it in.  Sure enough, it drained down and wouldn’t power back up.  Once I plugged it in I was able to use it, but it wouldn’t charge no matter how long I left it plugged in.  It took some searching on the Internet to find an acceptable solution (of which there appear to be many) before settling on a combination of things.  I pulled the battery for about 2 minutes, then reattached it and CAREFULLY plugged the adapter back in.  As soon as I saw the orange charging light come on, I finished pushing the charger all the way in and it worked for me after that.  There are rumors that the port and/or the charger are a little substandard, so this is something that is going to bear a little more inspection.  Speaking of the charger, the fact that it uses a three-pronged plug is a little annoying when I’m trying to find a place to plug in.  I’ve taken to carrying a little 2-prong grounding adapter in my bag just so I can plug in anywhere.  Not an expensive solution, but something I wish I didn’t have to do.

One final annoyance was a minor issue that turned into a humorous solution.  When I unboxed the unit and fired it up the first time, it seemed that playing two audio streams on top of each other would cause the speaker to short out and sound like I was choking a robot.  There was evidently a fix for it, but there seemed to be an issue with the netbook pulling the new update as it was only a point release and very minor.  Every time I checked the system updater, it told me the system was up to date.  The fix I found on the Internet suggested to click the Update button repeatedly until the system finally recognized the new update.  Literally, I clicked 50 times in order to get the update.  It did fix my audio issues, but you would think the update system would recognize a new release was out without me needing to be spastic with the update button.

Tom’s Take

Overall, I’m thrilled with the CR-48 after a couple of weeks of exposure.  I keep it in my bag at all times, ready to go when necessary.  When I head back to Wireless Field Day in March, I’m planning on leaving the behemoth behind and only taking my CR-48 and my iPad for connectivity.  I figure cutting down on the extra 12 pounds of weight will be good for my posture, and not having to haul an extra laptop out at the TSA Security and Prostate Screening Checkpoint is always welcome to not only myself but the other passengers as well.  I’m also debating whether or not to flip over into developer mode to see if that has any additional tricks I can try out.  I don’t know if it’ll increase my productivity any more, but having a few extra knobs and switches to play with is never a bad thing.