You may recall a few months back when I wrote an article about Google Glass and how I thought that the first generation of this wearable computing device was aimed way too low in terms of target applications. When Google started a grassroots campaign to hand out Glass units to social media influencers, I retweeted my blog post with the hashtag #IfIHadGlass with the idea that someone at Google might see it and realize they needed to set their sights higher. Funny enough, someone at Google did see the tweet and told me that I was in the running to be offered a development unit of Glass. All for driving a bit of traffic to my blog.
About a month ago, I got the magic DM from Google Glass saying that I could go online and request my unit along with a snazzy carrying case and a sunglass lens if I wanted. I only had to pony up $1500US for the privilege. Oh, and I could only pick it up at a secured Google facility. I don’t even know where the closest one of those to Oklahoma might be. After weighing the whole thing carefully, I made my decision.
I won’t be participating in generation one of Google Glass.
I had plenty of reasons. I’m not averse to participating in development trials. I use beta software all the time. I signed up for the last wave of Google CR-48 Chromebooks. In fact, I still use that woefully underpowered CR-48 to this day. But Google Glass represents something entirely different from those beta opportunities.
From Entry to Profit
Google isn’t creating a barrier to entry through their usual methods of restricting supply or making the program invite-only. Instead, they are trying to restrict Glass users to those with a spare $1500 to drop on a late alpha/early beta piece of hardware. I also think they are trying to recoup the development costs of the project via the early adopters. Google has gone from being an awesome development shop to a company acutely aware of the bottom line. Google has laid down some very stringent rules to determine what can be shown on Glass. Advertising is verboten. Anyone want to bet that Google finds a way to work AdWords in somewhere? If you are relying on your tried-and-true user base of developers to recover your costs before you even release the product to the masses, you’ve missed big time.
Eye of the Beholder
One of the other things that turned me off about the first generation of Glass is that the technology isn’t quite where I thought it would be. After examining what Glass is capable of doing from a projection standpoint, I found that many of my initial conceptions about the unit were way off. I suppose that has a lot to do with what I thought Google was really working on. Instead of finding a way to track eye movement inside a specific area and deliver results based on where the user’s eye is focused, Google chose to simply project a virtual screen slightly off center from the user’s field of vision. That’s a great win for version one. But it doesn’t really accomplish what I thought Google Glass should do. The idea of a wearable eyeglass computer isn’t that useful to me if the field of vision is limited to a camera glued to the side of a pair of eyeglass frames. Without the ability to track the eye movements of a user, it’s simply not possible to filter the huge amount of information being taken in by the user. If Google could implement a function to see what the user is focusing on, I’m sure that some companies would pay *huge* development dollars to track that information or serve some kind of augmented reality advertisement for a competing brand. Just go watch Minority Report if you want to know what I’m thinking about.
Mind the Leash
According to my friend Blake Krone (@BlakeKrone), who just posted his first Google Glass update, the unit is great for taking pictures and video without the need to dig out a camera or take your eyes off the subject for more than the half second it takes to activate the Glass camera with a voice command. Once you’ve gotten those shiny new pictures ready to upload to Google+, how are you going to do it? There’s the rub in the first generation Glass units. You have to tether Glass to some kind of mobile hotspot in order to upload photos anywhere outside of WiFi range. I guess trying to cram a cellular radio into the little plastic frame was more than the engineers could muster in the initial prototype. Many will stop me here and interject that WiFi hotspot access is fairly common now. All you have to do is grab a cup of coffee from Starbucks or a milkshake from McDonald’s and let your photos upload to GoogleSpace. How does that work from a mountaintop? What if I had a video that I wanted to post right away from the middle of the ocean? How exactly do you livestream video while skydiving over the Moscone Center during Google I/O? Here’s a hint: you plant engineers on the roof with parabolic dishes to broadcast WiFi straight up in the air. Not as magical when you strip all the layers away. For me, the need to upgrade my data plan to include tethering just so I could upload those pics and videos outside my house was another non-starter. Maybe the second generation of Glass will have figured out how to make a cellular radio small enough to fit inside a pair of glasses.
Google Glass has made some people deliriously happy. They have a computer strapped to their face and they are hacking away to create applications that are going to change the way we interact with software and systems in general. Those people are a lot smarter than me. I’m not a developer. I’m not a visionary. I just call things like I see them. To me, Google Glass was shoved out the door a generation too early to be of real use. It was created to show that Google is still on the cutting edge of hardware development even though no one else was developing wearable computing. On the other hand, Google did paint a huge target on their face. When the genie came out of the bottle, other companies like Apple and Pebble started developing their own take on wearable computing. Sure, it’s not as striking as a pair of sci-fi goggles. But the evolutionary steps here lead to the slimming down of technology to the point where those iPhones and Samsung Galaxy S 4 Whatevers can fit comfortably into the frame of any designer eyeglasses. And that’s when the real money is going to be made. Not by gouging developers or requiring your users to be chained to a smartphone.
If you want to check out what Glass looks like from the perspective of someone who isn’t going to wear them in the shower, check out Blake’s Google Glass blog at http://FromEyeLevel.com
I can completely understand why some of us won’t be throwing cash at Google to beta test their products, but I feel the other couple of reasons miss the point a bit.
Is it a good enough reason to level a criticism at Glass for not doing eye-tracking when that was something you had, possibly naively, assumed or hoped it would do? Shouldn’t we be judging new technologies based on what they can do, rather than what they can’t? The original iPhone shipped without 3G even though that technology was available at the time, yet it was still enough of a game-changer for people to buy it in the millions.
Also, I seriously disagree on Glass becoming yet another mobile device with a data connection for uploading its content. It’ll make it more expensive to make, buy and use (additional contract) and consume more battery. Plus, griping about not being able to upload a picture from a mountain is a bit of a cheap shot when the same can be said about a smartphone in that situation; that’s a mobile coverage issue, not a Glass limitation.
Tethering is the only option that makes sense – I get it free with my £6 ($9) a month SIM-only contract here in the UK, and it’s what smartwatches use now and will use going forward. I personally think having two data contracts for a smartphone and a tablet is silly; imagine if that were to stretch to three including Glass, or even four with an iWatch or something.
That only really leaves the issue of money and the $1500. There’s no two ways about it: that’s certainly a lot for a beta product you’re expected to go and collect and then give feedback on.