Are We The Problem With Wearables?

Something, Something, Apple Watch.

Oh, yeah. There needs to be substance in a wearable blog post. Not just product names.

Wearables are the next big product category driving innovation. The advances being made in screen clarity, battery life, and component miniaturization are being felt across the rest of the device market. I doubt Apple would have been able to make the new MacBook logic board as small as it is without a few things learned from trying to cram transistors into a watch case. But are we the people sending the wrong messages about wearable technology?

The Little Computer That Could

If you look at the biggest driving factor behind technology today, it comes down to size. Technology companies are making things smaller and lighter with every iteration. If the words thinnest and lightest don’t appear in your presentation at least twice then you aren’t on the cutting edge. But is this drive because tech companies want to make things tiny? Or is it more that consumers are driving them that way?

Yes, people the world over are now complaining that technology should have other attributes besides size and weight. A large contingent says that battery life is now more important than anything else. But would you be okay with lugging around an extra pound of weight that equates to four more hours of typing time? Would you give up your 13-inch laptop in favor of a 17-inch model if the battery life were doubled?

People send mixed signals about the size and shape of technology all the time. We want it fast, small, light, powerful, and with the ability to run forever. Tech companies give us as much as they can, but tradeoffs must be made. Light and powerful usually means horrible battery life. Great battery life and low weight often means terrible performance. No consumer has ever said, “This product is exactly what I wanted with regards to battery, power, weight, and price.”

Where Wearables Dare

As Jony Ive said this week, “The keyboard dictated the size of the new MacBook.” He’s absolutely right. Laptops and desktops have a minimum size that is dictated by the screen and keyboard. Has anyone tried typing on a keyboard cover for an iPad? How about an iPad Mini cover? It’s a miserable experience, even if you don’t have sausage fingers like me. When the size of the device dictates the keyboard, you are forced to make compromises that impact user experience.

With wearables, the bar shifts away from input to usability. No wearable watch has a keyboard, virtual or otherwise. Instead, voice control is the input method. Spoken words drive communication beyond navigation. For some applications, like phone calls and text messages, this is preferred. But I can’t imagine typing a whole blog post or coding on a watch. Nor should I. The wearable category is not designed for hard-core computing use.

That’s where we’re getting it wrong. Google Glass was never designed to replace a laptop. Apple Watch isn’t going to replace an iPhone, let alone an iMac. Wearable devices augment our technology workflows instead of displacing them. Those fancy monocles you see in sci-fi movies aren’t the entire computer. They are just an interface to a larger processor on the back end. Trying to shrink a laptop to the size of a silver dollar is impossible. If it were, we’d have that by now.

Wearables are designed to give you information at a glance. Google Glass allows you to see notifications easily and access information. Smart watches are designed to give notifications and quick, digestible snippets of need-to-know information. Yes, you do have a phone for that kind of thing. But my friend Colin McNamara said it best:

“I can glance at my watch and get a notification without getting sucked into my phone.”


Tom’s Take

That’s what makes the wearable market so important. It’s not having the processing power of a Cray supercomputer on your arm or attached to your head. It’s having that power available when you need it, and having the control to get the information you need without other distractions. Wearables free you up to do other things, like building or creating or simply paying attention to something. Wearables make technology unobtrusive, whether it’s a quick text message or tracking the number of steps you’ve taken today. Sci-fi is filled with pictures of amazing technology all designed to do one thing – let us be human beings. We drive the direction of product development. Instead of crying for lighter, faster, and longer all the time, we should focus on building the right interface for what we need and tell the manufacturers to build around that.

 

Why I Won’t Be Getting Google Glass


You may recall a few months back when I wrote an article talking about Google Glass and how I thought the first generation of this wearable computing device was aimed way too low in terms of target applications. When Google started a grassroots campaign to hand out Glass units to social media influencers, I retweeted my blog post with the hashtag #IfIHadGlass with the idea that someone at Google might see it and realize they needed to set their sights higher. Funny enough, someone at Google did see the tweet and told me I was in the running to be offered a development unit of Glass. All for driving a bit of traffic to my blog.

About a month ago, I got the magic DM from Google Glass saying that I could go online and request my unit, along with a snazzy carrying case and a sunglass lens if I wanted. I only had to pony up $1,500 US for the privilege. Oh, and I could only pick it up at a secured Google facility. I don’t even know where the closest one of those to Oklahoma might be. After weighing the whole thing carefully, I made my decision.

I won’t be participating in generation one of Google Glass.

I had plenty of reasons. I’m not averse to participating in development trials. I use beta software all the time. I signed up for the last wave of Google CR-48 Chromebooks. In fact, I still use that woefully underpowered CR-48 to this day. But Google Glass represents something entirely different from those beta opportunities.

From Entry to Profit

Google isn’t creating a barrier to entry through their usual methods of restricting supply or making the program invite-only. Instead, they are trying to restrict Glass users to those with a spare $1,500 to drop on a late alpha/early beta piece of hardware. I also think they are trying to recoup the development costs of the project via the early adopters. Google has gone from being an awesome development shop to a company acutely aware of the bottom line. Google has laid down some very stringent rules about what can be shown on Glass. Advertising is verboten. Anyone want to bet that Google finds a way to work AdWords in somewhere? If you are relying on your tried-and-true user base of developers to recover your costs before you even release the product to the masses, you’ve missed big time.

Eye of the Beholder

One of the other things that turned me off about the first generation of Glass is the technology not quite being where I thought it would be. After examining what Glass is capable of doing from a projection standpoint, many of my initial conceptions about the unit were way off. I suppose that has a lot to do with what I thought Google was really working on. Instead of finding a way to track eye movement inside of a specific area and deliver results based on where the user’s eye is focused, Google chose to simply project a virtual screen slightly off center from the user’s field of vision. That’s a great win for version one, but it doesn’t really accomplish what I thought Google Glass should do. The idea of a wearable eyeglass computer isn’t that useful to me if the field of vision is limited to a camera glued to the side of a pair of eyeglass frames. Without the ability to track the eye movements of a user, it’s simply not possible to filter the huge amount of information being taken in by the user. If Google could implement a function to see what the user is focusing on, I’m sure some companies would pay *huge* development dollars to track that information or serve some kind of augmented reality advertisement pitching an alternative to that brand. Just go watch Minority Report if you want to know what I’m thinking about.

Mind the Leash

According to my friend Blake Krone (@BlakeKrone), who just posted his first Google Glass update, the unit is great for taking pictures and video without the need to dig out a camera or take your eyes off the subject for more than the half second it takes to activate the Glass camera with a voice command. Once you’ve gotten those shiny new pictures ready to upload to Google+, how are you going to do it? There’s the rub in the first generation Glass units. You have to tether Glass to some kind of mobile hotspot in order to upload photos outside of a WiFi hotspot. I guess trying to cram a cellular radio into the little plastic frame was more than the engineers could muster in the initial prototype. Many will stop me here and interject that WiFi hotspot access is fairly common now. All you have to do is grab a cup of coffee from Starbucks or a milkshake from McDonald’s and let your photos upload to GoogleSpace. How does that work from a mountaintop? What if I had a video that I wanted to post right away from the middle of the ocean? How exactly do you livestream video while skydiving over the Moscone Center during Google I/O? Here’s a hint: you plant engineers on the roof with parabolic dishes to broadcast WiFi straight up in the air. Not as magical when you strip all the layers away. For me, the need to upgrade my data plan to include tethering just so I could upload those pics and videos outside my house was another non-starter. Maybe the second generation of Glass will have figured out how to make a cellular radio small enough to fit inside a pair of glasses.

Tom’s Take

Google Glass has made some people deliriously happy. They have a computer strapped to their face, and they are hacking away to create applications that are going to change the way we interact with software and systems in general. Those people are a lot smarter than me. I’m not a developer. I’m not a visionary. I just call things like I see them. To me, Google Glass was shoved out the door a generation too early to be of real use. It was created to show that Google is still on the cutting edge of hardware development even though no one else was developing wearable computing. On the other hand, Google did paint a huge target on their face. When the genie came out of the bottle, other companies like Apple and Pebble started developing their own take on wearable computing. Sure, it’s not as striking as a pair of sci-fi goggles. But evolutionary steps here lead to the slimming down of technology to the point where those iPhones and Samsung Galaxy S 4 Whatevers can fit comfortably into the frame of any designer eyeglasses. And that’s when the real money is going to be made. Not by gouging developers or requiring your users to be chained to a smartphone.

If you want to see what Glass looks like from the perspective of someone who isn’t going to wear them in the shower, check out Blake’s Google Glass blog at http://FromEyeLevel.com