Are We The Problem With Wearables?

Something, Something, Apple Watch.

Oh, yeah. There needs to be substance in a wearable blog post. Not just product names.

Wearables are the next big product category driving innovation. The advances being made in screen clarity, battery life, and component miniaturization are being felt across the rest of the device market. I doubt Apple would have been able to make the new MacBook logic board as small as it is without a few things learned from trying to cram transistors into a watch case. But are we, the people, sending the wrong messages about wearable technology?

The Little Computer That Could

If you look at the biggest driving factor behind technology today, it comes down to size. Technology companies are making things smaller and lighter with every iteration. If the words “thinnest” and “lightest” don’t appear in your presentation at least twice, then you aren’t on the cutting edge. But is this drive because tech companies want to make things tiny? Or is it more that consumers are driving them that way?

Yes, people the world over are now complaining that technology should have other attributes besides size and weight. A large contingent says that battery life is now more important than anything else. But would you be okay with lugging around an extra pound of weight that equates to four more hours of typing time? Would you give up your 13-inch laptop in favor of a 17-inch model if the battery life were doubled?

People send mixed signals about the size and shape of technology all the time. We want it fast, small, light, powerful, and with the ability to run forever. Tech companies give us as much as they can, but tradeoffs must be made. Light and powerful usually means horrible battery life. Great battery life and low weight often means terrible performance. No consumer has ever said, “This product is exactly what I wanted with regard to battery, power, weight, and price.”

Where Wearables Dare

As Jony Ive said this week, “The keyboard dictated the size of the new MacBook.” He’s absolutely right. Laptops and desktops have a minimum size that is dictated by the screen and keyboard. Has anyone tried typing on a keyboard cover for an iPad? How about an iPad Mini cover? It’s a miserable experience, even if you don’t have sausage fingers like me. When the size of the device dictates the keyboard, you are forced to make compromises that impact user experience.

With wearables, the bar shifts away from input to usability. No wearable watch has a keyboard, virtual or otherwise. Instead, voice control is the input method, with spoken words driving everything beyond simple navigation. For some applications, like phone calls and text messages, this is preferred. But I can’t imagine typing a whole blog post or coding on a watch. Nor should I. The wearable category is not designed for hard-core computing use.

That’s where we’re getting it wrong. Google Glass was never designed to replace a laptop. Apple Watch isn’t going to replace an iPhone, let alone an iMac. Wearable devices augment our technology workflows instead of displacing them. Those fancy monocles you see in sci-fi movies aren’t the entire computer. They are just an interface to a larger processor on the back end. Shrinking a laptop to the size of a silver dollar is impossible. If it were possible, we’d have it by now.

Wearables are designed to give you information at a glance. Google Glass allows you to see notifications easily and access information. Smart watches are designed to give notifications and quick, digestible snippets of need-to-know information. Yes, you do have a phone for that kind of thing. But my friend Colin McNamara said it best:

I can glance at my watch and get a notification without getting sucked into my phone.


Tom’s Take

That’s what makes the wearable market so important. It’s not having the processing power of a Cray supercomputer on your arm or attached to your head. It’s having that power available when you need it, yet having the control to get the information you need without other distractions. Wearables free you up to do other things. Like building, or creating, or simply paying attention to something else. Wearables make technology unobtrusive, whether it’s a quick text message or tracking the number of steps you’ve taken today. Sci-Fi is filled with pictures of amazing technology all designed to do one thing – let us be human beings. We drive the direction of product development. Instead of crying for lighter, faster, and longer all the time, we should instead focus on building the right interface for what we need and tell the manufacturers to build around that.


Why I Won’t Be Getting Google Glass


You may recall a few months back when I wrote an article talking about Google Glass and how I thought that the first generation of this wearable computing device was aimed way too low in terms of target applications. When Google started a grassroots campaign to hand out Glass units to social media influencers, I retweeted my blog post with the hashtag #IfIHadGlass with the idea that someone at Google might see it and realize they needed to set their sights higher. Funny enough, someone at Google did see the tweet and told me that I was in the running to be offered a development unit of Glass. All for driving a bit of traffic to my blog.

About a month ago, I got the magic DM from Google Glass saying that I could go online and request my unit along with a snazzy carrying case and a sunglass lens if I wanted. I only had to pony up $1,500 US for the privilege. Oh, and I could only pick it up at a secured Google facility. I don’t even know where the closest one of those to Oklahoma might be. After weighing the whole thing carefully, I made my decision.

I won’t be participating in generation one of Google Glass.

I had plenty of reasons. I’m not averse to participating in development trials. I use beta software all the time. I signed up for the last wave of Google CR-48 Chromebooks. In fact, I still use that woefully underpowered CR-48 to this day. But Google Glass represents something entirely different from those beta opportunities.

From Entry to Profit

Google isn’t creating a barrier to entry through their usual methods of restricting supply or making the program invite only. Instead, they are trying to restrict Glass users to those with a spare $1,500 to drop on a late alpha/early beta piece of hardware. I also think they are trying to recoup the development costs of the project via the early adopters. Google has gone from being an awesome development shop to a company acutely aware of the bottom line. Google has laid down some very stringent rules to determine what can be shown on Glass. Advertising is verboten. Anyone want to bet that Google finds a way to work AdWords in somewhere? If you are relying on your tried-and-true user base of developers to recover your costs before you even release the product to the masses, you’ve missed big time.

Eye of the Beholder

One of the other things that turned me off about the first generation of Glass is the technology not quite being where I thought it would be. After examining what Glass is capable of doing from a projection standpoint, many of my initial conceptions about the unit were way off. I suppose that has a lot to do with what I thought Google was really working on. Instead of finding a way to track eye movement inside a specific area and deliver results based on where the user’s eye is focused, Google chose to simply project a virtual screen slightly off center in the user’s field of vision. That’s a great win for version one. But it doesn’t really accomplish what I thought Google Glass should do. The idea of a wearable eyeglass computer isn’t that useful to me if the field of vision is limited to a camera glued to the side of a pair of eyeglass frames. Without the ability to track the eye movements of a user, it’s simply not possible to filter the huge amount of information being taken in by the user. If Google could implement a function to see what the user is focusing on, I’m sure that some companies would pay *huge* development dollars to track that information or serve an augmented reality advertisement for a competing brand. Just go and watch Minority Report if you want to know what I’m thinking about.
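To make the distinction concrete, here’s a toy sketch of the difference between the two approaches. Everything below is hypothetical: Glass exposes no eye-tracking API, so the gaze sampler, the object map, and the lookup call are all simulated stand-ins for hardware and services that don’t exist yet, not real interfaces.

```python
# Toy illustration of gaze-driven filtering versus a fixed projection.
# Every "device" function here is simulated -- none of this is a real API.

import random

# Pretend the camera has already identified objects by screen region.
SCENE = {
    (0, 0): "coffee mug",
    (0, 1): "TARDIS USB hub",
    (1, 0): "keyboard",
    (1, 1): "monitor",
}

def sample_gaze():
    """Stand-in for an eye tracker: returns the region the user is focused on."""
    return random.choice(list(SCENE.keys()))

def lookup(obj):
    """Stand-in for a Wikipedia or knowledge-graph query."""
    return f"Summary card: {obj}"

def hud_render(text):
    """Stand-in for drawing on the off-center Glass display."""
    print(f"[HUD] {text}")

# What Glass does today: project the same feed no matter where you look.
hud_render("Notification: 3 new messages")

# What gaze tracking would allow: only surface info about the object in focus.
for _ in range(3):
    hud_render(lookup(SCENE[sample_gaze()]))
```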

Mind the Leash

According to my friend Blake Krone (@BlakeKrone), who just posted his first Google Glass update, the unit is great for taking pictures and video without the need to dig out a camera or take your eyes off the subject for more than the half second it takes to activate the Glass camera with a voice command. Once you’ve gotten those shiny new pictures ready to upload to Google+, how are you going to do it? There’s the rub in the first generation Glass units. You have to tether Glass to some kind of mobile hotspot to upload photos whenever you’re away from WiFi. I guess trying to cram a cellular radio into the little plastic frame was more than the engineers could muster in the initial prototype.

Many will stop me here and interject that WiFi hotspot access is fairly common now. All you have to do is grab a cup of coffee from Starbucks or a milkshake from McDonald’s and let your photos upload to GoogleSpace. How does that work from a mountain top? What if I had a video that I wanted to post right away from the middle of the ocean? How exactly do you livestream video while skydiving over the Moscone Center during Google I/O? Here’s a hint: you plant engineers on the roof with parabolic dishes to broadcast WiFi straight up in the air. Not as magical when you strip all the layers away. For me, the need to upgrade my data plan to include tethering just so I could upload those pics and videos outside my house was another non-starter. Maybe the second generation of Glass will have figured out how to make a cellular radio small enough to fit inside a pair of glasses.

Tom’s Take

Google Glass has made some people deliriously happy. They have a computer strapped to their face and they are hacking away to create applications that are going to change the way we interact with software and systems in general. Those people are a lot smarter than me. I’m not a developer. I’m not a visionary. I just call things like I see them. To me, Google Glass was shoved out the door a generation too early to be of real use. It was created to show that Google is still on the cutting edge of hardware development even though no one else was developing wearable computing. On the other hand, Google did paint a huge target on their face. When the genie came out of the bottle, other companies like Apple and Pebble started developing their own take on wearable computing. Sure, it’s not as striking as a pair of sci-fi goggles. But evolutionary steps here lead to the slimming down of technology to the point where those iPhones and Samsung Galaxy S 4 Whatevers can fit comfortably into the frame of any designer eyeglasses. And that’s when the real money is going to be made. Not by gouging developers or requiring your users to be chained to a smartphone.

If you want to see what Glass looks like from the perspective of someone who isn’t going to wear them in the shower, check out Blake’s Google Glass blog at http://FromEyeLevel.com

The Google Glass Ceiling

I finally got around to watching the Charlie Rose interview with Sebastian Thrun. Thrun is behind a lot of very promising technology, not the least of which is the Google Glass project. Like many, I kind of put this out of my mind at the outset, dismissing it as a horrible fashion trend at best and a terribly complicated idea at worst. Having seen nothing beyond the concept videos that are currently getting lots of airplay, I was really tepid about the whole concept and wanted to see it baked a little more before I bought into the idea of carrying my smartphone around on my head instead of my hip. Then I read another interesting piece about the future of Google and Facebook. In and of itself, the blog post has some intriguing prognostications about the directions that Facebook and Google are headed. But one particular quote caught my eye in both the interview and the article. Thrun says that the most compelling use case for Google Glass right now is taking pictures and sharing them with people on Google+. Charlie Rose even asked about other types of applications, like augmented reality. Thrun dismissed these in favor of talking about how easy it was to take pictures by blinking and nodding your head. Okay, I’m going to have to take a moment here…

Sebastian Thrun, have you lost your mind?!?

Seriously. You have a project sitting on your ears that has the opportunity to change the way that people like me view the world, and the best use case you can think of today is taking pictures of ice cream and posting it to a dying social network? Does. Not. Compute. Honestly, I can’t even begin to describe how utterly dumbstruck I am by this. After spending a little more time looking into Google Glass, I’m giddy with anticipation at what I can do with this kind of idea. However, it appears that the current guardians of the technology see fit to shoehorn this paradigm-shifting concept into a camera case.

When I think of augmented reality applications, I think of the astronomy apps I see on the iPad that let me pick out constellations with my kids. I can see the ones in the Southern Hemisphere just by pointing my Fruity Tablet at the ground. Think of programs like Word Lens that allow me to instantly translate signs in a foreign language into something I can understand. That’s the technology we have today that you can buy from the App Store. Seriously. No funky-looking safety glasses required. Just imagine that technology in a form factor where it’s always available without the need to take out your phone or tablet. That’s what we can do at this very minute. That doesn’t take much imagination at all. Google Glass could be the springboard that launches so much more.

Imagine having an instant portal to a place like Wikipedia where all I have to do is look at an object and I can instantly find out everything I need to know about it. No typing or dictating. All I need to do is glance at the TARDIS USB hub on my desk and I am instantly linked to the Wikipedia TARDIS page. Take the Word Lens idea one step further. Now, instead of only reading signs, let the microphone on Google Glass pick up the foreign language being spoken and provide real-time translation in subtitles on the Glass UI. Instant understanding, with the possibility of translating your reply back into the speaker’s language or suggesting phrases to respond with. How about the ability to display video on the UI for things like step-by-step instructions for disassembling objects or repairing things? I’d even love to have a Twitter feed displayed just outside my field of vision that I can scroll through with my eye movements. That way, I can keep up with what’s going on that’s important to me without needing to lift a finger. The possibilities are endless for something like this. If only you could see past the ability to post pointless pictures to your Picasa account.
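The translation idea in particular is a simple pipeline: microphone to speech recognition to translation to a subtitle on the display. Here’s a mock-up of that loop; the capture, translation, and rendering stages are placeholders I’ve invented for illustration, not real Glass or Google APIs.

```python
# Mock-up of a real-time subtitle pipeline:
#   microphone -> speech recognition -> translation -> subtitle on the HUD.
# All three stages are simulated with canned data for illustration.

import time

def captured_phrases():
    """Stand-in for streaming speech recognition on the microphone input."""
    yield from ["¿Dónde está la estación?", "Gracias por su ayuda."]

def translate(text, target="en"):
    """Stand-in for a translation service call."""
    canned = {
        "¿Dónde está la estación?": "Where is the station?",
        "Gracias por su ayuda.": "Thank you for your help.",
    }
    return canned.get(text, "[translation unavailable]")

def show_subtitle(text):
    """Stand-in for rendering a subtitle just below the field of vision."""
    print(f"[subtitle] {text}")

for heard in captured_phrases():
    show_subtitle(translate(heard))
    time.sleep(0.5)  # pace subtitles roughly with the speaker
```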

There are downsides to Google Glass too. People are having a hard time interacting as it is today with the lure of instant information at their fingertips. Imagine how bad it will be when they don’t have to make the effort of pulling out their phone. I can see lots of issues with people walking into doors or parked cars because they were too busy paying attention to their Glass information to watch where they were walking. Google’s web search page has made finding information a fairly trivial matter even today. Imagine how much lazier people will be if all they have to do is glance at the web search and ask, “How many ounces are in a pound?” Things will no longer need to be memorized, only found. It’s like my teachers telling me not to be reliant on a calculator for doing math. Now, everyone has a calculator on their phone.

Tom’s Take

In 2012, the amount of amazing technology that we take for granted astonishes me to no end. If you had told me in the 1990s that we would have a mini computer in our pockets with access to the whole of human knowledge, one that lets us communicate with friends and peers around the world instantly, I’d have scoffed at your pie-in-the-sky dreams. Today, I don’t think twice about it. I no longer need an alarm clock, GPS receiver, pocket camera, or calculator. Sadly, the kind of thinking that has allowed technology like this to exist doesn’t appear to be applied to new concepts like Google Glass. The powers that be at GoogleX can’t seem to understand the gold mine they’re sitting on. Sure, applying the current concepts of sharing pictures might help ease the transition of new users to this UI concept. But I would hazard that people are going to understand what to do with Google Glass well beyond taking a snapshot of their lunch sushi and sharing it with their Foodies circle. Instead, show us the real groundbreaking stuff, like the ideas I’ve already discussed. Go read some science fiction, or watch movies like The Terminator, where the T-800s have a very similar UI to what you’re developing. That’s where people want to see the future headed. Not reinventing the Polaroid camera for the fifth time this year. And if you’re having that much trouble coming up with cool ideas or ways to sell Google Glass to the nerds out there today, give me a call. I can promise you we’ll blast through that glass ceiling you’ve created for yourself like the SpaceX Dragon lifting off for the first time. I may not be able to code as well as other people at GoogleX, but I can promise you I’ve got the vision for your project.