Virtual Reality and Skeuomorphism

Remember skeuomorphism? It’s the idea that the user interface of a program needs to resemble a physical device to help people understand how to use it. Skeuomorphism is not just a software thing, however. Things like faux wooden panels on cars and molded clay rivets on pottery are great examples of physical skeuomorphism. However, most people will recall the way Apple used skeuomorphism in iOS when they hear the term.

Scott Forstall was the genius behind the skeuomorphism in iOS for many years. Things like the fake leather header in the Contacts app, the wooden shelves in the iBooks library, and the green felt background in the Game Center app are the examples that stand out the most. Forstall used skeuomorphism to help users understand how to use apps on the new platform. Users needed to be “trained” to touch the right tap targets or to feel more familiar with an app on sight.

Skeuomorphism worked quite well in iOS for many years. However, when Jony Ive took over Apple’s software design, he began phasing out skeuomorphism in iOS 7. With the advent of flat design, people didn’t want fake leather and felt any longer. They wanted vibrant colors and integrated designs. As Apple (and others) felt that users had been “trained” well enough, the decision was made to overhaul the interface. However, skeuomorphism is poised to make a huge comeback.

Virtual Fake Reality

The place where skeuomorphism is about to become huge again is in the world of virtual reality (VR) and augmented reality (AR). VR apps aren’t just limited to games. As companies start experimenting with AR and VR, we’re starting to see things emerge that are changing the way we think about these technologies, whether it’s something as simple as using the camera on your phone combined with AR to measure the length of a rug, or using VR combined with a machinery diagram to teach someone how to replace a broken part without sending an expensive technician.

Look again at the video above of the AR measuring app. It’s very simple, but it also displays a use of skeuomorphism. Instead of making the virtual measuring tape a simple arrow with a counter to keep track of the distance, it’s a yellow box with numbers printed every inch, just like the physical tape measure it’s displayed beside. It’s a training method used to help people become acclimated to a new idea by referencing a familiar object. Even though a counter with tenths of an inch would be more accurate, the developer chose to help the user with the visualization.
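To make that trade-off concrete, here’s a rough Python sketch of the two display choices. The measurement value is made up for illustration; a real AR app would pull it from whatever tracking framework it uses.

```python
# Two ways to label the same measured distance: a plain decimal counter
# versus a skeuomorphic, tape-measure-style readout in feet and inches.
# The measurement value below is made up for illustration.

def decimal_readout(distance_inches: float) -> str:
    """The 'simple counter' option: more precise, less familiar."""
    return f"{distance_inches:.1f} in"

def tape_measure_readout(distance_inches: float) -> str:
    """The skeuomorphic option: whole feet and inches, like a real tape."""
    feet, inches = divmod(round(distance_inches), 12)
    return f"{feet}' {inches}\""

measured = 52.3  # pretend the AR session reported 52.3 inches
print(decimal_readout(measured))       # 52.3 in
print(tape_measure_readout(measured))  # 4' 4"
```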

Let’s move this idea along further. Think of a more robust VR app that uses a combination of eye tracking and hand motions to give access to various apps. We can easily point to what we want with hand tracking or some kind of pointing device in our dominant hand. But what if we want to type? The system can be programmed to respond when the user places their hands palms down, 4 inches apart. That’s easy to code. But how do you tell the user that they’re ready to type? The best way is to paint a virtual keyboard on the screen, complete with the user’s preferred key layout and language. It signals to the user that they can type in that area.
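That trigger really is easy to code. Here’s a quick Python sketch of the idea; the HandPose structure and the exact thresholds are stand-ins I’ve assumed rather than any real hand-tracking SDK’s API.

```python
# A rough sketch of the "palms down, about 4 inches apart" trigger described
# above. The HandPose structure and the numbers are hypothetical stand-ins
# for whatever a real hand-tracking SDK would report.

from dataclasses import dataclass

@dataclass
class HandPose:
    palm_position: tuple   # (x, y, z) in inches, relative to the headset
    palm_facing_down: bool

def should_show_keyboard(left: HandPose, right: HandPose,
                         target_gap: float = 4.0, tolerance: float = 1.0) -> bool:
    """True when both palms face down and sit roughly 4 inches apart."""
    if not (left.palm_facing_down and right.palm_facing_down):
        return False
    gap = abs(left.palm_position[0] - right.palm_position[0])  # distance along x
    return abs(gap - target_gap) <= tolerance

# Hands flat on an imaginary desk, about 4 inches apart -> show the keyboard
left = HandPose(palm_position=(-2.1, 0.0, 12.0), palm_facing_down=True)
right = HandPose(palm_position=(2.1, 0.0, 12.0), palm_facing_down=True)
if should_show_keyboard(left, right):
    print("Render the virtual keyboard between the user's hands")
```

The detection is the easy part. The skeuomorphic keyboard that appears under the user’s hands is what tells them, without a manual, that this is where typing happens.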

How about adjusting something like a volume level? Perhaps the app is coded to raise and lower the volume when the hand is held with fingers extended and the wrist is rotated left or right. How would the system indicate this to the user? With a circular knob that can be grasped and manipulated. The ideas behind these applications for VR training are limited only by the designers’ imaginations.
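A minimal sketch of that knob gesture, again with made-up pose fields and a made-up sensitivity, might look something like this:

```python
# A sketch of the volume-knob gesture: with fingers extended, wrist roll
# maps to a volume change, and the UI rotates a virtual knob to match.
# The quarter-turn-equals-25-points sensitivity is an assumption, not a spec.

def adjust_volume(current_volume: float, wrist_roll_degrees: float,
                  fingers_extended: bool) -> float:
    """Rotate right to raise the volume, left to lower it; clamp to 0-100."""
    if not fingers_extended:
        return current_volume  # gesture not active, leave the volume alone
    delta = (wrist_roll_degrees / 90.0) * 25.0  # a quarter turn = 25 points
    return max(0.0, min(100.0, current_volume + delta))

volume = 40.0
volume = adjust_volume(volume, wrist_roll_degrees=45.0, fingers_extended=True)
print(volume)  # 52.5 -- and the on-screen knob turns the same 45 degrees
```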


Tom’s Take

VR is going to lean heavily on skeuomorphism for many years to come. It’s one thing to make a 2D user interface resemble an amplifier or a game table. But when it comes to recreating constructs in 3D space, you’re going to need to train users heavily to help them understand the concepts in use. Creating lookalike objects to allow users to interact in familiar ways will go a long way to helping them understand how VR works as well as helping the programmers behind the system build a user experience that eases VR adoption. Perhaps my kids or my grandkids will have VR and AR systems that are less skeuomorphic, but until then I’m more than happy to fiddle with virtual knobs if it means that VR adoption will grow more quickly.

The Google Glass Ceiling

I finally got around to watching the Charlie Rose interview with Sebastian Thrun.  Thrun is behind a lot of very promising technology, not the least of which is the Google Glass project.  Like many, I kind of put this out of my mind at the outset, dismissing it as a horrible fashion trend at best and a terribly complicated idea at worst.  Having seen nothing beyond the concept videos that are currently getting lots of airplay, I was really tepid about the whole concept and wanted to see it baked a little more before I bought into the idea of carrying my smartphone around on my head instead of my hip.  Then I read another interesting piece about the future of Google and Facebook.  In and of itself, the blog post has some interesting prognostications about the directions that Facebook and Google are headed.  But one particular quote caught my eye in both the interview and the article.  Thrun says that the most compelling use case for Google Glass they can think of right now is taking pictures and sharing them with people on Google+.  Charlie Rose even asked about other types of applications, like augmented reality.  Thrun dismissed these in favor of talking about how easy it was to take pictures by blinking and nodding your head.  Okay, I’m going to have to take a moment here…

Sebastian Thrun, have you lost your mind?!?

Seriously.  You have a project sitting on your ears that has the opportunity to change the way that people like me view the world, and the best use case you can think of today is taking pictures of ice cream and posting them to a dying social network?  Does. Not. Compute.  Honestly, I can’t even begin to describe how utterly dumbstruck I am by this.  After spending a little more time looking into Google Glass, I’m giddy with anticipation about what I could do with this kind of idea.  However, it appears that the current guardians of the technology see fit to shoehorn this paradigm-shifting concept into a camera case.

When I think of augmented reality applications, I think of the astronomy apps I see on the iPad that let me pick out constellations with my kids.  I can see the ones in the Southern Hemisphere just by pointing my Fruity Tablet at the ground.  Think of programs like Word Lens that allow me to instantly translate signs in a foreign language into something I can understand.  That’s the technology we have today that you can buy from the App Store.  Seriously.  No funky looking safety glasses required.  Just imagine that technology in a form factor where it’s always available without the need to take out your phone or tablet.  That’s what we can do at this very minute.  That doesn’t take much imagination at all.  Google Glass could be the springboard that launches so much more.

Imagine having an instant portal to a place like Wikipedia where all I have to do is look at an object and I can instantly find out everything I need to know about it.  No typing or dictating.  All I need to do is glance at the TARDIS USB hub on my desk and I am instantly linked to the Wikipedia TARDIS page.  Take the Word Lens idea one step further.  Now, instead of only reading signs, let the microphone on Google Glass pick up the foreign language being spoken and provide real-time translation in subtitles on the Glass UI.  Instant understanding, with the possibility of translating back into the speaker’s language and displaying phrases to respond with.  How about the ability to display video on the UI for things like step-by-step instructions for disassembling objects or repairing things?  I’d even love to have a Twitter feed displayed just outside my field of vision that I can scroll through with my eye movements.  That way, I can keep up with the things that are important to me without needing to lift a finger.  The possibilities are endless for something like this.  If only you could see past the ability to post pointless pictures to your Picasa account.

There are downsides to Google Glass too.  People are having a hard time interacting as it is today with the lure of instant information at their fingertips.  Imagine how bad it will be when they don’t have to make the effort of pulling out their phone.  I can see lots of issues with people walking into doors or parked cars because they were too busy paying attention to their Glass information and not enough attention to where they were walking.  Google’s web search page has made finding information a fairly trivial issue even today.  Imagine how much lazier people will be when all they have to do is glance at the web search and ask “How many ounces are in a pound?”  Things will no longer need to be memorized, only found.  It’s like my teachers telling me not to be reliant on a calculator for doing math.  Now everyone has a calculator on their phone.

Tom’s Take

In 2012, the amount of amazing technology that we take for granted astonishes me to no end.  If you had told me in the 1990s that we would have a mini computer in our pockets with access to the whole of human knowledge, one that lets us communicate with friends and peers around the world instantly, I’d have scoffed at your pie-in-the-sky dreams.  Today, I don’t think twice about it.  I no longer need an alarm clock, GPS receiver, pocket camera, or calculator.  Sadly, the kind of thinking that has allowed technology like this to exist doesn’t appear to be applied to new concepts like Google Glass.  The powers that be at GoogleX can’t seem to understand the gold mine they’re sitting on.  Sure, applying the current concepts of sharing pictures might help ease the transition of new users to this UI concept.  But I would hazard that people are going to understand what to do with Google Glass well beyond taking a snapshot of their lunch sushi and sharing it with their Foodies circle.  Instead, show us the real groundbreaking stuff, like the ideas I’ve already discussed.  Go read some science fiction or watch movies like The Terminator, where the T-800s have a very similar UI to what you’re developing.  That’s where people want to see the future headed.  Not reinventing the Polaroid camera for the fifth time this year.  And if you’re having that much trouble coming up with cool ideas or ways to sell Google Glass to the nerds out there today, give me a call.  I can promise you we’ll blast through that glass ceiling you’ve created for yourself like the SpaceX Dragon lifting off for the first time.  I may not be able to code as well as other people at GoogleX, but I can promise you I’ve got the vision for your project.