The Slippery Slope of Social Sign-In


At the recent Wireless Field Day 6, we got a chance to see a presentation from AirTight Networks about their foray into Social Wifi. The idea is that businesses can offer free guest wifi to customers in exchange for a Facebook Like or a Twitter follow. AirTight has made the process very seamless by integrating a Facebook login button. Users can just click their way to free wifi.

I’m a bit guarded about this whole approach. It has nothing to do with AirTight’s implementation. In fact, several other wireless companies are racing to offer similar integration. It has everything to do with the way that data is freely exchanged in today’s society. Sometimes more freely than it should be.

Don’t Forget Me

Facebook knows a lot about me. They know where I live. They know who my friends are. They know my wife and how many kids we have. While I haven’t filled out those fields, plenty of others have, indicating things like political views and even more personal information like relationship status or sexual orientation. Facebook has become a social data dump for hundreds of millions of people.

For years, I’ve said that Facebook holds the holy grail of advertising – a searchable database of everything a given demographic “likes”. Facebook knows this, which is why they are so focused on growing their advertising arm. Every change to the timeline and every misguided attempt to “fix” their profile security has a single aim: convincing businesses to pay for access to your information.

Now, with social wifi, those businesses can get access to a large amount of that data easily. When you create the API integration with Facebook, you can request a large number of discrete data points. It’s just a bunch of checkboxes. Having worked in IT, I know the siren call that leads a business owner to check every box he can, on the theory that it’s better to collect more data than less. It’s harmless, right?
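To make the checkbox problem concrete, here is a minimal sketch of what a social-wifi portal’s Facebook integration boils down to: an OAuth dialog URL carrying a list of permission scopes. The app ID and redirect URI are made up, and the scope names are merely illustrative of Graph API permissions, not a definitive list.

```python
from urllib.parse import urlencode

# Hypothetical values -- a real portal would use its own registered app.
APP_ID = "123456789"
REDIRECT_URI = "https://wifi-portal.example.com/callback"

# Each "checkbox" in the integration becomes one scope in this list.
scopes = [
    "email",
    "user_likes",          # everything the user has ever Liked
    "user_birthday",
    "user_relationships",  # relationship status
    "user_location",       # current city
]

def build_login_url(app_id, redirect_uri, scopes):
    """Assemble the OAuth dialog URL a captive portal sends users to."""
    params = {
        "client_id": app_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": ",".join(scopes),
    }
    return "https://www.facebook.com/dialog/oauth?" + urlencode(params)

print(build_login_url(APP_ID, REDIRECT_URI, scopes))
```

Adding one more data point is literally one more string in the list, which is exactly why over-collection is so tempting.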

Give It Away Now

People don’t safeguard their social media permissions and data like they should. If you’ve ever gotten DM spam from a follower or watched a Facebook wall swamped with “on behalf of” postings you know that people are willing to sign over the rights to their accounts for a 10% discount coupon or a silly analytics game. And that’s after the warning popup telling the user what permissions they are signing away. What if the data collection is more surreptitious?

The country came unglued when it was revealed that a government agency was collecting metadata and other discrete information about people that used online services. The uproar led to hearings and debate about how far-reaching that program was. Yet many of those outraged people don’t think twice about letting a coffee shop have access to a wealth of data that would make the NSA salivate.

Providers are quick to say that there are ways to limit how much data is collected. It’s trivial to disable the ability to see how many children a user has. But what if that’s the data the business wants? Who is to say that Target or Walmart won’t collect that information for an innocent purpose today only to use it to target advertisements at users later? That’s exactly the kind of thing people don’t think about.

Big data and analytics integrations allow it to happen with ease today. The abundance of storage means we can collect everything and keep it forever without worrying about when to throw things away. Ubiquitous wireless connectivity means we are never truly disconnected from the world. Services we rely on for appointments or directions collect data they shouldn’t because it’s too difficult to decide how to dispose of it. It may sound a bit paranoid, but you would be shocked to see what people are willing to trade away without realizing it.


Tom’s Take

Given the choice between paying a few dollars for wifi access or “liking” a company’s page on Facebook, I’ll gladly fork over the cash. I’d rather give up something of middling value (money) instead of giving up something more important to me (my identity). The key for vendors investigating social wifi is simple: transparency. Don’t just tell me that you can restrict the data that a business can collect. Show me exactly what data they are collecting. Don’t rely on the generalized permission prompts that Facebook and Twitter provide. If businesses really want to know how I voted in the last election then the wifi provider has a social responsibility to tell me that before I sign up. If shady businesses are forced to admit they are overstepping their data collection bounds then they might just change their tune. Let’s make technology work to protect our privacy for once.

Linux Lost The Battle But Won The War

I can still remember my first experience with Linux.  I was an intern at IBM in 2001 and downloaded the IBM Linux Client for e-Business onto a 3.5″ floppy and set about installing it to a test machine in my cubicle.  It was based on Red Hat 6.1.  I had lots of fun recompiling kernels, testing broken applications (thanks Lotus Notes), and trying to get basic hardware working (thanks deCSS).  I couldn’t help but think at the time that there was great potential in the software.

I’ve played with Linux on and off for the last twelve years.  SuSE, Novell, Ubuntu, Gentoo, Slackware, and countless other distros too obscure to rank on Google.  Each of them met needs the others didn’t.  Each tried to unseat Microsoft Windows as the predominant desktop OS.  Despite a range of options and configurability, they never quite hit the mark.  I think every year since 2005 has been the “Year of Desktop Linux”.  Yet year after year I see more Windows laptops out there and very few being offered with Linux installed from the factory.  It seems as though Linux might not ever reach the point of taking over the desktop.  Then I saw a chart that forced me to look at the battle in a new perspective:


Consider that Android is based on Linux kernel version 3.4 with some Google modifications.  That means it runs Linux under the hood, even if the interface doesn’t look anything like KDE or GNOME.  And it’s running on millions of devices out there.  Phones and tablets in the hands of consumers worldwide.  Linux doesn’t need to win the desktop battle any more.  It’s already ahead in the war for computing dominance.

It happened not because Linux was a clearly superior alternative to Windows-based computing.  It didn’t happen because users finally got fed up with horrible “every other version” nonsense from Redmond.  It happened because Linux offered something Windows has never been able to give developers – flexibility.

I’ve said more than once that the inherent flexibility of Linux could be considered a detriment to desktop dominance.  If you don’t like your window manager you can trade it out.  Swap GNOME for Xfce or KDE if you prefer something different.  You can trade filesystems if you want.  You can pull out pieces of just about everything whenever you desire, even the kernel.  Without the mantra of forcing the user to accept what’s offered, people not only swap around at the drop of a hat but are also free to spin their own distro whenever they want.  As of this writing, Ubuntu has 72 distinct projects based on the core distro.  Is it any wonder people have a hard time figuring out what to install?

Android, on the other hand, has minimal flexibility when it comes to the OS.  Google lets the carriers put their own UI customizations in place, and the hacker community has spun some very interesting builds of their own.  But the rank and file mobile device user isn’t going to go out and hack their way to OS nirvana.  They take what’s offered and use it in their daily computing lives.  Android’s development flexibility means it can be installed on a variety of hardware, from low end mobile phones to high end tablets.  Microsoft has much more stringent rules for hardware running their mobile OS.  Android’s licensing model is also a bit more friendly (it’s hard to beat free).

If the market is really driving toward a model of mobile devices replacing larger desktop computing, then Android may have given Linux the lead that it needs in the war for computing dominance.  Linux is already the choice for appliance computing.  Virtualization hypervisors other than Hyper-V are either Linux under the hood or owe much of their success to Linux.  Mobile devices are dominated by Linux.  Analysts were so focused on how Linux was a subpar performer when it came to workstation mindshare that they forgot to see that the other fronts in the battle were being quietly lost by Microsoft.


Tom’s Take

I’m not going to jump right out there and say that Linux is going to take over the desktop any time soon.  It doesn’t have to.  With the backing of Google and Android, it can quietly keep right on replacing desktop machines as they die off and mobile devices start replicating that functionality.  While I spend time on my old desktop PC now, it’s mostly for game playing.  The other functions that I use computers for, like email and web surfing, are slowly being replaced by mobile devices.  Whether or not you realize it, Linux and *BSD power a large majority of the devices that people use in everyday computing.  The hearts and minds of the people were won by Linux without unseating the king of the desktop.  All that remains is to see how Microsoft chooses to act.  With a lead like the one Android has already in the mobile market, the war might be over before we know it.

The Compost-PC Era


I realized the other day that the vibration motor in my iPhone 5s had gone out.  Thankfully, my device was still covered under warranty.  I set up an appointment to have it fixed at the nearest Apple store.  I figured I’d go in and they’d just pop in a new motor.  It is a simple repair according to iFixit.  I backed my phone up one last time as a precaution.  When I arrived at the store, it took no time to determine what was wrong.

What shocked me was that the Genius tech told me, “We’re just going to replace your whole phone.  We’ll send the old one off to get repaired.”  I was taken aback.  This was a $20 part that should have taken all of five minutes to pop in.  Instead, I got my phone completely replaced after just three months.  As the new phone synced from my last iCloud backup, I started thinking about what this means for the future of devices.

Bring Your Own Disposable

Most mobile devices are a wonder of space engineering.  Cramming an extra long battery in with a vibrant color screen and enough storage to satisfy users is a challenge in any device.  Making it small enough and light enough to hold in the palm of your hand is even more difficult.  Compromises must be made.  Devices today are held together as much by glue and adhesive as they are by nuts and bolts and screws.  Gaining access to a device to repair a broken part is becoming all but impossible with each new generation.

I can still remember opening the case on my first PC to add a sound card and an Overdrive processor.  It was a bit scary but led to a career in repairing computers.  I’m downright terrified to pop open an iPhone.  The ribbon cables are so fragile that it doesn’t take much to render the phone unusable.  Even Apple knows this.  They are much more likely to have the repairs done in a separate facility rather than at the store.  Other than screen replacements, the majority of broken parts result in a new phone being given to the customer.  After all, it’s very easy to replace devices when the data is safe somewhere.

The Cloud Will Save It All

Use of cloud storage and backup is the key to the disposable device trend.  If you tell me that I’m going to lose my laptop and all the data on it I’m going to get a little concerned.  If you tell me that I’m going to lose my phone, I don’t mind as much thanks to the cloud backup I have configured.  In the above case, my data was synced back to my phone as I shopped for a new screen protector.  Just like a corporate system, data loss is the biggest concern on a device.  Cloud storage is a lot like a roaming profile.  I can sync that data back to a fresh device and keep going after a short interruption.  Gone are the wasted hours of reinstallation of operating system and software.

Why repair devices when they can easily be replaced at little cost?  Why should you pay someone to spend their time diagnosing a bad CPU or bad RAM when you can just unwrap a new mobile device, sync your profile and data, and move on with your project?  The implications for PC repair techs are legion.  As are the implications for manufacturers that create products that are easy to open and contain field replaceable parts.

Why go to all the extra effort of making a device that can be easily repaired if it’s much cheaper to just glue it together and recycle what parts you can after it breaks?  Customers have already shown their propensity to upgrade devices with every new cycle each year.  They’d rather buy everything new instead of upgrading the old to match.  That means making the device field repairable (or upgradable) is extra cost you don’t need.  Making devices that aren’t easily fixed in the field means you spend less of your budget training people how to repair them.  In fact, it’s just easier to have the customer send the device back to the manufacturing plant.


Tom’s Take

The cloud has enabled us to keep our data consistent between devices.  While it has helped blur the lines between desktop and mobile device, it has also helped blur the lines tying people to a specific device.  If I can have my phone or tablet replaced with almost no impact, I’m going to elect to have that done rather than finding replacement parts to keep the old one running just a bit longer.  It also means that, after the useful parts are pulled out of those mildly broken devices, they will end up in the same landfill that analysts say will be filled with rejected desktop PCs.

Cisco CMX – Marketing Magic? Or Big Brother?


The first roundtable presenter at Interop New York was Cisco. Their Enterprise group always brings interesting technology to the table. This time, the one that caught my eye was the Connected Mobile Experience (CMX). CMX is a wireless mobility technology that allows a company to do some advanced marketing wizardry.

CMX uses your Cisco wireless network to monitor devices coming into the air space. They don’t necessarily have to connect to your wireless network for CMX to work. They just have to be probing for a network, which virtually all devices do. CMX can then push a message to the device. This message can be a simple “thank you” for coming or something more advanced like a coupon or a notification to download a store-specific app. CMX can then store information about that device, such as whether or not it joined the network, where it went, and how long it was there. This gives the company the ability to pull some interesting statistics about their customer base. Even if those customers never hop on the wireless network.

I have to be honest here. This kind of technology gives me a bit of the creeps. I understand that user tracking is the hot new thing in retail. Stores want to know where you went, how long you stayed there, and whether or not you saw an advertisement or a featured item. They want to know your habits so as to better sell to you. The accumulation of that data over time allows for some patterns to emerge that can drive a retail operation’s decision making process.

A Thought Exercise

Think about an average person. We’ll call him Mike. Mike walks four blocks from his office to the subway station every day after work. He stops at the corner about halfway along to cross a street. On that street just happens to be a coffee shop using something like CMX. Mike has a brand new phone with wifi and Bluetooth, and he keeps both on all the time. CMX can detect when the device comes into range. It knows that Mike stays there for about two minutes but never joins the network, and that the device then moves out of the WLAN area. The data cruncher for the store wants to drive new customers to the shop. They analyze the data and find that lots of people stay in the area for a couple of minutes. They equate this to people stopping to decide whether they want a cup of coffee. So they create a CMX coupon push notification that pops up after one minute on devices that have been seen in the database over the last month. The next time Mike waits for the light in front of the coffee shop, he’ll see a coupon for $1 off a cup of coffee.
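The analysis in that scenario is simple enough to sketch. This toy Python script is not any real CMX API, and the thresholds are invented, but it shows how passive sightings of a device MAC address become a coupon-push decision:

```python
from collections import defaultdict

# Invented thresholds for illustration only.
VISIT_GAP = 300   # sightings more than 5 minutes apart start a new visit
MIN_DWELL = 60    # only count visits lasting a minute or more
MIN_VISITS = 5    # "regulars" have at least this many qualifying visits

def visits(timestamps, gap=VISIT_GAP):
    """Split a sorted list of sighting times into (start, end) visits."""
    out = []
    start = prev = timestamps[0]
    for t in timestamps[1:]:
        if t - prev > gap:
            out.append((start, prev))
            start = t
        prev = t
    out.append((start, prev))
    return out

def coupon_targets(sightings):
    """Return MACs of devices with enough long visits to get the push."""
    by_mac = defaultdict(list)
    for mac, ts in sorted(sightings, key=lambda s: s[1]):
        by_mac[mac].append(ts)
    targets = []
    for mac, times in by_mac.items():
        qualifying = [v for v in visits(times) if v[1] - v[0] >= MIN_DWELL]
        if len(qualifying) >= MIN_VISITS:
            targets.append(mac)
    return targets
```

Note that nothing here requires the device to join the network or identify its owner; the MAC address alone is enough to build the pattern.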

That kind of reach is crazy. I keep thinking back to the scenes in Minority Report where the eye scanners would detect you looking at an advertisement and then target a specific ad based on your retina scan. You may say that’s science fiction. But with products like CMX, I can build a pretty complete profile of your behavior even if I don’t have a retina scan. Correlating information provides a clear picture of who you are without any real identity information. Knowing that someone likes to spend their time in the supermarket in the snack aisles and frozen food aisles and less time in the infants section says a lot. Knowing the route a given device takes through the store can help designers place high volume items in the back and force shoppers to take longer routes past featured items.


Tom’s Take

I’m not saying that CMX is a bad product. It’s providing functionality that can be of great use to retail companies. But, just like VHS recorders and BitTorrent, good ideas can often be used to facilitate things that aren’t as noble. I suggested to the CMX developers that they could implement some kind of “opt out” message that popped up if I hadn’t joined the wireless network in a certain period of time. I look at that as a way of saying to shoppers “We know you aren’t going to join. Press the button and we’ll wipe out your device info.” It puts people at ease to know they aren’t being tracked. Even just showing them what you’re collecting is a good start. With the future of advertising and marketing focusing on instant delivery and data gathering for better targeting, I think products like CMX will be powerful additions. But, great power requires even greater responsibility.

Tech Field Day Disclaimer

Cisco was a presenter at the Tech Field Day Interop Roundtable.  They did not ask for any consideration in the writing of this review nor were they promised any.  The conclusions and analysis contained in this post are mine and mine alone.

Why An iPhone Fingerprint Scanner Makes Sense


It’s hype season again for the Cupertino Fruit and Phone Company.  We are mere days away from a press conference that should reveal the specs of a new iPhone, likely to be named the iPhone 5S.  As is customary before these events, the public is treated to all manner of Wild Mass Guessing as to what will be contained in the device.  Will it have dual flashes?  Will it have a slow-motion camera?  NFC?  802.11ac?  The list goes on and on.  One of the most spectacular rumors comes in a package the size of your thumb.

Apple quietly bought a company called AuthenTec last year.  AuthenTec made fingerprint scanners for a variety of companies, including some that included the technology in Android devices.  After the $365 million acquisition, AuthenTec disappeared into a black hole.  No one (including Apple) said much of anything about them.  Then a few weeks ago, a patent application was revealed that came from Apple and included fingerprint technology from AuthenTec.  This sent the rumor mill into overdrive.  Now all signs point to a convex sapphire home button that contains a fingerprint scanner that will allow iPhones to use biometrics for security.  A developer even managed to ferret out a link to a BiometricKitUI bundle in one of the iOS 7 beta releases (which was quickly removed in the next beta).

Giving Security The Finger

I think adding a fingerprint scanner to the hardware of an iDevice is an awesome idea.  Passcode locks are good for a certain amount of basic device security, but the usefulness of a passcode is inversely proportional to its security level.  People don’t make complex passcodes because they take far too long to type in.  If you make a complex alphanumeric code, typing it in quickly one-handed isn’t easy.  That leaves most people choosing a 4-digit code or forgoing it altogether.  That doesn’t bode well for people whose phones are lost or stolen.
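The tradeoff is easy to quantify.  A quick back-of-the-envelope calculation shows what users give up by defaulting to the easy option (the passcode lengths here are just examples):

```python
# Keyspace comparison: 4-digit PIN vs. 8-character mixed-case
# alphanumeric passcode (26 + 26 + 10 = 62 possible characters).
def keyspace(alphabet_size, length):
    """Number of possible codes of the given length."""
    return alphabet_size ** length

pin = keyspace(10, 4)     # the code most people actually use
alnum = keyspace(62, 8)   # the code almost nobody types one-handed

print(f"4-digit PIN:  {pin:,} combinations")
print(f"8-char code:  {alnum:,} combinations")
print(f"ratio:        {alnum // pin:,}x larger keyspace")
```

A fingerprint sidesteps the whole dilemma: the "code" is unique and takes a fraction of a second to present.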

Apple has already publicly revealed that it will include enhanced security in iOS 7 in the form of an activation lock that prevents a thief from erasing the phone and reactivating it for themselves.  This makes sense in that Apple wants to discourage thieves.  But that step only makes sense if you consider that Apple wants to beef up the device security as well.  Biometric fingerprint scanners are a quick method of inputting a unique unlock code quickly.  Enabling this technology on a new phone should show a sharp increase in the number of users that have enabled an unlock code (or finger, in this case).

Not all people think fingerprint scanners are a good idea.  A link from Angelbeat says that Apple should forget about the finger and instead use a combination of picture and voice to unlock the phone.  The writer says that this would provide more security because it requires your face as well as your voice.  The writer also says that it’s more convenient than taking a glove off to use a finger in cold weather.  I happen to disagree on a couple of points.

A Face For Radio

Facial recognition unlock for phones isn’t new.  It’s been in Android since the release of Ice Cream Sandwich.  It’s also very easy to defeat.  This article from last year talks about how flaky the system is unless you provide it several pictures to reference from many different angles.  This video shows how snapping a picture on a different phone can easily fool the facial recognition.  And that’s only the first video of several that I found on a cursory search for “Android Facial Recognition”.  I could see this working against the user if the phone is stolen by someone that knows their target.  Especially if there is a large repository of face pictures online somewhere.  Perhaps in a “book” of “faces”.

Another issue I have is Siri.  As far as I know, Siri can’t be trained to recognize a user’s voice.  In fact, I don’t believe Siri can distinguish one user from another at all.  To prove my point, go pick up a friend’s phone and ask Siri to find something.  Odds are good Siri will comply even though you aren’t the phone’s owner.  In order to defeat the old, unreliable voice command systems that have been around forever, Apple made Siri able to recognize a wide variety of voices and accents.  In order to cover that wide use case, Apple had to sacrifice resolution of a specific voice.  Apple would have to build in a completely new set of Siri APIs that ask a user to speak a specific set of phrases in order to build a custom unlock code.  Based on my experience with those kinds of old systems, if you didn’t utter the phrase exactly the way it was originally recorded it would fail spectacularly.  What happens if you have a cold?  Or there is background noise?  Not exactly easier than putting your thumb on a sensor.

Don’t think that means that fingerprints are infallible.  The Mythbusters managed to defeat an unbeatable fingerprint scanner in one episode.  Of course, they had access to things like ballistics gel, which isn’t something you can pick up at the corner store.  Biometrics are only as good as the sensors that power them.  They also serve as a deterrent, not a complete barrier.  Lifting someone’s fingerprints isn’t easy and neither is scanning them into a computer to produce a sharp enough image to fool the average scanner.  The idea is that a stolen phone with a biometric lock will simply be discarded and a different, more vulnerable phone would be exploited instead.


Tom’s Take

I hope that Apple includes a fingerprint scanner in the new iPhone.  I hope it has enough accuracy and resolution to make biometric access easy and simple.  That kind of implementation across so many devices will drive the access control industry to take a new look at biometrics and begin integrating them into more products.  Hopefully that will spur things like home door locks, vehicle locks, and other personal devices to begin using these same kinds of sensors to increase security.  Fingerprints aren’t perfect by any stretch, but they are the best option of the current generation of technology.  One day we may reach the stage of retinal scanners or brainwave pattern matches for security locks.  For now, a fingerprint scanner on my phone will get a “thumbs up” from me.

Why I Won’t Be Getting Google Glass


You may recall a few months back when I wrote an article talking about Google Glass and how I thought that the first generation of this wearable computing device was aimed way too low in terms of target applications. When Google started a grass roots campaign to hand out Glass units to social media influencers, I retweeted my blog post with the hashtag #IfIHadGlass with the idea that someone at Google might see it and realize they needed to set their sights higher. Funny enough, someone at Google did see the tweet and told me that I was in the running to be offered a development unit of Glass. All for driving a bit of traffic to my blog.

About a month ago, I got the magic DM from Google Glass saying that I could go online and request my unit along with a snazzy carrying case and a sunglass lens if I wanted. I only had to pony up $1500US for the privilege. Oh, and I could only pick it up at a secured Google facility. I don’t even know where the closest one of those to Oklahoma might be. After weighing the whole thing carefully, I made my decision.

I won’t be participating in generation one of Google Glass.

I had plenty of reasons. I’m not averse to participating in development trials. I use beta software all the time. I signed up for the last wave of Google CR-48 Chromebooks. In fact, I still use that woefully underpowered CR-48 to this day. But Google Glass represents something entirely different from those beta opportunities.

From Entry to Profit

Google isn’t creating a barrier to entry through their usual methods of restricting supply or making the program invite-only. Instead, they are trying to restrict Glass users to those with a spare $1500 to drop on a late alpha/early beta piece of hardware. I also think they are trying to recoup the development costs of the project via the early adopters. Google has gone from being an awesome development shop to a company acutely aware of the bottom line. Google has laid down some very stringent rules to determine what can be shown on Glass. Advertising is verboten. Anyone want to bet that Google finds a way to work AdWords in somewhere? If you are relying on your tried-and-true user base of developers to recover your costs before you even release the product to the masses, you’ve missed big time.

Eye of the Beholder

One of the other things that turned me off about the first generation of Glass is that the technology isn’t quite where I thought it would be. After examining what Glass is capable of doing from a projection standpoint, many of my initial conceptions about the unit were way off. I suppose that has a lot to do with what I thought Google was really working on. Instead of finding a way to track eye movement inside a specific area and deliver results based on where the user’s eye is focused, Google chose to simply project a virtual screen slightly off center from the user’s field of vision. That’s a great win for version one. But it doesn’t really accomplish what I thought Google Glass should do. The idea of a wearable eyeglass computer isn’t that useful to me if the field of vision is limited to a camera glued to the side of a pair of eyeglass frames. Without the ability to track the eye movements of a user, it’s simply not possible to filter the huge amount of information being taken in. If Google could implement a function to see what the user is focusing on, I’m sure that some companies would pay *huge* development dollars to track that information or run some kind of augmented reality advertisement for an alternative to that brand. Just go and watch Minority Report if you want to know what I’m thinking about.

Mind the Leash

According to my friend Blake Krone (@BlakeKrone), who just posted his first Google Glass update, the unit is great for taking pictures and video without the need to dig out a camera or take your eyes off the subject for more than the half second it takes to activate the Glass camera with a voice command.  Once you’ve gotten those shiny new pictures ready to upload to Google+, how are you going to do it?  There’s the rub in the first generation Glass units.  You have to tether Glass to some kind of mobile hotspot in order to be able to upload photos outside of a WiFi hotspot.  I guess trying to cram a cellular radio into the little plastic frame was more than the engineers could muster in the initial prototype.  Many will stop me here and interject that WiFi hotspot access is fairly common now.  All you have to do is grab a cup of coffee from Starbucks or a milkshake from McDonalds and let your photos upload to GoogleSpace.  How does that work from a mountain top?  What if I had a video that I wanted to post right away from the middle of the ocean?  How exactly do you livestream video while skydiving over the Moscone Center during Google I/O?  Here’s a hint:  You plant engineers on the roof with parabolic dishes to broadcast WiFi straight up in the air.  Not as magical when you strip all the layers away.  For me, the need to upgrade my data plan to include tethering just so I could upload those pics and videos outside my house was another non-starter.  Maybe the second generation of Glass will have figured out how to make a cellular radio small enough to fit inside a pair of glasses.

Tom’s Take

Google Glass has made some people deliriously happy. They have a computer strapped to their face and they are hacking away to create applications that are going to change the way we interact with software and systems in general. Those people are a lot smarter than me. I’m not a developer. I’m not a visionary. I just call things like I see them. To me, Google Glass was shoved out the door a generation too early to be of real use. It was created to show that Google is still on the cutting edge of hardware development even though no one else was developing wearable computing. On the other hand, Google did paint a huge target on their face. When the genie came out of the bottle, other companies like Apple and Pebble started developing their own take on wearable computing. Sure, it’s not as striking as a pair of sci-fi goggles. But evolutionary steps here lead to the slimming down of technology to the point where those iPhones and Samsung Galaxy S 4 Whatevers can fit comfortably into the frame of any designer eyeglasses. And that’s when the real money is going to be made. Not by gouging developers or requiring your users to be chained to a smartphone.

If you want to check out what Glass looks like from the perspective of someone that isn’t going to wear them in the shower, check out Blake’s Google Glass blog at http://FromEyeLevel.com

iOS 7 and Labels


Apple is prepping the release of iOS 7 to their users sometime in the next couple of months. The developers are already testing it out to find bugs and polish their apps in anticipation of the user base adopting Jonathan Ive’s vision for a mobile operating system. In many ways, it’s still the same core software we’ve been using for many years now with a few radical changes to the look and feel. The icons and lack of skeuomorphism are getting the most press. But I found something that I think has the ability to be even bigger than that.

The user interface (UI) elements in the previous iOS builds all look very similar. This is no doubt due to the influence of Scott Forstall, the now-departed manager of iOS. The wealth of glossy buttons and switches looked gorgeous back in 2007 when the iPhone was first released. But all UIs evolve over time. Some evolve faster than others. Apple hit a roadblock because of those very same buttons. They were all baked into the core UI. Changing them was like trying to correct a misspelled word in a stone carving. It takes months of planning to make even the smallest of changes. And those changes have to be looked at on a massive scale to avoid causing issues in the rest of the OS.

iOS 7 is different to me.  Look at this pic of an incoming call and compare it with the same screen in iOS 6:

iOS 7

iOS 6

The iOS 6 picture has buttons. The iOS 7 picture is different. Instead of having chiseled buttons, it looks like the Answer and Decline buttons have been stuck to the screen like labels. That's not the only place in the UI that has a label-like appearance. Sending a new iMessage or text to someone in the Messages app looks like applying a stamp to a piece of paper. Taking all that into consideration, I think I finally understand what Ive is trying to do with this UI shift in iOS 7.

Labels are easy to reapply. You just peel them off and stick them back on. Unlike the chiseled-in-stone button UI, a label can quickly and easily be reconfigured or replaced if it starts to look dated. Apple made mention of this in Ive's iOS 7 video, where he talked about creating "hierarchical layers (to) establish order." Ive commented that this approach gives depth to the OS. I think he's holding back on us.

Jonathan Ive created UI layers in the OS so he can change them out more quickly. Think about it. If you only have to change a label in an app or change the way labels are presented on screen, it allows you to make more rapid changes to the way the OS looks. If the layers are consistent and draw from the same pool of resources, it allows you to skin the OS however you want with minimal effort. Ive wasn't just trying to scrub away the accumulation of Scott Forstall's ideas about the UI. He wanted to change them and make the UI so flexible that the look can be updated in the blink of an eye. That gives him the ability to change elements at will without the need to overhaul the system. That kind of rapid configurability gives Apple the chance to keep things looking fresh and accommodate changing tastes.
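The layered-label idea can be illustrated with a quick sketch. This is hypothetical Python, not anything like Apple's actual UIKit internals: the point is simply that when elements store no appearance of their own and ask a shared theme instead, swapping the theme re-skins everything at once.

```python
# Hypothetical sketch: UI elements that hold content but no appearance.
# All styling lives in a shared theme, so changing the theme restyles
# every element without touching the elements themselves.

class Theme:
    def __init__(self, name, styles):
        self.name = name
        self.styles = styles  # maps element kind -> style dict

class Label:
    def __init__(self, kind, text):
        self.kind = kind
        self.text = text

    def render(self, theme):
        # The label stores nothing about its look; it asks the theme.
        style = theme.styles[self.kind]
        return f"[{style['color']}/{style['font']}] {self.text}"

# Two made-up skins standing in for the old and new look.
ios6 = Theme("skeuomorphic", {"button": {"color": "glossy-green", "font": "Helvetica"}})
ios7 = Theme("flat", {"button": {"color": "flat-white", "font": "Helvetica Neue"}})

answer = Label("button", "Answer")
old_look = answer.render(ios6)  # same label object...
new_look = answer.render(ios7)  # ...instantly re-skinned
```

Nothing about the `answer` label changed between the two renders; only the theme did. That is the flexibility being described: the look updates in one place instead of being carved into every screen.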


Tom’s Take

I can almost hear people now saying that making future iOS releases skinnable is just another rip-off of Android's feature set. In some ways, you are very right. However, consider that Android was designed with modularity in mind from the beginning. Google wanted to give manufacturers and carriers the ability to install their own UI. Think about how newsworthy the announcement of a TouchWiz-free Galaxy S4 was. Apple has always considered the UI inviolable in all their products. You don't have much freedom to change things in iOS or in OS X. Jonathan Ive is trying to set things up so that changes can be made more frequently in iOS. Modders will likely find ways to insert their own UI elements and take these ideas in an ever more radical direction. And all because Apple wanted to be able to peel off their UI pieces as easily as a label.

The Microsoft Office Tablet

I've really tried to stay out of the Tablet Wars. I have a first-generation iPad that I barely use any more. My kids have co-opted it from me for watching on-demand TV shows and playing Angry Birds. Since I spend most of my time typing blog posts or doing research, I use my laptop more than anything else. When the Surface RT and Surface Pro escaped from the wilds of Redmond, I waited and watched. I wanted to see what people were going to say about these new Microsoft tablets. It's been about 4 months since the release of the Surface Pro and similar machines from vendors like Dell and Asus. I've been slowly asking questions and collecting information about these devices. And I think I've finally come to a realization.

The primary reason people want to buy a Surface tablet is to run Microsoft Office.

Here's the setup. I asked everyone who expressed an interest in the Pro version of the Surface (or the Latitude 10 from Dell) a question: what is the most compelling feature of the Surface Pro for you? The responses I got back were overwhelming in their similarity.

1.  I want to use Microsoft Office on my tablet.

2.  I want to run full Windows apps on my tablet.

I never heard anything about portability, power, user interface, or application support (beyond full Windows apps). I specifically excluded the RT model of the Surface from my questions because of the ARM processor and its reliance on software from the Windows App Store. The RT functions more like Apple/Android tablets in that regard.

This made me curious. The primary goal of Surface users is to be able to run Office? These people have basically told me that the only reason they want to buy a tablet is to use an office suite. One that isn't currently available anywhere else for mobile devices. One that has been rumored to be released on other platforms down the road. While it may be a logical fallacy, it appears that Microsoft risks invalidating a whole hardware platform because of a single application suite. If they end up releasing Office for iOS/Android, people will flee from the Surface to the other platforms, according to the info above. Ergo, the only purpose of the Surface appears to be to run one application. Which is why I've started calling it the Microsoft Office Tablet. Then I started wondering about the second most popular answer in my poll.

Making Your Flow Work

As much as I've tried not to use the word "workflow" before, I find that it fits in this particular conversation. Your workflow is more than just the applications you utilize. It's how you use them. My workflow looks totally different from everyone else's, even though I use similar applications. I use email and word processing for my own purposes. I write a lot, so a keyboard of some kind is important to my workflow. I don't do a lot of graphic design, so a pen-input tablet isn't really a big deal to me. The list goes on and on, but you can see that my needs are my own and not those of someone else. Workflows may be similar, but not identical. That's where the dichotomy comes into play for me.

When people start looking at using a different device for their workflow, they have to make adjustments of some kind, especially if that device is radically different from one they've been using before. Your phone is different from a tablet, and a tablet is different from a laptop. Even a laptop is different from a desktop, but those two are more similar than most. When the time comes to adjust your workflow to a new device, there are generally two categories of people:

1.  People who adjust their workflow to the new device.

2.  People who expect the device to conform to their existing workflow.

For users of the Apple and Android tablets, option 1 is pretty much the only option you've got. That's because the workflow you've created likely can't be easily replicated between devices. Desktop apps don't run on these tablets. When you pick up an iPad or a Galaxy Tab, you have to spend time finding apps to replicate what you've been doing previously. Note-taking apps, web browsing apps, and even more specialized apps like banking or ebook readers are very commonly installed. Your workflow becomes constrained to the device you're using. Things like on-screen keyboards or a lack of USB ports become bullet points in workflow compatibility. On occasion, you find that a new workflow is possible with the device. The prime example I can think of is using the camera on a phone in conjunction with a banking app to deposit checks without needing to take them into the bank. That workflow would have been impossible just a couple of years ago. With increases in camera resolution, high-speed data transfer, and the secure transmission of sensitive data, we can now picture this new workflow and adopt it easily, because a device made it possible.

The other category is where the majority of Surface Pro users come in.  These are the people that think their workflow must work on any device they use.  Rather than modify what they’re doing, they want the perfect device to do their stuff.  These are the people that use a tablet for about a week and then move on to something different because “it just didn’t feel right.”  When they finally do find that magical device that does everything they want, they tend to abandon all other devices and use it exclusively.  That is, until they have a new workflow or a substantial modification to their existing workflow.  Then they go on the hunt for a new device that’s perfect for this workflow.

So long as your workflow is the immutable object in the equation, you are never going to be happy with any device you pick. My workflows change depending on my device. I browse Twitter and read email from my phone but rarely read books. I read books and do light web surfing from a tablet but almost never create content. I spend a lot of time creating content on my laptop but hate reading on it. I've adjusted my workflows to suit the devices I'm using.

If the single workflow you need to replicate on your tablet revolves around content creation, I think it's time to examine exactly what you're using a tablet for. Is it portability beyond what a laptop can offer? Do you prefer to hunt and peck around a touch screen instead of a keyboard? Are you looking for better battery life or some other function of the difference in hardware? Or do you just want to look cool with a tablet in the "post PC world?" That's the primary reason I don't use a tablet that much any more. My workflows conform to my phone and my laptop. I don't find use in a tablet. Some people love them. Some people swear by them. Just make sure you aren't dropping $800-$1000 on a one-application device.

At the end of the day, work needs to get done. People are going to use whatever device they want to use to get their stuff done. Some want to do stuff and move on. Others want to look awesome doing stuff or want to do their stuff everywhere no matter what. Use what works best for you. Just don't be surprised if complaining that "this device doesn't run my favorite data entry program" gets a sideways glance from IT.

Disclaimer:  I own a first generation iPad.  I’ve tested a Dell Latitude 10.  I currently use an iPhone 4S.  I also use a MacBook Air.  I’ve used a Lenovo Thinkpad in the past as my primary workstation.  I’m not a hater of Microsoft or a lover of Apple.  I’ve found a setup that lets me get my job done.

BYOD vs MDM – Who Pays The Bill?

Generic Mobile Devices

There's a lot of talk right now about the trend of people bringing in their own laptops, tablets, and other devices to access data and do their jobs. While most of you (including me) call this Bring Your Own Device (BYoD), I've been hearing a lot of talk recently about a different aspect of controlling mobile devices. Many of my customers have been asking me about Mobile Device Management (MDM). MDM is getting mixed into a lot of conversations about controlling the BYoD explosion.

Mobile Device Management (MDM) refers to the process of controlling the capabilities of a device via a centralized control point, whether it be in the cloud or on premises. MDM can restrict functions of a device, such as the camera or the ability to install applications. It can also restrict which data can be downloaded and saved onto a device. MDM also allows device managers to remotely lock the device in the event that it is lost, or even remotely wipe the device should recovery be impossible. Vendors are now pushing MDM as a big component of their mobility offerings. Every week, it seems like some new vendor is pushing their MDM offering, whether it be a managed service software company, a wireless access point vendor, or a dedicated MDM provider. MDM is being pushed as the solution to all your mobility pain points. There's one issue though.

MDM is a very intrusive solution for mobile devices. A good analogy might be the rules you have for your kids at home. There are many things they are and aren't allowed to do. If they break the rules, there are consequences and possible punishments. Your kids have to follow your rules if they live under your roof. Such is the way for MDM as well. The MDM vendors I've spoken to in the last three months employ varying degrees of intrusion on the device. One Windows Mobile provider started their deployment process with a total device wipe before loading an approved image onto the mobile device. Others require you to trust specific certificates or enroll in special services. If you run Apple's iOS and designate the device as a managed device in iOS 6 to get access to certain new features, like the global proxy setting, you'll end up with a wiped device before you can manage it. Services like MobileIron can even give administrators the ability to read any information on the device, regardless of whether it's personal or not.
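To make the "checkboxes" concrete, here is a minimal sketch of the kind of restrictions payload an MDM server might push down to a managed iOS device. The key names follow Apple's documented restrictions payload (`com.apple.applicationaccess`), but treat this handful of keys as illustrative rather than a complete or authoritative profile.

```python
import plistlib

# Illustrative restrictions payload, in the plist format Apple's
# configuration profiles use. Each boolean is one of those "checkboxes"
# an administrator can flip for every managed device at once.
restrictions = {
    "PayloadType": "com.apple.applicationaccess",
    "allowCamera": False,          # company policy: camera disabled
    "allowInAppPurchases": False,  # no in-app purchases on this device
    "allowAppInstallation": True,  # installing apps is still permitted
}

# Serialize the payload the way a profile would carry it, then read it
# back the way a device would interpret it.
profile = plistlib.dumps(restrictions)
managed = plistlib.loads(profile)
```

A handful of innocuous-looking booleans like these is all it takes to turn off the camera on someone's personal phone, which is exactly why the ownership question below matters.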

That level of integration into a device is just too much for many people bringing their personal devices into a work environment. They just want to be able to check their email from their phone. They don't want a sneaky admin reading their text messages, or their entire phone wiped via a misconfigured policy setting or a mistaken device-loss report. Could you imagine losing all your pictures or your bank account info because Exchange had a hiccup? And what about MDM policies pushed down to disable your camera due to company policy, or to disable your ability to make in-app purchases from your app repository of choice? How about a global proxy server that restricts you from browsing questionable material from the comfort of your own home? If you're like me, any of those choices makes you cringe a little.

That's why BYoD policies are important. They function more like having your neighbor's children over at your house. While you may have rules for your own children, the neighbor's kids are just visitors. You can't really punish them like you'd punish your own kids. Instead, you make what rules you can to prevent them from doing things they aren't supposed to do. In many cases, you can send the neighbor's kids to a room with your own kids to limit the damage they can cause. This is very much in line with the way we treat devices under BYoD policies. We try to authenticate users to ensure they are supposed to be accessing data on our network. We place data behind access lists that try to determine location or device type. We use the network as the tool to limit access to data as opposed to intruding on the device.

Both BYoD and MDM are needed in a corporate environment to some degree. The key to figuring out which needs to be applied where can be boiled down to one easy question:

Who paid for your device?

If the user bought their device, you need to be exploring BYoD policies as your primary method of securing the network and enabling access. Unless you have a very clearly defined policy in place for device access, you can't just assume you have the right to disable half a user's device functions and then wipe it whenever you feel the need. Instead, focus your efforts on setting up rules that users should follow and containing their access to your data with access lists and user authentication. On the other hand, if the company paid for your tablet, then MDM is the likely solution in mind. Since the device belongs to the corporation, they are well within their rights to do what they would like with it. Use it just like you would a corporate laptop or an issued Blackberry instead of a personal iPhone. Don't be shocked if it gets wiped or random features get turned off due to company policy.
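The "who paid the bill" rule boils down to a deliberately simple sketch like this. The approach names and control lists are my own shorthand for the ideas above, not any vendor's feature matrix.

```python
# Who paid for the device determines the management approach:
# corporate assets get device-level control (MDM), personal assets get
# network-level control (BYoD policy).
def management_approach(paid_by):
    if paid_by == "company":
        # Corporate asset: full device control is fair game.
        return ("MDM", ["remote wipe", "feature restrictions", "global proxy"])
    if paid_by == "user":
        # Personal asset: control access at the network edge instead.
        return ("BYoD policy", ["802.1X authentication", "access lists", "captive portal"])
    raise ValueError("unknown payer: " + paid_by)
```

Real environments need both sides of this function, plus the written use policy to back it up, but the branch point really is that simple.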

Tom’s Take

When it’s time to decide how best to manage your devices, make sure to pull out all those old credit card receipts.  If you want to enable MDM on all your corporate phones and tablets, be sure to check out http://enterpriseios.com/ for a list of all the features supported in a given MDM provider for both iOS and other OSes like Android or Blackberry.  If you didn’t get the bill for that tablet, then you probably want to get in touch with your wireless or network vendor to start exploring the options available for things like 802.1X authentication or captive portal access.  In particular, I like some of the solutions available from Aerohive and Aruba’s ClearPass.  You’re going to want both MDM and BYoD policies in your environment to be sure your devices are as useful as possible while still being safe and protecting your network.  Just remember to back it all up with a very clear, detailed written use policy to ensure there aren’t any legal ramifications down the road from a wiped device or a lost phone causing a network penetration.  That’s one bill you can do without.

The Google Glass Ceiling

I finally got around to watching the Charlie Rose interview with Sebastian Thrun. Thrun is behind a lot of very promising technology, not the least of which is the Google Glass project. Like many, I kind of put this out of my mind at the outset, dismissing it as a horrible fashion trend at best and a terribly complicated idea at worst. Having seen nothing beyond the concept videos that are currently getting lots of airplay, I was really tepid about the whole concept and wanted to see it baked a little more before I bought into the idea of carrying my smartphone around on my head instead of my hip. Then I read another interesting piece about the future of Google and Facebook. In and of itself, the blog post has some interesting prognostications about the directions that Facebook and Google are headed. But one particular quote caught my eye in both the interview and the future article. Thrun says that the most compelling use case for Google Glass that they can think of right now is taking pictures and sharing them with people on Google+. Charlie Rose even asked about other types of applications, like augmented reality. Thrun dismissed these in favor of talking about how easy it is to take pictures by blinking and nodding your head. Okay, I'm going to have to take a moment here…

Sebastian Thrun, have you lost your mind?!?

Seriously. You have a project sitting on your ears that has the opportunity to change the way that people like me view the world, and the best use case you can think of today is taking pictures of ice cream and posting it to a dying social network? Does. Not. Compute. Honestly, I can't even begin to describe how utterly dumbstruck I am by this. After spending a little more time looking into Google Glass, I'm giddy with anticipation at what I could do with this kind of idea. However, it appears that the current guardians of the technology see fit to shoehorn this paradigm-shifting concept into a camera case.

When I think of augmented reality applications, I think of the astronomy apps I see on the iPad that let me pick out constellations with my kids.  I can see the ones in the Southern Hemisphere just by pointing my Fruity Tablet at the ground.  Think of programs like Word Lens that allow me to instantly translate signs in a foreign language into something I can understand.  That’s the technology we have today that you can buy from the App Store.  Seriously.  No funky looking safety glasses required.  Just imagine that technology in a form factor where it’s always available without the need to take out your phone or tablet.  That’s what we can do at this very minute.  That doesn’t take much imagination at all.  Google Glass could be the springboard that launches so much more.

Imagine having an instant portal to a place like Wikipedia where all I have to do is look at an object and I can instantly find out everything I need to know about it. No typing or dictating. All I need to do is glance at the TARDIS USB hub on my desk and I am instantly linked to the Wikipedia TARDIS page. Take the Word Lens idea one step further. Now, instead of only reading signs, let the microphone on the Google Glass pick up the foreign language being spoken and provide real-time translation in subtitles on the Glass UI. Instant understanding, with possibilities of translating back into the speaker's language and displaying phrases to respond with. How about the ability to display video on the UI for things like step-by-step instructions for disassembling objects or repairing things? I'd even love to have a Twitter feed displayed just outside my field of vision that I can scroll through with my eye movements. That way, I can keep up with what's important to me without needing to lift a finger. The possibilities are endless for something like this. If only you could see past the ability to post pointless pictures to your Picasa account.

There are downsides to Google Glass too. People are having a hard time interacting as it is today with the lure of instant information at their fingertips. Imagine how bad it will be when they don't have to make the effort of pulling out their phone. I can see lots of issues with people walking into doors or parked cars because they were too busy paying attention to their Glass information and not watching where they were walking. Google's web search page has made finding information a fairly trivial issue even today. Imagine how much lazier people will be if all they have to do is glance at the web search and ask "How many ounces are in a pound?" Things will no longer need to be memorized, only found. It's like my teachers telling me not to be reliant on a calculator for doing math. Now, everyone has a calculator on their phone.

Tom’s Take

In 2012, the amount of amazing technology that we take for granted astonishes me to no end. If you had told me in the 1990s that we would have a mini computer in our pocket that has access to the whole of human knowledge and allows me to communicate with my friends and peers around the world instantly, I'd have scoffed at your pie-in-the-sky dreams. Today, I don't think twice about it. I no longer need an alarm clock, GPS receiver, pocket camera, or calculator. Sadly, the kind of thinking that has allowed technology like this to exist doesn't appear to be applied to new concepts like Google Glass. The powers that be at GoogleX can't seem to understand the gold mine they're sitting on. Sure, maybe applying the current concepts of sharing pictures might help ease the transition of new users to this UI concept. I would hazard that people are going to understand what to do with Google Glass well beyond taking a snapshot of their lunch sushi and sharing it with their Foodies circle. Instead, show us the real groundbreaking stuff, like the ideas that I've already discussed. Go read some science fiction, or watch movies like The Terminator, where the T-800s have a very similar UI to what you're developing. That's where people want to see the future headed. Not reinventing the Polaroid camera for the fifth time this year. And if you're having that much trouble coming up with cool ideas or ways to sell Google Glass to the nerds out there today, give me a call. I can promise you we'll blast through that glass ceiling you've created for yourself like the SpaceX Dragon lifting off for the first time. I may not be able to code as well as other people at GoogleX, but I can promise you I've got the vision for your project.