The IPv6 Revolution Will Not Be Broadcast


There are days when IPv6 proponents have to feel like Chicken Little. Ever since the final allocation of the last /8s to the RIRs over four years ago, we’ve been saying that the switch to IPv6 needs to happen soon before we run out of IPv4 addresses to allocate to end users.

As of yesterday, ARIN (@TeamARIN) has 0.07 /8s left to allocate to end users. What does that mean? Realistically, according to this ARIN page, that means there are 3 /21s left in the pool, plus around 450 /24s. The availability of those addresses is even in doubt, as there are quite a few requests in the pipeline. I’m sure ARIN is now worried that a request they can’t fulfill is already sitting in their queue.

The sky has indeed fallen for IPv4 addresses. I’m not going to sit here and wax alarmist. My stance on IPv6 and the need to transition is well known. What I find very interesting is that the transition is not only well underway, but it may have found the driver needed to see it through to the end.

Mobility For The Masses

I’ve said before that the driver for IPv6 adoption is going to be an IPv6-only service that forces providers to adopt the standard because of customer feedback. Greed is one of the two most powerful motivators. Fear is the other. And the fear of millions of mobile devices roaming around with no address support is a scenario no provider wants.

Mobile providers are starting to move to IPv6-only deployments for mobile devices. T-Mobile does it. So does Verizon. If a provider doesn’t already offer IPv6 connectivity for mobile devices, you can be assured it’s on their roadmap for adoption soon. The message is clear: IPv6 is important in the fastest growing segment of device adoption.

Making mobile devices the sword for IPv6 adoption is very smart. When we talk about the barriers to entry for IPv6 in the enterprise we always talk about outdated clients. There are a ton of devices that can’t or won’t run IPv6 because of an improperly built networking stack or software that was written before the dawn of DOS. Accounting for those systems, which are usually in critical production roles, often takes more time than the rest of the deployment.

Mobile devices are different. The culture around mobility has created a device refresh cycle that is measured in months, not years. Users crave the ability to upgrade to the latest device as soon as it goes on sale. Where mobile service providers used to make users wait 24 months for a device refresh, we now see them offering 12-month refreshes for a significantly increased device cost. By all indications, those plans are booming. Users want the latest and greatest devices.

With users eager to upgrade every year, the age of the device is no longer a barrier to IPv6 adoption. Since the average device in the wild is almost certainly less than three years old, providers can be sure the capability is there to support IPv6. That makes it much easier to enable support for it across the entire install base of handsets.

The IPv6 Trojan Horse

Now that providers have a wide range of IPv6-enabled devices on their networks, the next phase of IPv6 adoption can sneak into existence. We have a lot of IPv6-capable devices in the world, but very little IPv6-driven content. Aside from some websites being reachable over IPv6, we don’t really have any services that depend on IPv6.

Thanks to mobile, we have a huge install base of devices that we now know are IPv6 capable. Since the software for these devices is largely determined by the user base through third-party app development, this is the vector for widespread adoption of IPv6. Rather than trumpeting the numbers, mobile providers and developers can quietly enable IPv6 without anyone even realizing it.

Most app resources must live in the cloud by design. Lots of them live in places like AWS. Service providers enable translation gateways at their edge to translate IPv6 requests into IPv4 requests. What would happen if the providers started offering native IPv6 connectivity to AWS? How would app developers react if there was a faster, native connectivity option to their resources? Given the huge focus on speed for mobile applications, do you think they would continue using a method that forces them to use slow translation devices? Or would they jump at the chance to speed up their apps?
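Here’s a minimal sketch of why native connectivity wins by default once it exists. Most dual-stack client code never names an address family at all; getaddrinfo() orders candidate addresses per RFC 6724, which prefers working native IPv6 over IPv4, so code like this “upgrades” itself the moment a provider turns IPv6 on. The hostname is just an example.

```python
# A minimal dual-stack client sketch, nothing provider-specific.
# getaddrinfo() sorts candidates per RFC 6724, preferring native IPv6.
import socket

def connect_preferred(host, port):
    # AF_UNSPEC asks for both AAAA and A records; the OS sorts them.
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(addr)
            return sock  # first address family that works wins
        except OSError:
            continue
    raise OSError("no usable address")

s = connect_preferred("www.example.com", 443)
print("connected over", "IPv6" if s.family == socket.AF_INET6 else "IPv4")
s.close()
```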

And that’s the trojan horse. The app itself spurs adoption of IPv6 without the user even knowing what’s happened. When’s the last time you needed to know your IP on a mobile device? Odds are very good it would take you a while to even find out where that information is stored. The app-driven focus of mobile devices has eliminated the need for visibility for things like IP addresses. As long as the app connects, who cares what addressing scheme it’s using? That makes shifting the underlying infrastructure from IPv4 to IPv6 fairly inconsequential.


Tom’s Take

IPv6 adoption is going to happen. We’ve reached the critical tipping point where the increased cost of acquiring IPv4 resources will outweigh the cost of creating IPv6 connectivity. Thanks to the focus on mobile technologies and third-party applications, the IPv6 revolution will happen quietly at night when IPv6 connectivity to cloud resources becomes a footnote in some minor point update release notes.

Once IPv6 connectivity is enabled and preferred in mobile applications, the adoption numbers will go up enough that CEOs focused on Gartner numbers and keeping up with the Joneses will finally get off their collective laurels and start pushing enterprise adoption. Only then will the analyst firms start broadcasting the revolution.

Are We The Problem With Wearables?

Something, Something, Apple Watch.

Oh, yeah. There needs to be substance in a wearable blog post. Not just product names.

Wearables are the next big product category that is driving innovation. The advances being made in screen clarity, battery life, and component miniaturization are being felt across the rest of the device market. I doubt Apple would have been able to make the new MacBook logic board as small as it is without a few things learned from trying to cram transistors into a watch case. But are we, the people, sending the wrong messages about wearable technology?

The Little Computer That Could

If you look at the biggest driving factor behind technology today, it comes down to size. Technology companies are making things smaller and lighter with every iteration. If the words thinnest and lightest don’t appear in your presentation at least twice then you aren’t on the cutting edge. But is this drive because tech companies want to make things tiny? Or is it more that consumers are driving them that way?

Yes, people the world over are now complaining that technology should have other attributes besides size and weight. A large contingent says that battery life is now more important than anything else. But would you be okay with lugging around an extra pound of weight that equates to four more hours of typing time? Would you give up your 13-inch laptop in favor of a 17-inch model if the battery life were doubled?

People send mixed signals about the size and shape of technology all the time. We want it fast, small, light, powerful, and with the ability to run forever. Tech companies give us as much as they can, but tradeoffs must be made. Light and powerful usually means horrible battery life. Great battery life and low weight often means terrible performance. No consumer has ever said, “This product is exactly what I wanted with regards to battery, power, weight, and price.”

Where Wearables Dare

As Jony Ive said this week, “The keyboard dictated the size of the new MacBook.” He’s absolutely right. Laptops and desktops have a minimum size that is dictated by the screen and keyboard. Has anyone tried typing on a keyboard cover for an iPad? How about an iPad Mini cover? It’s a miserable experience, even if you don’t have sausage fingers like me. When the size of the device dictates the keyboard, you are forced to make compromises that impact user experience.

With wearables, the bar shifts away from input to usability. No wearable watch has a keyboard, virtual or otherwise. Instead, voice control is the input method. Spoken words drive communication beyond navigation. For some applications, like phone calls and text messages, this is preferred. But I can’t imagine typing a whole blog post or coding on a watch. Nor should I. The wearable category is not designed for hard-core computing use.

That’s where we’re getting it wrong. Google Glass was never designed to replace a laptop. Apple Watch isn’t going to replace an iPhone, let alone an iMac. Wearable devices augment our technology workflows instead of displacing them. Those fancy monocles you see in sci-fi movies aren’t the entire computer. They are just an interface to a larger processor on the back end. Shrinking a laptop to the size of a silver dollar just isn’t possible yet. If it were, we’d have one by now.

Wearables are designed to give you information at a glance. Google Glass allows you to see notifications easily and access information. Smart watches are designed to give notifications and quick, digestible snippets of need-to-know information. Yes, you do have a phone for that kind of thing. But my friend Colin McNamara said it best:

I can glance at my watch and get a notification without getting sucked into my phone


Tom’s Take

That’s what makes the wearable market so important. It’s not having the processing power of a Cray supercomputer on your arm or attached to your head. It’s having that power available when you need it, yet having the control to get information you need without other distractions. Wearables free you up to do other things. Like building or creating or simply just paying attention to something. Wearables make technology unobtrusive, whether it’s a quick text message or tracking the number of steps you’ve taken today. Sci-Fi is filled with pictures of amazing technology all designed to do one thing – let us be human beings. We drive the direction of product development. Instead of crying for lighter, faster, and longer all the time, we should instead focus on building the right interface for what we need and tell the manufacturers to build around that.


The Slippery Slope of Social Sign-In


At the recent Wireless Field Day 6, we got a chance to see a presentation from AirTight Networks about their foray into Social Wifi. The idea is that a business can offer free guest wifi for customers in exchange for a Facebook Like or by following the business on Twitter. AirTight has made the process very seamless by allowing an integrated Facebook login button. Users can just click their way to free wifi.

I’m a bit guarded about this whole approach. It has nothing to do with AirTight’s implementation. In fact, several other wireless companies are racing to have similar integration. It does have everything to do with the way that data is freely exchanged in today’s society. Sometimes more freely than it should be.

Don’t Forget Me

Facebook knows a lot about me. They know where I live. They know who my friends are. They know my wife and how many kids we have. While I haven’t filled out those fields, plenty of others have indicated things like political views and even more personal information like relationship status or sexual orientation. Facebook has become a social data dump for hundreds of millions of people.

For years, I’ve said that Facebook holds the holy grail of advertising – a searchable database of everything a given demographic “likes”. Facebook knows this, which is why they are so focused on growing their advertising arm. Every change to the timeline and every misguided attempt to “fix” their profile security has a single aim: convincing businesses to pay for access to your information.

Now, with social wifi, those businesses can get access to a large amount of data easily. When you create the API integration with Facebook, you can request a large number of discrete data points easily. It’s just a bunch of checkboxes. Having worked in IT before, I know the siren call that could cause a business owner to check every box he can, with the idea that it’s better to collect more data rather than less. It’s just harmless, right?
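To make that concrete, here’s a hedged sketch of what those checkboxes become on the wire: an OAuth authorization URL stuffed with permission scopes. The scope names below are illustrative of Facebook Login permissions of that era, not a verified current list, and the app ID and redirect URI are placeholders.

```python
# What "just a bunch of checkboxes" turns into: an OAuth dialog URL
# requesting a pile of profile scopes. Scope names are illustrative.
from urllib.parse import urlencode

SCOPES = [
    "public_profile",  # name, photo, basic info
    "email",
    "user_likes",      # everything the user has ever Liked
    "user_friends",
    "user_birthday",
]

params = {
    "client_id": "YOUR_APP_ID",                        # placeholder
    "redirect_uri": "https://example.com/wifi-login",  # placeholder
    "response_type": "token",
    "scope": ",".join(SCOPES),
}

print("https://www.facebook.com/dialog/oauth?" + urlencode(params))
```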

Give It Away Now

People don’t safeguard their social media permissions and data like they should. If you’ve ever gotten DM spam from a follower or watched a Facebook wall swamped with “on behalf of” postings you know that people are willing to sign over the rights to their accounts for a 10% discount coupon or a silly analytics game. And that’s after the warning popup telling the user what permissions they are signing away. What if the data collection is more surreptitious?

The country came unglued when it was revealed that a government agency was collecting metadata and other discrete information about people that used online services. The uproar led to hearings and debate about how far-reaching that program was. Yet many of those outraged people don’t think twice about letting a coffee shop have access to a wealth of data that would make the NSA salivate.

Providers are quick to say that there are ways to limit how much data is collected. It’s trivial to disable the ability to see how many children a user has. But what if that’s the data the business wants? Who is to say that Target or Walmart won’t collect that information for an innocent purpose today only to use it to target advertisements to users at a later date? That’s exactly the kind of thing people don’t think about.

Big data and our analytics integrations are allowing it to happen with ease today. The abundance of storage means we can collect everything and keep it forever without needing to worry about when we should throw things away. Ubiquitous wireless connectivity means we are never truly disconnected from the world. Services that we rely on to tell us about appointments or directions collect data they shouldn’t because it’s too difficult to decide how to dispose of it. It may sound a bit paranoid, but you would be shocked to see what people are willing to trade away without realizing it.


Tom’s Take

Given the choice between paying a few dollars for wifi access or “liking” a company’s page on Facebook, I’ll gladly fork over the cash. I’d rather give up something of middling value (money) instead of something more important to me (my identity). The key for vendors investigating social wifi is simple: transparency. Don’t just tell me that you can restrict the data that a business can collect. Show me exactly what data they are collecting. Don’t rely on the generalized permission prompts that Facebook and Twitter provide. If businesses really want to know how I voted in the last election, then the wifi provider has a social responsibility to tell me that before I sign up. If shady businesses are forced to admit they are overstepping their data collection bounds, then they might just change their tune. Let’s make technology work to protect our privacy for once.

Linux Lost The Battle But Won The War

I can still remember my first experience with Linux.  I was an intern at IBM in 2001 and downloaded the IBM Linux Client for e-Business onto a 3.5″ floppy and set about installing it to a test machine in my cubicle.  It was based on Red Hat 6.1.  I had lots of fun recompiling kernels, testing broken applications (thanks Lotus Notes), and trying to get basic hardware working (thanks deCSS).  I couldn’t help but think at the time that there was great potential in the software.

I’ve played with Linux on and off for the last twelve years.  SuSE, Novell, Ubuntu, Gentoo, Slackware, and countless other distros too obscure to rank on Google.  Each of them met needs the others didn’t.  Each tried to unseat Microsoft Windows as the predominant desktop OS.  Despite a range of options and configurability, they never quite hit the mark.  I think every year since 2005 has been the “Year of Desktop Linux”.  Yet year after year I see more Windows laptops out there and very few being offered with Linux installed from the factory.  It seems as though Linux might not ever reach the point of taking over the desktop.  Then I saw a chart that forced me to look at the battle in a new perspective:

[Chart: Android’s dominance in device shipments]

Consider that Android is based on kernel version 3.4 with some Google modifications.  That means it runs Linux under the hood, even if the interface doesn’t look anything like KDE or GNOME.  And it’s running on millions of devices out there.  Phones and tablets in the hands of consumers worldwide.  Linux doesn’t need to win the desktop battle any more.  It’s already ahead in the war for computing dominance.
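If you want to see that for yourself, here’s a trivial sketch.  Every Android device exposes the Linux kernel banner through /proc/version just like any other Linux box; run something like this in an on-device terminal, or read the same file with “adb shell cat /proc/version” from a connected computer.

```python
# Print the kernel banner: /proc/version exists on any Linux system,
# Android included, and names the Linux kernel version directly.
with open("/proc/version") as f:
    print(f.read().strip())

# Typical Android output of that era (illustrative):
#   Linux version 3.4.0-g1234567 (android-build@...) ...
```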

It happened not because Linux was a clearly superior alternative to Windows-based computing.  It didn’t happen because users finally got fed up with horrible “every other version” nonsense from Redmond.  It happened because Linux offered something Windows has never been able to give developers – flexibility.

I’ve said more than once that the inherent flexibility of Linux could be considered a detriment to desktop dominance.  If you don’t like your window manager you can trade it out.  Swap GNOME for Xfce or KDE if you prefer something different.  You can trade filesystems if you want.  You can pull out pieces of just about everything whenever you desire, even the kernel.  Without the mantra of forcing the user to accept what’s offered, people not only swap around at the drop of a hat but are also free to spin their own distro whenever they want.  As of this writing, Ubuntu has 72 distinct projects based on the core distro.  Is it any wonder people have a hard time figuring out what to install?

Android, on the other hand, has minimal flexibility when it comes to the OS.  Google lets the carriers put their own UI customizations in place, and the hacker community has spun some very interesting builds of their own.  But the rank and file mobile device user isn’t going to go out and hack their way to OS nirvana.  They take what’s offered and use it in their daily computing lives.  Android’s development flexibility means it can be installed on a variety of hardware, from low end mobile phones to high end tablets.  Microsoft has much more stringent rules for hardware running their mobile OS.  Android’s licensing model is also a bit more friendly (it’s hard to beat free).

If the market is really driving toward a model of mobile devices replacing larger desktop computing, then Android may have given Linux the lead that it needs in the war for computing dominance.  Linux is already the choice for appliance computing.  Virtualization hypervisors other than Hyper-V are either Linux under the hood or owe much of their success to Linux.  Mobile devices are dominated by Linux.  Analysts were so focused on how Linux was a subpar performer when it came to workstation mindshare that they forgot to see that the other fronts in the battle were being quietly lost by Microsoft.


Tom’s Take

I’m not going to jump right out there and say that Linux is going to take over the desktop any time soon.  It doesn’t have to.  With the backing of Google and Android, it can quietly keep right on replacing desktop machines as they die off and mobile devices start replicating that functionality.  While I spend time on my old desktop PC now, it’s mostly for game playing.  The other functions that I use computers for, like email and web surfing, are slowly being replaced by mobile devices.  Whether or not you realize it, Linux and *BSD make up a large majority of the devices that people use in everyday computing.  The hearts and minds of the people were won by Linux without unseating the king of the desktop.  All that remains is to see how Microsoft chooses to act.  With a lead like the one Android already has in the mobile market, the war might be over before we know it.

The Compost-PC Era


I realized the other day that the vibration motor in my iPhone 5s had gone out.  Thankfully, my device was still covered under warranty.  I set up an appointment to have it fixed at the nearest Apple store.  I figured I’d go in and they’d just pop in a new motor.  It is a simple repair according to iFixit.  I backed my phone up one last time as a precaution.  When I arrived at the store, it took no time to determine what was wrong.

What shocked me was that the Genius tech told me, “We’re just going to replace your whole phone.  We’ll send the old one off to get repaired.”  I was taken aback.  This was a $20 part that should have taken all of five minutes to pop in.  Instead, I got my phone completely replaced after just three months.  As the new phone synced from my last iCloud backup, I started thinking about what this means for the future of devices.

Bring Your Own Disposable

Most mobile devices are a wonder of space engineering.  Cramming an extra-long battery in with a vibrant color screen and enough storage to satisfy users is a challenge in any device.  Making it small enough and light enough to hold in the palm of your hand is even more difficult.  Compromises must be made.  Devices today are held together as much by glue and adhesive as they are by nuts and bolts and screws.  Gaining access to a device to repair a broken part is becoming all but impossible with each new generation.

I can still remember opening the case on my first PC to add a sound card and an Overdrive processor.  It was a bit scary but led to a career in repairing computers.  I’m downright terrified to pop open an iPhone.  The ribbon cables are so fragile that it doesn’t take much to render the phone unusable.  Even Apple knows this.  They are much more likely to have the repairs done in a separate facility rather than at the store.  Other than screen replacements, the majority of broken parts result in a new phone being given to the customer.  After all, it’s very easy to replace devices when the data is safe somewhere.

The Cloud Will Save It All

Use of cloud storage and backup is the key to the disposable device trend.  If you tell me that I’m going to lose my laptop and all the data on it, I’m going to get a little concerned.  If you tell me that I’m going to lose my phone, I don’t mind as much thanks to the cloud backup I have configured.  In the above case, my data was synced back to my phone as I shopped for a new screen protector.  Just like a corporate system, data loss is the biggest concern on a device.  Cloud storage is a lot like a roaming profile.  I can sync that data back to a fresh device and keep going after a short interruption.  Gone are the hours wasted reinstalling the operating system and software.

Why repair devices when they can easily be replaced at little cost?  Why should you pay someone to spend their time diagnosing a bad CPU or bad RAM when you can just unwrap a new mobile device, sync your profile and data, and move on with your project?  The implications for PC repair techs are legion.  As are the implications for manufacturers that create products that are easy to open and contain field replaceable parts.

Why go to all the extra effort of making a device that can be easily repaired if it’s much cheaper to just glue it together and recycle what parts you can after it breaks?  Customers have already shown their propensity to upgrade devices with every new cycle each year.  They’d rather buy everything new instead of upgrading the old to match.  That means making the device field repairable (or upgradable) is extra cost you don’t need.  Making devices that aren’t easily fixed in the field means you can spend less of your budget training people to repair them.  In fact, it’s just easier to have the customer send the device back to the manufacturing plant.


Tom’s Take

The cloud has enabled us to keep our data consistent between devices.  While it has helped blur the lines between desktop and mobile device, it has also helped blur the lines tying people to a specific device.  If I can have my phone or tablet replaced with almost no impact, I’m going to elect to have that done rather than finding replacement parts to keep the old one running just a bit longer.  It also means that after the useful parts are pulled out of those mildly broken devices, they will end up in the same landfill that analysts say will be filled with rejected desktop PCs.

Cisco CMX – Marketing Magic? Or Big Brother?


The first roundtable presenter at Interop New York was Cisco. Their Enterprise group always brings interesting technology to the table. This time, the one that caught my eye was the Connected Mobile Experience (CMX). CMX is a wireless mobility technology that allows a company to do some advanced marketing wizardry.

CMX uses your Cisco wireless network to monitor devices coming into the air space. They don’t necessarily have to connect to your wireless network for CMX to work. They just have to be probing for a network, which practically all devices do constantly. CMX can then push a message to the device. This message can be a simple “thank you” for coming or something more advanced like a coupon or notification to download a store-specific app. CMX can then store the information about that device, such as whether or not it joined the network, where it went, and how long it was there. This gives the company the ability to pull some interesting statistics about their customer base. Even if those customers never hop on the wireless network.
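To show how little a device has to do to be seen, here’s a minimal sketch of the kind of passive probe-request monitoring this class of product builds on. It assumes a Linux machine, root privileges, and a wireless card already in monitor mode (called “wlan0mon” here); it illustrates the technique, not Cisco’s implementation.

```python
# Passive 802.11 probe-request monitoring with scapy.
# Assumes Linux, root, and an interface already in monitor mode.
from scapy.all import sniff, Dot11ProbeReq, Dot11Elt

def handle(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt.addr2  # the client radio's MAC address
        ssid = "(broadcast)"
        if pkt.haslayer(Dot11Elt) and pkt[Dot11Elt].info:
            # The first tagged element in a probe request is the SSID.
            ssid = pkt[Dot11Elt].info.decode(errors="replace")
        print(f"{mac} probing for {ssid}")

# Every unassociated device walking past the antenna shows up here.
sniff(iface="wlan0mon", prn=handle, store=0)
```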

I have to be honest here. This kind of technology gives me a bit of the creeps. I understand that user tracking is the hot new thing in retail. Stores want to know where you went, how long you stayed there, and whether or not you saw an advertisement or a featured item. They want to know your habits so as to better sell to you. The accumulation of that data over time allows some patterns to emerge that can drive a retail operation’s decision-making process.

A Thought Exercise

Think about an average person. We’ll call him Mike. Mike walks four blocks from his office to the subway station every day after work. He stops at the corner about halfway between to cross a street. On that street just happens to be a coffee shop using something like CMX. Mike has a brand new phone that uses wifi and bluetooth and Mike keeps them on all the time. CMX can detect when the device comes into range. It knows that Mike stays there for about 2 minutes but never joins the network. It then moves out of the WLAN area. The data cruncher for the store wants to drive new customers to the store. They analyze the data and find that lots of people stay in the area for a couple of minutes. They equate this to people stopping to decide if they want to have a cup of coffee from the shop. They decide to create a CMX coupon push notification that pops up after one minute on devices that have been seen in the database for the last month. Mike will see a coupon for $1 off a cup of coffee the next time he waits for the light in front of the coffee shop.

That kind of reach is crazy. I keep thinking back to the scenes in Minority Report where the eye scanners would detect you looking at an advertisement and then target a specific ad based on your retina scan. You may say that’s science fiction. But with products like CMX, I can build a pretty complete profile of your behavior even if I don’t have a retina scan. Correlating information provides a clear picture of who you are without any real identity information. Knowing that someone likes to spend their time in the supermarket in the snack aisles and frozen food aisles and less time in the infants section says a lot. Knowing the route a given device takes through the store can help designers place high volume items in the back and force shoppers to take longer routes past featured items.


Tom’s Take

I’m not saying that CMX is a bad product. It’s providing functionality that can be of great use to retail companies. But, just like VHS recorders and BitTorrent, good ideas can often be used to facilitate things that aren’t as noble. I suggested to the CMX developers that they could implement some kind of “opt out” message that popped up if I hadn’t joined the wireless network in a certain period of time. I look at that as a way of saying to shoppers, “We know you aren’t going to join. Press the button and we’ll wipe out your device info.” It puts people at ease to know they aren’t being tracked. Even just showing them what you’re collecting is a good start. With the future of advertising and marketing focusing on instant delivery and data gathering for better targeting, I think products like CMX will be powerful additions. But great power requires even greater responsibility.

Tech Field Day Disclaimer

Cisco was a presenter at the Tech Field Day Interop Roundtable.  They did not ask for any consideration in the writing of this review nor were they promised any.  The conclusions and analysis contained in this post are mine and mine alone.

Why An iPhone Fingerprint Scanner Makes Sense


It’s hype season again for the Cupertino Fruit and Phone Company.  We are mere days away from a press conference that should reveal the specs of a new iPhone, likely to be named the iPhone 5S.  As is customary before these events, the public is treated to all manner of Wild Mass Guessing as to what will be contained in the device.  Will it have dual flashes?  Will it have a slow-motion camera?  NFC?  802.11ac?  The list goes on and on.  One of the most spectacular rumors comes in a package the size of your thumb.

Apple quietly bought a company called AuthenTec last year.  AuthenTec made fingerprint scanners for a variety of companies, including those that included the technology in some Android devices.  After the $365 million acquisition, AuthenTec disappeared into a black hole.  No one (including Apple) said much of anything about them.  Then a few weeks ago, a patent application was revealed that came from Apple and included fingerprint technology from AuthenTec.  This sent the rumor mill into overdrive.  Now all signs point to a convex sapphire home button that contains a fingerprint scanner that will allow iPhones to use biometrics for security.  A developer even managed to ferret out a link to a BiometricKitUI bundle in one of the iOS 7 beta releases (which was quickly removed in the next beta).

Giving Security The Finger

I think adding a fingerprint scanner to the hardware of an iDevice is an awesome idea.  Passcode locks are good for a certain amount of basic device security, but the usefulness of a passcode is inversely proportional to its security level.  People don’t make complex passcodes because they take far too long to type in.  If you make a complex alphanumeric code, typing the code in quickly one-handed isn’t easy.  That leaves most people choosing a 4-digit code or forgoing one altogether.  That doesn’t bode well for people whose phones are lost or stolen.
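Some back-of-the-envelope math shows how steep that tradeoff is.  The combination counts below are exact; the brute-force rate of ten guesses per second is a made-up figure, used only to show the scale difference.

```python
# Keyspace math for the passcode tradeoff described above.
candidates = {
    "4-digit PIN": 10 ** 4,
    "8-char alphanumeric": 62 ** 8,  # [a-zA-Z0-9]
}

for name, space in candidates.items():
    days = space / 10 / 86400  # hypothetical 10 guesses/sec, in days
    print(f"{name}: {space:,} combinations, ~{days:,.0f} days to exhaust")
```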

Apple has already publicly revealed that it will include enhanced security in iOS 7 in the form of an activation lock that prevents a thief from erasing the phone and reactivating it for themselves.  This makes sense in that Apple wants to discourage thieves.  But that step only makes sense if you consider that Apple wants to beef up the device security as well.  A biometric fingerprint scanner is a way of inputting a unique unlock code quickly.  Enabling this technology on a new phone should show a sharp increase in the number of users that have enabled an unlock code (or finger, in this case).

Not all people think fingerprint scanners are a good idea.  A link from Angelbeat says that Apple should forget about the finger and instead use a combination of picture and voice to unlock the phone.  The writer says that this would provide more security because it requires your face as well as your voice.  The writer also says that it’s more convenient than taking a glove off to use a finger in cold weather.  I happen to disagree on a couple of points.

A Face For Radio

Facial recognition unlock for phones isn’t new.  It’s been in Android since the release of Ice Cream Sandwich.  It’s also very easy to defeat.  This article from last year talks about how flaky the system is unless you provide it several pictures to reference from many different angles.  This video shows how snapping a picture on a different phone can easily fool the facial recognition.  And that’s only the first video of several that I found on a cursory search for “Android Facial Recognition”.  I could see this working against the user if the phone is stolen by someone that knows their target.  Especially if there is a large repository of face pictures online somewhere.  Perhaps in a “book” of “faces”.

Another issue I have is Siri.  As far as I know, Siri can’t be trained to recognize a user’s voice.  In fact, I don’t believe Siri can distinguish one user from another at all.  To prove my point, go pick up a friend’s phone and ask Siri to find something.  Odds are good Siri will comply even though you aren’t the phone’s owner.  In order to defeat the old, unreliable voice command systems that have been around forever, Apple made Siri able to recognize a wide variety of voices and accents.  In order to cover that wide use case, Apple had to sacrifice resolution of a specific voice.  Apple would have to build in a completely new set of Siri APIs that ask a user to speak a specific set of phrases in order to build a custom unlock code.  Based on my experience with those kinds of old systems, if you didn’t utter the phrase exactly the way it was originally recorded it would fail spectacularly.  What happens if you have a cold?  Or there is background noise?  Not exactly easier than putting your thumb on a sensor.

Don’t think that means that fingerprints are infallible.  The Mythbusters managed to defeat an unbeatable fingerprint scanner in one episode.  Of course, they had access to things like ballistics gel, which isn’t something you can pick up at the corner store.  Biometrics are only as good as the sensors that power them.  They also serve as a deterrent, not a complete barrier.  Lifting someone’s fingerprints isn’t easy and neither is scanning them into a computer to produce a sharp enough image to fool the average scanner.  The idea is that a stolen phone with a biometric lock will simply be discarded and a different, more vulnerable phone would be exploited instead.


Tom’s Take

I hope that Apple includes a fingerprint scanner in the new iPhone.  I hope it has enough accuracy and resolution to make biometric access easy and simple.  That kind of implementation across so many devices will drive the access control industry to take a new look at biometrics and begin integrating them into more products.  Hopefully that will spur things like home door locks, vehicle locks, and other personal devices to begin using these same kinds of sensors to increase security.  Fingerprints aren’t perfect by any stretch, but they are the best option of the current generation of technology.  One day we may reach the stage of retinal scanners or brainwave pattern matching for security locks.  For now, a fingerprint scanner on my phone will get a “thumbs up” from me.

Why I Won’t Be Getting Google Glass


You may recall a few months back when I wrote an article talking about Google Glass and how I thought that the first generation of this wearable computing device was aimed way too low in terms of target applications. When Google started a grassroots campaign to hand out Glass units to social media influencers, I retweeted my blog post with the hashtag #IfIHadGlass with the idea that someone at Google might see it and realize they needed to set their sights higher. Funny enough, someone at Google did see the tweet and told me that I was in the running to be offered a development unit of Glass. All for driving a bit of traffic to my blog.

About a month ago, I got the magic DM from Google Glass saying that I could go online and request my unit along with a snazzy carrying case and a sunglass lens if I wanted. I only had to pony up $1500US for the privilege. Oh, and I could only pick it up at a secured Google facility. I don’t even know where the closest one of those to Oklahoma might be. After weighing the whole thing carefully, I made my decision.

I won’t be participating in generation one of Google Glass.

I had plenty of reasons. I’m not averse to participating in development trials. I use beta software all the time. I signed up for the last wave of Google CR-48 Chromebooks. In fact, I still use that woefully underpowered CR-48 to this day. But Google Glass represents something entirely different from those beta opportunities.

From Entry to Profit

Google isn’t creating a barrier to entry through their usual methods of restricting supply or making the program invite only. Instead, they are trying to restrict Glass users to those with a spare $1500 to drop on a late alpha/early beta piece of hardware. I also think they are trying to recoup the development costs of the project via the early adopters. Google has gone from being an awesome development shop to a company acutely aware of the bottom line. Google has laid down some very stringent rules to determine what can be shown on Glass. Advertising is verboten. Anyone want to bet that Google finds a way to work AdWords in somewhere? If you are relying on your tried-and-true user base of developers to recover your costs before you even release the product to the masses, you’ve missed big time.

Eye of the Beholder

One of the other things that turned me off about the first generation of Glass is the technology not quite being where I thought it would be. After examining what Glass is capable of doing from a projection standpoint, many of my initial conceptions about the unit are way off. I suppose that has a lot to do with what I thought Google was really working on. Instead of finding a way to track eye movement inside of a specific area and deliver results based on where the user’s eye is focused, Google instead chose to simply project a virtual screen on the user’s eye slightly off center from the field of vision. That’s a great win for version one. But it doesn’t really accomplish what I thought Google Glass should do. The idea of a wearable eyeglass computer isn’t that useful to me if the field of vision is limited to a camera glued to the side of a pair of eyeglass frames. Without the ability to track the eye movements of a user it’s simply not possible to filter the huge amount of information being taken in by the user. If Google could implement a function to see what the user is focusing on, I’m sure that some companies would pay *huge* development dollars to track that information or run some kind of augmented reality advertisement aimed at whatever the user is looking at. Just go and watch Minority Report if you want to know what I’m thinking about.

Mind the Leash

According to my friend Blake Krone (@BlakeKrone), who just posted his first Google Glass update, the unit is great for taking pictures and video without the need to dig out a camera or take your eyes off the subject for more than the half second it takes to activate the Glass camera with a voice command.  Once you’ve gotten those shiny new pictures ready to upload to Google+, how are you going to do it?  There’s the rub in the first generation Glass units.  You have to tether Glass to some kind of mobile hotspot in order to be able to upload photos outside of a WiFi hotspot.  I guess trying to cram a cellular radio into the little plastic frame was more than the engineers could muster in the initial prototype.  Many will stop me here and interject that WiFi hotspot access is fairly common now.  All you have to do is grab a cup of coffee from Starbucks or a milkshake from McDonalds and let your photos upload to GoogleSpace.  How does that work from a mountain top?  What if I had a video that I wanted to post right away from the middle of the ocean?  How exactly do you livestream video while skydiving over the Moscone Center during Google I/O?  Here’s a hint:  You plant engineers on the roof with parabolic dishes to broadcast WiFi straight up in the air.  Not as magical when you strip all the layers away.  For me, the need to upgrade my data plan to include tethering just so I could upload those pics and videos outside my house was another non-starter.  Maybe the second generation of Glass will have figured out how to make a cellular radio small enough to fit inside a pair of glasses.

Tom’s Take

Google Glass has made some people deliriously happy. They have a computer strapped to their face and they are hacking away to create applications that are going to change the way we interact with software and systems in general. Those people are a lot smarter than me. I’m not a developer. I’m not a visionary. I just call things like I see them. To me, Google Glass was shoved out the door a generation too early to be of real use. It was created to show that Google is still on the cutting edge of hardware development even though no one else was developing wearable computing. On the other hand, Google did paint a huge target on their face. When the genie came out of the bottle, other companies like Apple and Pebble started developing their own take on wearable computing. Sure, it’s not as striking as a pair of sci-fi goggles. But evolutionary steps here lead to the slimming down of technology to the point where those iPhones and Samsung Galaxy S 4 Whatevers can fit comfortably into the frame of any designer eyeglasses. And that’s when the real money is going to be made. Not by gouging developers or requiring your users to be chained to a smartphone.

If you want to check out what Glass looks like from the perspective of someone that isn’t going to wear them in the shower, check out Blake’s Google Glass blog at http://FromEyeLevel.com

iOS 7 and Labels


Apple is prepping the release of iOS 7 to their users sometime in the next couple of months. The developers are already testing it out to find bugs and polish their apps in anticipation of the user base adopting Jonathan Ive’s vision for a mobile operating system. In many ways, it’s still the same core software we’ve been using for many years now with a few radical changes to the look and feel. The icons and lack of skeuomorphism are getting the most press. But I found something that I think has the ability to be even bigger than that.

The user interface (UI) elements in the previous iOS builds all look very similar. This is no doubt due to the influence of Scott Forstall, the now-departed manager of iOS. The glossy buttons and switches looked gorgeous back in 2007 when the iPhone was first released. But all UI evolves over time. Some evolve faster than others. Apple hit a roadblock because of those very same buttons. They were all baked into the core UI. Changing them was like trying to correct a misspelled word in a stone carving.  It takes months of planning to make even the smallest of changes.  And those changes have to be looked at on a massive scale to avoid causing issues in the rest of the OS.

iOS 7 is different to me.  Look at this pic of an incoming call and compare it with the same screen in iOS 6:

[Screenshot: incoming call screen in iOS 7]

[Screenshot: incoming call screen in iOS 6]

The iOS 6 picture has buttons.  The iOS 7 picture is different.  Instead of having chiseled buttons, it looks like the Answer and Decline buttons have been stuck to the screen with labels.  That’s not the only place in the UI that has a label-like appearance.  Sending a new iMessage or text to someone in the Messages app looks like applying a stamp to a piece of paper.  Taking all that into consideration, I think I finally understand what Ive is trying to do with this UI shift in iOS 7.

Labels are easy to reapply.  You just peel them off and stick them back on.  Unlike the chiseled-in-stone button UI, a label can quickly and easily be reconfigured or replaced if it starts to look dated.  Apple made mention of this in Ive’s iOS 7 video where he talked about creating “hierarchical layers (to) establish order“.  Ive commented that this approach gives depth to the OS.  I think he’s holding back on us.

Jonathan Ive created UI layers in the OS so he can change them out more quickly.  Think about it.  If you only have to change a label in an app or change the way they are presented on screen, it allows you to make more rapid changes to the way the OS looks.  If the layers are consistent and draw from the same pool of resources, it allows you to skin the OS however you want with minimal effort.  Ive wasn’t just trying to scrub away the accumulation of Scott Forstall’s ideas about the UI.  He wanted to change them and make the UI so flexible that the look can be updated in the blink of an eye.  That gives him the ability to change elements at will without the need to overhaul the system.  That kind of rapid configurability gives Apple the chance to keep things looking fresh and accommodate changing tastes.
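Here’s a toy sketch of the idea, emphatically not Apple’s actual code: when widgets look up their style from a shared theme layer at draw time instead of baking it in, reskinning the whole OS comes down to swapping one table.

```python
# A toy illustration of label-like UI layers: style is resolved at
# render time from a shared theme, never baked into the widget.
THEMES = {
    "ios6": {"button": "glossy, chiseled, heavy border"},
    "ios7": {"button": "flat text label, tinted, no border"},
}

class Button:
    def __init__(self, title):
        self.title = title

    def draw(self, theme):
        # The widget knows nothing about its look until draw time.
        print(f"[{self.title}] rendered as: {theme['button']}")

answer = Button("Answer")
answer.draw(THEMES["ios6"])  # the old chiseled look
answer.draw(THEMES["ios7"])  # reskinned without touching the widget
```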


Tom’s Take

I can almost hear people now saying that making future iOS releases able to be skinned is just another rip off of Android’s feature set.  In some ways, you are very right.  However, consider that Android was always designed with modularity in mind from the beginning.  Google wanted to give manufacturers and carriers the ability to install their own UI.  Think about how newsworthy the announcement of a TouchWiz-free Galaxy S4 was.  Apple has always considered the UI inviolate in all their products.  You don’t have much freedom to change things in iOS or in OS X.  Jonathan Ive is trying to set things up so that changes can be made more frequently in iOS.  Modders will likely find ways to insert their own UI elements and take these ideas in an ever more radical direction.  And all because Apple wanted to be able to peel off their UI pieces as easily as a label.

The Microsoft Office Tablet

I’ve really tried to stay out of the Tablet Wars.  I have a first generation iPad that I barely use any more.  My kids have co-opted it from me for watching on-demand TV shows and playing Angry Birds.  Since I spend most of my time typing blog posts or doing research, I use my laptop more than anything else.  When the Surface RT and Surface Pro escaped from the wilds of Redmond I waited and watched.  I wanted to see what people were going to say about these new Microsoft tablets.  It’s been about 4 months since the release of the Surface Pro and similar machines from vendors like Dell and Asus.  I’ve been slowly asking questions and collecting information about these devices.  And I think I’ve finally come to a realization.

The primary reason people want to buy a Surface tablet is to run Microsoft Office.

Here’s the setup.  Everyone that expressed an interest in the Pro version of the Surface (or the Latitude 10 from Dell) was asked one question by me: What is the most compelling feature of the Surface Pro for you?  The responses that I got back were overwhelming in their similarity.

1.  I want to use Microsoft Office on my tablet.

2.  I want to run full Windows apps on my tablet.

I never heard anything about portability, power, user interface, or application support (beyond full Windows apps).  I specifically excluded the RT model of the Surface from my questions because of the ARM processor and the reliance on software from the Windows App Store.  The RT functions more like Apple/Android tablets in that regard.

This made me curious.  The primary goal of Surface users is to be able to run Office?  These people have basically told me that the only reason they want to buy a tablet is to use an office suite.  One that isn’t currently available anywhere else for mobile devices.  One that has been rumored to be released on other platforms down the road.  While it may be a logical fallacy, it appears that Microsoft risks invalidating a whole hardware platform because of a single application suite.  If they end up releasing Office for iOS/Android, people would flee from the Surface to the other platforms, according to the info above.  Ergo, the only purpose of the Surface appears to be to run one application.  Which is why I’ve started calling it the Microsoft Office Tablet.  Then I started wondering about the second most popular answer in my poll.

Making Your Flow Work

As much as I’ve tried not to use the word “workflow” before, I find that it fits in this particular conversation.  Your workflow is more than just the applications you utilize.  It’s how you use them.  My workflow looks totally different from everyone else’s even though I use similar applications.  I use email and word processing for my own purposes.  I write a lot, so a keyboard of some kind is important to my workflow.  I don’t do a lot of graphic design, so a pen input tablet isn’t really a big deal to me.  The list goes on and on, but you see that my needs are my own and not those of someone else.  Workflows may be similar, but not identical.  That’s where the dichotomy comes into play for me.

When people start looking at using a different device for their workflow, they have to make adjustments of some kind.  Especially if that device is radically different from one they’ve been using before.  Your phone is different from a tablet, and a tablet is different from a laptop.  Even a laptop is different from a desktop, but those two are more similar than most.  When the time comes to adjust your workflow to a new device, there are generally two categories of people:

1.  People who adjust their workflow to the new device.

2.  People who expect the device to conform to their existing workflow.

For users of the Apple and Android tablets, option 1 is pretty much the only option you’ve got.  That’s because the workflow you’ve created likely can’t be easily replicated between devices.  Desktop apps don’t run on these tablets.  When you pick up an iPad or a Galaxy Tab you have to spend time finding apps to replicate what you’ve been doing previously.  Note taking apps, web browsing apps, and even more specialized apps like banking or ebook readers are very commonly installed.  Your workflow becomes constrained to the device you’re using.  Things like on-screen keyboards or lack of USB ports become bullet points in workflow compatibility.  On occasion, you find that a new workflow is possible with the device.  The prime example I can think of is using the camera on a phone in conjunction with a banking app to deposit checks without needing to take them into the bank.  That workflow would have been impossible just a couple of years ago.  With the increase in camera phone resolution, high speed data transfer, and secure transmission of sensitive data made possible by device advancements we can now picture this new workflow and easily adapt it because a device made it possible.

The other category is where the majority of Surface Pro users come in.  These are the people that think their workflow must work on any device they use.  Rather than modify what they’re doing, they want the perfect device to do their stuff.  These are the people that use a tablet for about a week and then move on to something different because “it just didn’t feel right.”  When they finally do find that magical device that does everything they want, they tend to abandon all other devices and use it exclusively.  That is, until they have a new workflow or a substantial modification to their existing workflow.  Then they go on the hunt for a new device that’s perfect for this workflow.

So long as your workflow is the immutable object in the equation, you are never going to be happy with any device you pick.  My workflows change depending on my device.  I browse Twitter and read email from my phone but rarely read books.  I read books and do light web surfing from a tablet but almost never create content.  I spend a lot of time creating content on my laptop but hate reading on it.  I’ve adjusted my workflows to suit the devices I’m using.

If the single workflow you need to replicate on your tablet revolves around content creation, I think it’s time to examine exactly what you’re using a tablet for.  Is it portability beyond what a laptop can offer?  Do you prefer to hunt and peck around a touch screen instead of a keyboard?  Are you looking for better battery life or some other function of the difference in hardware?  Or are you just wanting to look cool with a tablet in the “post PC world”?  That’s the primary reason I don’t use a tablet that much any more.  My workflows conform to my phone and my laptop.  I don’t find use in a tablet.  Some people love them.  Some people swear by them.  Just make sure you aren’t dropping $800-$1000 on a one-application device.

At the end of the day, work needs to get done.  People are going to use whatever device they want to use to get their stuff done.  Some want to do stuff and move on.  Others want to look awesome doing stuff or want to do their stuff everywhere no matter what.  Use what works best for you.  Just don’t be surprised if complaining about how this device doesn’t run my favorite data entry program gets a sideways glance from IT.

Disclaimer:  I own a first generation iPad.  I’ve tested a Dell Latitude 10.  I currently use an iPhone 4S.  I also use a MacBook Air.  I’ve used a Lenovo Thinkpad in the past as my primary workstation.  I’m not a hater of Microsoft or a lover of Apple.  I’ve found a setup that lets me get my job done.